diff --git a/.gitignore b/.gitignore
index 6f89417..5a62519 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,9 +1,17 @@
**/__pycache__/
venv/
-_generated/
dist/
build/
*.egg-info/
+.tox/
+*.spec
+
+# macOS
.DS_Store
-.idea/*
\ No newline at end of file
+# IntelliJ
+.idea/*
+
+# Created directories during runs
+*_generated/
+*logs/
\ No newline at end of file
diff --git a/README.md b/README.md
index c04e630..fb25f7f 100644
--- a/README.md
+++ b/README.md
@@ -1,16 +1,13 @@
-# precice-generator
+# preCICE Case Generate
-
+preCICE case-generate is a Python-based utility designed to simplify the generation of preCICE application cases.
+Such cases consist of the central `precice-config.xml` file, which defines the connections and relations between the
+involved solvers, as well as an `adapter-config.json` file for each solver.
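+
+For illustration, an `adapter-config.json` has the following shape (the participant, mesh, patch, and data names below are taken from the bundled examples):
+
+```json
+{
+  "participant_name": "Fluid",
+  "precice_config_file_name": "../precice-config.xml",
+  "interfaces": [
+    {
+      "mesh_name": "Fluid-Mesh",
+      "patches": ["interface"],
+      "read_data_names": ["Displacement"],
+      "write_data_names": ["Force"]
+    }
+  ]
+}
+```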
-
-
-
-
-## Project Overview
-
-The preCICE case-generate package is a Python-based utility designed to automate the generation of preCICE configuration files from
-simple YAML topology descriptions. This tool simplifies the process of setting up multi-physics simulations by transforming
-user-defined YAML configurations into preCICE-compatible XML configuration files.
+These files involve many complex elements and modifiers, which are often not needed.
+This tool introduces a simpler `topology.yaml` file that is easier to read and write
+and covers a wide range of `precice-config.xml` features at only a fraction of the complexity.
+An overview of the `topology.yaml` file can be found in `precicecasegenerate/schemas/README.md`.
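+
+A minimal sketch of such a `topology.yaml` (two participants with a bi-directional data exchange; participant, patch, and data names are illustrative):
+
+```yaml
+coupling-scheme:
+  max-time: 1e-1
+  time-window-size: 1e-3
+participants:
+  - name: Fluid
+    solver: SU2
+  - name: Solid
+    solver: Calculix
+exchanges:
+  - from: Fluid
+    from-patch: interface
+    to: Solid
+    to-patch: surface
+    data: Force
+    type: strong
+  - from: Solid
+    from-patch: surface
+    to: Fluid
+    to-patch: interface
+    data: Displacement
+    type: strong
+```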
## Key Features
@@ -20,16 +17,23 @@ user-defined YAML configurations into preCICE-compatible XML configuration files
- Comprehensive error logging and handling
- Simple command-line interface
-## Installation
+## Getting Started
### Prerequisites
-- Python 3.9 or
- higher ([workflow validated](https://github.com/precice/case-generate/actions/workflows/installation.yml)
- with 3.9, 3.10, 3.11 and 3.12)
+Required dependencies are:
+
+- Python ≥ 3.10
- pip
-- venv
-- (preCICE library)
+- git (for cloning the repository)
+- [preCICE Config Graph](https://github.com/precice/config-graph) (will be installed during the setup)
+- pyyaml
+- jsonschema
+
+Optional dependencies are:
+
+- pytest
+- [preCICE Config Check](https://github.com/precice/config-check)
### Manual Installation
@@ -37,7 +41,7 @@ user-defined YAML configurations into preCICE-compatible XML configuration files
```bash
git clone https://github.com/precice/case-generate.git
-cd precice-generator
+cd case-generate
```
2. Create a virtual environment
@@ -63,6 +67,11 @@ pip install build
pip install -e .
```
+Optional dependencies for testing can be installed via
+```bash
+pip install -e ".[dev]"
+```
+
### Using Setup Scripts
#### Unix/macOS
@@ -92,102 +101,54 @@ precice-case-generate --help
Generate a preCICE configuration file from a YAML topology called `topology.yaml`:
```bash
-precice-case-generate
+precice-case-generate path/to/topology.yaml
```
-or pass a topology file via argument;
-
-```bash
-precice-case-generate -f path/to/your/topology.yaml
-```
+The only required argument is the path to the topology file.
The `precice-case-generate` tool supports the following optional parameters:
-- `-f, --input-file`: Path to the input topology.yaml file.
- - **Default**: `./topology.yaml`
- - **Description**: Specify a custom topology file for configuration generation.
-
- `-o, --output-path`: Destination path for the generated folder.
- **Default**: `./_generated/`
- - **Description**: Choose a specific output location for generated files.
+ - **Description**: Choose a specific output location for the `_generated/` directory.
-- `-v, --verbose`: Enable verbose logging.
+- `-v, --verbose`: Enable verbose console logging.
- **Default**: Disabled
- **Description**: Provides detailed logging information during execution.
-- `--validate-topology`: Validate the input topology.yaml against the preCICE topology schema.
- - **Default**: Enabled
- - **Description**: Ensures the topology file meets the required schema specifications.
-
-Example usage:
-```bash
-precice-case-generate -f custom_topology.yaml -o /path/to/output -v
-```
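+
+Example usage combining the optional parameters above (paths are placeholders):
+
+```bash
+precice-case-generate path/to/topology.yaml -o path/to/output -v
+```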
> [!NOTE]
-> You should validate your files by running them through precice-tools and the
-> preCICE [config-checker](https://github.com/precice/case-generate) to avoid errors.
+> Although it is not expected, topology generation might fail or produce faulty configuration files.
+> This can happen when the `topology.yaml` combines multiple edge cases,
+> such as many data exchanges with the same `data`-tag.
+> The preCICE [Config Check](https://github.com/precice/config-check) is designed to detect and report such errors.
+
+### Examples
+
+Valid `topology.yaml` <-> application case pairs can be found in the `examples/` directory.
+They include the preCICE tutorials 1-4 as well as some more complex simulations.
### Configuration
1. Prepare a YAML topology file describing your multi-physics simulation setup.
2. Use the command-line interface to generate the preCICE configuration.
-3. The tool will create the necessary configuration files in the `_generated/` directory.
+3. preCICE Case Generate will create the necessary configuration files in the `_generated/` directory.
-## Creating Topology with MetaConfigurator
+## Creating Topologies with MetaConfigurator
You can create a topology for your preCICE simulation using the online MetaConfigurator.
We provide a preloaded schema to help you get started:
1. Open the MetaConfigurator with the preloaded
- schema: [MetaConfigurator Link](https://metaconfigurator.github.io/meta-configurator/?schema=https://github.com/precice/case-generate/blob/main/precicecasegenerate/schemas/topology-schema.json&settings=https://github.com/precice/case-generate/blob/main/precicecasegenerate/templates/metaConfiguratorSettings.json)
+ schema: [MetaConfigurator link](https://metaconfigurator.github.io/meta-configurator/?schema=https://github.com/precice/case-generate/blob/main/precicecasegenerate/schemas/topology-schema.json&settings=https://github.com/precice/case-generate/blob/main/precicecasegenerate/templates/metaConfiguratorSettings.json)
2. Use the interactive interface to define your topology:
- The preloaded schema provides a structured way to describe your simulation components
- - Add configuration details on the right side of the screen
3. Once complete, export your topology as a YAML file
- Save the generated YAML file
- - Use this file with the `precice-generator` tool to create your preCICE configuration
- - Validate the generated preCICE config
- with [config-checker](https://github.com/precice/config-check)
- - Use `precice-config-checker` and/or `precice-tools check` to validate the generated preCICE config
-
-### Benefits of Using MetaConfigurator
-
-- Visual, user-friendly interface
-- Real-time validation against our predefined schema
-- Reduces manual configuration errors
-- Simplifies topology creation process
-
-## Example Configurations
-
-### Normal Examples (0-5)
-
-Our project provides a set of progressively complex example configurations to help you get started with preCICE
-simulations:
-
-- Located in `examples/0` through `examples/5`
-- Designed for beginners and intermediate users
-- Each example includes:
- - A `topology.yaml` file defining the simulation setup
- - A `precice-config.xml` file
- - Subdirectories for different simulation components
-- Showcase simple, linear multi-physics scenarios
-- Ideal for learning basic preCICE configuration concepts
-
-### Expert Examples
-
-For advanced users, we offer more sophisticated configuration examples:
-
-- Located in `examples/expert`
-- Contain more advanced usage of topology options but extend the according example with the same number
-- Demonstrate advanced coupling strategies and intricate topology configurations
-- Targeted at users with a better understanding of preCICE
-
-> [!TIP]
-> Start with normal examples (0-5) and progress to expert examples as you become more comfortable with preCICE
-> configurations.
+ - Use `precice-case-generate` to create your preCICE application case and configuration files
+ - Validate the generated preCICE config with [config-checker](https://github.com/precice/config-check)
## Documentation
@@ -207,10 +168,6 @@ Alongside it, you will find `README.md`, which explains the topology's parameter
- Ensure all dependencies are correctly installed
- Verify the format of your input YAML file
-- Check the generated logs for detailed error information
-
-## Acknowledgements
+- Check the generated logs (`./.logs`) for detailed process information
-This project was started with code from the [preCICE controller](https://github.com/precice/controller) repository.
-The file `format_precice_config.py` was taken
-from [preCICE pre-commit hook file](https://github.com/precice/precice-pre-commit-hooks/blob/main/format_precice_config/format_precice_config.py)
+If all else fails, open an issue describing the problem you are encountering.
\ No newline at end of file
diff --git a/examples/0/README.md b/examples/0/README.md
deleted file mode 100644
index eb7f297..0000000
--- a/examples/0/README.md
+++ /dev/null
@@ -1,8 +0,0 @@
-This examples represents the most basic configuration. It was directly inspired by preCICE's workshop example for beginners to start learning preCICE.
-
-Since it contains only a uni-directional data exchange and only data point, its not a fully functional configuration.
-
-Inspired by: https://github.com/precice-forschungsprojekt/precice-generator/pull/55 and probably https://precice.org/configuration-coupling-mesh-exchange.html#example-configuration
-
-
-
diff --git a/examples/0/config_graph.png b/examples/0/config_graph.png
deleted file mode 100644
index ad27af0..0000000
Binary files a/examples/0/config_graph.png and /dev/null differ
diff --git a/examples/0/fluid-su2/adapter-config.json b/examples/0/fluid-su2/adapter-config.json
deleted file mode 100644
index 37443dd..0000000
--- a/examples/0/fluid-su2/adapter-config.json
+++ /dev/null
@@ -1,13 +0,0 @@
-{
- "participant_name": "Fluid",
- "precice_config_file_name": "../precice-config.xml",
- "interfaces": [
- {
- "mesh_name": null,
- "patches": [],
- "write_data_names": [
- "Force"
- ]
- }
- ]
-}
diff --git a/examples/0/precice-config.xml b/examples/0/precice-config.xml
deleted file mode 100644
index 01053d7..0000000
--- a/examples/0/precice-config.xml
+++ /dev/null
@@ -1,47 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/examples/0/solid-calculix/adapter-config.json b/examples/0/solid-calculix/adapter-config.json
deleted file mode 100644
index cbe3039..0000000
--- a/examples/0/solid-calculix/adapter-config.json
+++ /dev/null
@@ -1,15 +0,0 @@
-{
- "participant_name": "Solid",
- "precice_config_file_name": "../precice-config.xml",
- "interfaces": [
- {
- "mesh_name": "Solid-Mesh",
- "patches": [
- "surface"
- ],
- "read_data_names": [
- "Force"
- ]
- }
- ]
-}
diff --git a/examples/0/topology.yaml b/examples/0/topology.yaml
deleted file mode 100644
index a467de7..0000000
--- a/examples/0/topology.yaml
+++ /dev/null
@@ -1,14 +0,0 @@
-coupling-scheme:
- display_standard_values: false
-participants:
- - name: Fluid
- solver: SU2
- - name: Solid
- solver: Calculix
-exchanges:
- - from: Fluid
- from-patch: interface
- to: Solid
- to-patch: surface
- data: Force
- type: strong
diff --git a/examples/0/vis.png b/examples/0/vis.png
deleted file mode 100644
index 12aa087..0000000
Binary files a/examples/0/vis.png and /dev/null differ
diff --git a/examples/1/README.md b/examples/1/README.md
deleted file mode 100644
index a2ecf42..0000000
--- a/examples/1/README.md
+++ /dev/null
@@ -1,8 +0,0 @@
-This is a standard and very basic example of a preCICE configuration with 2 participants and simple bi-directional data exchange.
-
-The coupling scheme uses standard display values for properties like max-time, etc.
-
-Inspired by: https://github.com/precice-forschungsprojekt/precice-generator/pull/55 and probably https://precice.org/configuration-coupling-mesh-exchange.html#example-configuration
-
-
-
diff --git a/examples/1/config_graph.png b/examples/1/config_graph.png
deleted file mode 100644
index cc9de58..0000000
Binary files a/examples/1/config_graph.png and /dev/null differ
diff --git a/examples/1/fluid-su2/precice-adapter-config.json b/examples/1/fluid-su2/precice-adapter-config.json
deleted file mode 100644
index 8b7ef35..0000000
--- a/examples/1/fluid-su2/precice-adapter-config.json
+++ /dev/null
@@ -1,18 +0,0 @@
-{
- "participant_name": "Fluid",
- "precice_config_file_name": "../precice-config.xml",
- "interfaces": [
- {
- "mesh_name": "Fluid-Mesh",
- "patches": [
- "interface"
- ],
- "read_data_names": [
- "Displacement"
- ],
- "write_data_names": [
- "Force"
- ]
- }
- ]
-}
diff --git a/examples/1/image.png b/examples/1/image.png
deleted file mode 100644
index e9e8b9c..0000000
Binary files a/examples/1/image.png and /dev/null differ
diff --git a/examples/1/precice-config.xml b/examples/1/precice-config.xml
deleted file mode 100644
index cb8b623..0000000
--- a/examples/1/precice-config.xml
+++ /dev/null
@@ -1,59 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/examples/1/solid-calculix/precice-adapter-config.json b/examples/1/solid-calculix/precice-adapter-config.json
deleted file mode 100644
index 110836c..0000000
--- a/examples/1/solid-calculix/precice-adapter-config.json
+++ /dev/null
@@ -1,18 +0,0 @@
-{
- "participant_name": "Solid",
- "precice_config_file_name": "../precice-config.xml",
- "interfaces": [
- {
- "mesh_name": "Solid-Mesh",
- "patches": [
- "surface"
- ],
- "read_data_names": [
- "Force"
- ],
- "write_data_names": [
- "Displacement"
- ]
- }
- ]
-}
diff --git a/examples/2/README.md b/examples/2/README.md
deleted file mode 100644
index e792318..0000000
--- a/examples/2/README.md
+++ /dev/null
@@ -1,6 +0,0 @@
-This example is an extension to example 1 since it uses specific, manually set, properties for coupling, like max-time and time-window-size.
-
-Inspired by: https://github.com/precice-forschungsprojekt/precice-generator/pull/55 and probably https://precice.org/configuration-coupling-mesh-exchange.html#example-configuration
-
-
-
diff --git a/examples/2/config_graph.png b/examples/2/config_graph.png
deleted file mode 100644
index d43ed61..0000000
Binary files a/examples/2/config_graph.png and /dev/null differ
diff --git a/examples/2/fluid-su2/precice-adapter-config.json b/examples/2/fluid-su2/precice-adapter-config.json
deleted file mode 100644
index 8b7ef35..0000000
--- a/examples/2/fluid-su2/precice-adapter-config.json
+++ /dev/null
@@ -1,18 +0,0 @@
-{
- "participant_name": "Fluid",
- "precice_config_file_name": "../precice-config.xml",
- "interfaces": [
- {
- "mesh_name": "Fluid-Mesh",
- "patches": [
- "interface"
- ],
- "read_data_names": [
- "Displacement"
- ],
- "write_data_names": [
- "Force"
- ]
- }
- ]
-}
diff --git a/examples/2/image.png b/examples/2/image.png
deleted file mode 100644
index e9e8b9c..0000000
Binary files a/examples/2/image.png and /dev/null differ
diff --git a/examples/2/precice-config.xml b/examples/2/precice-config.xml
deleted file mode 100644
index cb8b623..0000000
--- a/examples/2/precice-config.xml
+++ /dev/null
@@ -1,59 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/examples/2/solid-calculix/precice-adapter-config.json b/examples/2/solid-calculix/precice-adapter-config.json
deleted file mode 100644
index 110836c..0000000
--- a/examples/2/solid-calculix/precice-adapter-config.json
+++ /dev/null
@@ -1,18 +0,0 @@
-{
- "participant_name": "Solid",
- "precice_config_file_name": "../precice-config.xml",
- "interfaces": [
- {
- "mesh_name": "Solid-Mesh",
- "patches": [
- "surface"
- ],
- "read_data_names": [
- "Force"
- ],
- "write_data_names": [
- "Displacement"
- ]
- }
- ]
-}
diff --git a/examples/2/topology.yaml b/examples/2/topology.yaml
deleted file mode 100644
index a19e5bb..0000000
--- a/examples/2/topology.yaml
+++ /dev/null
@@ -1,21 +0,0 @@
-coupling-scheme:
- max-time: 1e-1
- time-window-size: 1e-3
-participants:
- - name: Fluid
- solver: SU2
- - name: Solid
- solver: Calculix
-exchanges:
- - from: Fluid
- from-patch: interface
- to: Solid
- to-patch: surface
- data: Force
- type: strong
- - from: Solid
- from-patch: surface
- to: Fluid
- to-patch: interface
- data: Displacement
- type: strong
diff --git a/examples/3/README.md b/examples/3/README.md
deleted file mode 100644
index 9a2ee63..0000000
--- a/examples/3/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
-This is a first example with more complex structure.
-
-1. It uses manually set couling-scheme properties, like max-time and coupling.
-
-2. It defines dimensionality of each participant.
-
-3. It manually defines acceleration to be used in the config.
-
-Inspired by: https://github.com/precice/tutorials/tree/master/flow-over-heated-plate-partitioned-flow
-
-
-
diff --git a/examples/3/config_graph.png b/examples/3/config_graph.png
deleted file mode 100644
index 9a4432a..0000000
Binary files a/examples/3/config_graph.png and /dev/null differ
diff --git a/examples/3/image.png b/examples/3/image.png
deleted file mode 100644
index 0801638..0000000
Binary files a/examples/3/image.png and /dev/null differ
diff --git a/examples/3/solid-fenics/adapter-config.json b/examples/3/solid-fenics/adapter-config.json
deleted file mode 100644
index abafcc0..0000000
--- a/examples/3/solid-fenics/adapter-config.json
+++ /dev/null
@@ -1,18 +0,0 @@
-{
- "participant_name": "Solid",
- "precice_config_file_name": "../precice-config.xml",
- "interfaces": [
- {
- "mesh_name": "Solid-Mesh",
- "patches": [
- "interface"
- ],
- "write_data_names": [
- "HeatTransfer"
- ],
- "read_data_names": [
- "Temperature"
- ]
- }
- ]
-}
diff --git a/examples/3/topology.yaml b/examples/3/topology.yaml
deleted file mode 100644
index e2b9161..0000000
--- a/examples/3/topology.yaml
+++ /dev/null
@@ -1,32 +0,0 @@
-coupling-scheme:
- max-time: 1.0
- time-window-size: 1e-3
- max-iterations: 30
- coupling: serial
-
-participants:
- - name: Fluid
- solver: OpenFOAM
- dimensionality: 2
- - name: Solid
- solver: FEniCS
- dimensionality: 2
-
-acceleration:
- name: aitken
- initial-relaxation:
- value: 0.5
-
-exchanges:
- - from: Solid
- from-patch: interface
- to: Fluid
- to-patch: surface
- data: HeatTransfer # Fluid reads heat flux
- type: strong
- - from: Fluid
- from-patch: surface
- to: Solid
- to-patch: interface
- data: Temperature # Solid reads temperature
- type: strong
diff --git a/examples/4/README.md b/examples/4/README.md
deleted file mode 100644
index 1cf0acc..0000000
--- a/examples/4/README.md
+++ /dev/null
@@ -1,7 +0,0 @@
-This example adds additional complexity like Custom couplings and different participant solvers and names.
-
-Inspired by: https://github.com/precice/tutorials/tree/develop/partitioned-heat-conduction-complex
-
-
-
-
diff --git a/examples/4/config_graph.png b/examples/4/config_graph.png
deleted file mode 100644
index e9bb65d..0000000
Binary files a/examples/4/config_graph.png and /dev/null differ
diff --git a/examples/4/dirichlet-dirichletfenics/adapter-config.json b/examples/4/dirichlet-dirichletfenics/adapter-config.json
deleted file mode 100644
index 81c32c7..0000000
--- a/examples/4/dirichlet-dirichletfenics/adapter-config.json
+++ /dev/null
@@ -1,18 +0,0 @@
-{
- "participant_name": "Dirichlet",
- "precice_config_file_name": "../precice-config.xml",
- "interfaces": [
- {
- "mesh_name": "Dirichlet-Mesh",
- "patches": [
- "interface"
- ],
- "write_data_names": [
- "HeatTransfer"
- ],
- "read_data_names": [
- "Temperature"
- ]
- }
- ]
-}
diff --git a/examples/4/image.png b/examples/4/image.png
deleted file mode 100644
index 218baf7..0000000
Binary files a/examples/4/image.png and /dev/null differ
diff --git a/examples/4/neumann-fenics/adapter-config.json b/examples/4/neumann-fenics/adapter-config.json
deleted file mode 100644
index 04ddcfd..0000000
--- a/examples/4/neumann-fenics/adapter-config.json
+++ /dev/null
@@ -1,18 +0,0 @@
-{
- "participant_name": "Neumann",
- "precice_config_file_name": "../precice-config.xml",
- "interfaces": [
- {
- "mesh_name": "Neumann-Mesh",
- "patches": [
- "surface"
- ],
- "write_data_names": [
- "Temperature"
- ],
- "read_data_names": [
- "HeatTransfer"
- ]
- }
- ]
-}
diff --git a/examples/4/precice-config.xml b/examples/4/precice-config.xml
deleted file mode 100644
index 46452be..0000000
--- a/examples/4/precice-config.xml
+++ /dev/null
@@ -1,59 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/examples/4/topology.yaml b/examples/4/topology.yaml
deleted file mode 100644
index 423d91b..0000000
--- a/examples/4/topology.yaml
+++ /dev/null
@@ -1,28 +0,0 @@
-coupling-scheme:
- max-time: 1.0
- time-window-size: 0.1
- max-iterations: 100
- coupling: serial
-
-participants:
- - name: Dirichlet
- solver: DirichletFEniCS
- dimensionality: 2
- - name: Neumann
- solver: FEniCS
- dimensionality: 2
-
-exchanges:
- - from: Dirichlet
- from-patch: interface
- to: Neumann
- to-patch: surface
- data: HeatTransfer # Fluid reads heat flux
- data-type: vector
- type: strong
- - from: Neumann
- from-patch: surface
- to: Dirichlet
- to-patch: interface
- data: Temperature # Solid reads temperature
- type: strong
diff --git a/examples/5/README.md b/examples/5/README.md
deleted file mode 100644
index d2ea1f1..0000000
--- a/examples/5/README.md
+++ /dev/null
@@ -1,6 +0,0 @@
-This example presents a much more complex configuration, with 3 different participants and more complex naming conventions like 'Fluid-Top" instead of just 'Fluid'. It also contains more exchanges.
-
-Inspired by: https://github.com/precice/tutorials/tree/develop/heat-exchanger-simplified
-
-
-
diff --git a/examples/5/config_graph.png b/examples/5/config_graph.png
deleted file mode 100644
index 4806195..0000000
Binary files a/examples/5/config_graph.png and /dev/null differ
diff --git a/examples/5/fluid-bottom-openfoam/adapter-config.json b/examples/5/fluid-bottom-openfoam/adapter-config.json
deleted file mode 100644
index f0fe7d6..0000000
--- a/examples/5/fluid-bottom-openfoam/adapter-config.json
+++ /dev/null
@@ -1,18 +0,0 @@
-{
- "participant_name": "Fluid-Bottom",
- "precice_config_file_name": "../precice-config.xml",
- "interfaces": [
- {
- "mesh_name": "Fluid-Bottom-Mesh",
- "patches": [
- "surface"
- ],
- "write_data_names": [
- "Temperature"
- ],
- "read_data_names": [
- "HeatTransfer"
- ]
- }
- ]
-}
diff --git a/examples/5/fluid-top-openfoam/adapter-config.json b/examples/5/fluid-top-openfoam/adapter-config.json
deleted file mode 100644
index 3c49376..0000000
--- a/examples/5/fluid-top-openfoam/adapter-config.json
+++ /dev/null
@@ -1,18 +0,0 @@
-{
- "participant_name": "Fluid-Top",
- "precice_config_file_name": "../precice-config.xml",
- "interfaces": [
- {
- "mesh_name": "Fluid-Top-Mesh",
- "patches": [
- "interface"
- ],
- "write_data_names": [
- "Temperature"
- ],
- "read_data_names": [
- "HeatTransfer"
- ]
- }
- ]
-}
diff --git a/examples/5/image.png b/examples/5/image.png
deleted file mode 100644
index 8d4573e..0000000
Binary files a/examples/5/image.png and /dev/null differ
diff --git a/examples/5/precice-config.xml b/examples/5/precice-config.xml
deleted file mode 100644
index 6cef4ac..0000000
--- a/examples/5/precice-config.xml
+++ /dev/null
@@ -1,116 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/examples/5/solid-calculix/adapter-config.json b/examples/5/solid-calculix/adapter-config.json
deleted file mode 100644
index abafcc0..0000000
--- a/examples/5/solid-calculix/adapter-config.json
+++ /dev/null
@@ -1,18 +0,0 @@
-{
- "participant_name": "Solid",
- "precice_config_file_name": "../precice-config.xml",
- "interfaces": [
- {
- "mesh_name": "Solid-Mesh",
- "patches": [
- "interface"
- ],
- "write_data_names": [
- "HeatTransfer"
- ],
- "read_data_names": [
- "Temperature"
- ]
- }
- ]
-}
diff --git a/examples/complex-multi-coupling/_reference/README.md b/examples/complex-multi-coupling/_reference/README.md
new file mode 100644
index 0000000..fca3965
--- /dev/null
+++ b/examples/complex-multi-coupling/_reference/README.md
@@ -0,0 +1,88 @@
+# Multiphysics Simulation Project
+
+> [!NOTE]
+> This `README.md` file was auto-generated by preCICE case-generate.
+
+
+---
+
+## Project Overview
+
+This project uses **preCICE** for a multiphysics simulation involving:
+
+- Solver `ASolver` with participant `NASTIN`
+- Solver `SolVer` with participant `SOLIDZ1`
+- Solver `SolVer` with participant `SOLIDZ2`
+- Solver `SolVer` with participant `SOLIDZ3`
+
+### Project Structure
+
+Global files that are generated are: `precice-config.xml`, `README.md` and `clean.sh`. Additionally, for each participant, a folder with an `adapter-config.json` and a `run.sh` file is created.
+The folder structure is as follows:
+
+```
+_generated/
+ ├── README.md # This file
+ ├── clean.sh # Clean up script
+ ├── nastin-asolver/
+ │ ├── adapter-config.json
+ │ └── run.sh # Not yet implemented
+ ├── solidz1-solver/
+ │ ├── adapter-config.json
+ │ └── run.sh # Not yet implemented
+ ├── solidz2-solver/
+ │ ├── adapter-config.json
+ │ └── run.sh # Not yet implemented
+ ├── solidz3-solver/
+ │ ├── adapter-config.json
+ │ └── run.sh # Not yet implemented
+ └── precice-config.xml # Global precice-config.xml file
+```
+
+
+- `precice-config.xml` is the global preCICE configuration file which defines the parameters and communication of participants
+- `adapter-config.json` is a configuration file to couple the solvers with preCICE.
+- `run.sh` is a script that is meant to execute a participant. Note, however, that since different solvers are executed differently, this file is not implemented yet.
+- `clean.sh` removes any files in the current root directory that were not created by preCICE case-generate (and moves them to a backup folder).
+
+Execution:
+
+```bash
+./clean.sh [--force] [--dry-run]
+```
+
+- `--force` Deletes the files and any backup folders
+- `--dry-run` Does not delete any files, but prints files that would be deleted
+
+---
+
+## Prerequisites
+
+Before running the simulation, ensure you have the following installed:
+
+- The preCICE coupling library
+- Solver `ASolver` and its dependencies
+- Solver `SolVer` and its dependencies
+- Solver `SolVer` and its dependencies
+- Solver `SolVer` and its dependencies
+
+---
+
+## Running the Simulation
+
+### Quick Start
+
+```bash
+# Navigate to the `_generated` folder
+cd _generated/
+
+# Implement the run script
+
+# Make the run script executable
+chmod +x run.sh
+
+# Execute the simulation
+./run.sh
+```
+
+---
+
+For more information, see the [preCICE documentation](https://precice.org/docs.html) and [precice-case-generate](https://github.com/precice/case-generate).
\ No newline at end of file
diff --git a/examples/complex-multi-coupling/_reference/clean.sh b/examples/complex-multi-coupling/_reference/clean.sh
new file mode 100644
index 0000000..2954b31
--- /dev/null
+++ b/examples/complex-multi-coupling/_reference/clean.sh
@@ -0,0 +1,350 @@
+#!/usr/bin/env bash
+
+# -------------------------------------------------------------------
+# Script Name: clean.sh
+# Description: Recursively deletes/moves files/dirs except:
+# - Global preserved filenames anywhere (run.sh, adapter-config.json...)
+# - Specific root files (README.md, precice-config.xml)
+# Usage: ./clean.sh [--dry-run] [--force]
+# --dry-run : show what would happen, don't remove/move
+# --force : permanently delete unpreserved items AND remove existing backups
+# -------------------------------------------------------------------
+
+# Strict mode:
+# -e: exit on error
+# -u: exit on undefined variable
+# -o pipefail: exit if any command in a pipe fails
+set -euo pipefail
+
+# --- CONFIGURATION ---
+ROOT_DIR="$(pwd)"
+LOG_FILE="cleanup.log"
+BACKUP_DIR="$ROOT_DIR/backup_$(date '+%Y%m%d_%H%M%S')"
+
+# 1. GLOBAL PRESERVES: filenames to keep anywhere in the tree
+GLOBAL_PRESERVE_NAMES=(
+ "run.sh"
+ "adapter-config.json"
+)
+
+# 2. ROOT PRESERVES: filenames to keep only if in ROOT_DIR
+ROOT_PRESERVE_PATHS=(
+ "clean.sh"
+ "README.md"
+ "precice-config.xml"
+ "$LOG_FILE" # always keep the log (will be overwritten)
+)
+
+# --- DEFAULTS ---
+DRY_RUN=0 # No dry-run by default
+FORCE=0 # No permanent removal of files by default
+MOVED_COUNT=0 # Counter to track if we actually backed anything up
+DELETED_COUNT=0 # Counter to track if we actually deleted anything
+
+# --- HELPERS ---
+
+log() {
+ # We use tee -a (append) here so we don't overwrite previous lines *of this run*.
+ # The file is cleared once at the start of the MAIN block.
+ echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE"
+}
+
+# Is filename (basename) a global preserved name?
+is_global_preserved() {
+ local filename="$1"
+ for name in "${GLOBAL_PRESERVE_NAMES[@]}"; do
+ if [[ "$filename" == "$name" ]]; then
+ return 0
+ fi
+ done
+ return 1
+}
+
+# Is a relative path preserved at root?
+is_root_preserved() {
+ local relpath="$1"
+ for p in "${ROOT_PRESERVE_PATHS[@]}"; do
+ if [[ "$relpath" == "$p" ]]; then
+ return 0
+ fi
+ done
+ return 1
+}
+
+# Find out whether directory contains any preserved file *anywhere* under it.
+# Implementation Note: Uses `head -n 1` instead of `-quit` for maximum compatibility
+# across all Linux distros (including Alpine/BusyBox) and BSD/macOS.
+dir_contains_preserved_content() {
+ local dir="$1"
+ local name
+ local -a find_args=()
+
+ # Build find arguments: ( -name A -o -name B ... )
+ for name in "${GLOBAL_PRESERVE_NAMES[@]}"; do
+ if [ ${#find_args[@]} -eq 0 ]; then
+ find_args+=( -name "$name" )
+ else
+ find_args+=( -o -name "$name" )
+ fi
+ done
+
+ # Check if find returns any match.
+ # piping to head -n 1 causes find to receive SIGPIPE and stop early if a match is found.
+ if [ -n "$(find "$dir" \( "${find_args[@]}" \) -print 2>/dev/null | head -n 1)" ]; then
+ return 0
+ fi
+ return 1
+}
+
+# Ensure backup destination exists and safely move source there preserving relative path.
+# Prevents collisions by making unique names if needed.
+# Args:
+# $1 = absolute src path
+# $2 = rel path relative to ROOT_DIR (used to recreate path inside backup)
+safe_move_to_backup() {
+ local src="$1"
+ local rel="$2"
+
+ # Destination path = $BACKUP_DIR/$rel
+ local dest="$BACKUP_DIR/$rel"
+ local dest_dir
+ dest_dir="$(dirname "$dest")"
+
+ # Normalize: if dirname is ".", use $BACKUP_DIR as target dir
+ if [[ "$dest_dir" == "$BACKUP_DIR/." ]] || [[ "$dest_dir" == "." ]]; then
+ dest_dir="$BACKUP_DIR"
+ fi
+
+ if [ "$DRY_RUN" -eq 0 ]; then
+ mkdir -p "$dest_dir"
+ fi
+
+ # If dest exists, append numeric suffix before extension to avoid overwrite
+ if [ -e "$dest" ]; then
+ local base name ext candidate n
+ base="$(basename "$dest")"
+ name="${base%.*}"
+ ext="${base##*.}"
+ n=1
+
+ # Check if file has an extension
+ if [[ "$ext" == "$base" ]]; then
+ # No extension
+ candidate="$dest_dir/${name}_$n"
+ while [ -e "$candidate" ]; do
+ n=$((n+1))
+ candidate="$dest_dir/${name}_$n"
+ done
+ else
+ # Has extension
+ candidate="$dest_dir/${name}_$n.$ext"
+ while [ -e "$candidate" ]; do
+ n=$((n+1))
+ candidate="$dest_dir/${name}_$n.$ext"
+ done
+ fi
+ dest="$candidate"
+ fi
+
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would be moved to backup: ${rel}"
+ else
+ mv -- "$src" "$dest"
+ # Increment the counter because we successfully moved a file
+ MOVED_COUNT=$((MOVED_COUNT + 1))
+ log "Moved to backup: ${rel}"
+ fi
+}
+
+# Safe remove (used by --force). Respects --dry-run.
+safe_remove() {
+ local src="$1"
+ local rel="$2"
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would permanently remove: $rel"
+ else
+ rm -rf -- "$src"
+ DELETED_COUNT=$((DELETED_COUNT + 1))
+ log "Permanently removed: $rel"
+ fi
+}
+
+# Remove existing backup_* directories permanently (used when FORCE=1).
+remove_existing_backups_permanently() {
+ shopt -s nullglob
+ local found=0
+ for old in "$ROOT_DIR"/backup_*; do
+ if [ -d "$old" ]; then
+ found=1
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would permanently remove backup: $(basename "$old")"
+ else
+ rm -rf -- "$old"
+ DELETED_COUNT=$((DELETED_COUNT + 1))
+ log "Permanently removed backup: $(basename "$old")"
+ fi
+ fi
+ done
+ shopt -u nullglob
+ if [ "$found" -eq 0 ]; then
+ log "No existing backup directories to remove."
+ fi
+}
+
+# Perform action for a single path (file or directory) that is NOT preserved.
+# Moves to backup preserving tree, or removes permanently when FORCE=1.
+perform_action() {
+ local item="$1"
+ local rel_path="${item#$ROOT_DIR/}"
+
+ if [ "$FORCE" -eq 1 ]; then
+ safe_remove "$item" "$rel_path"
+ else
+ safe_move_to_backup "$item" "$rel_path"
+ fi
+}
+
+# --- RECURSIVE CLEANUP ---
+recursive_cleanup() {
+ local current_dir="$1"
+
+ # dotglob: includes hidden files (starting with .)
+ # nullglob: makes the loop not run if no files match (avoids literal string issues)
+ shopt -s dotglob nullglob
+
+ local item rel_path base
+
+ for item in "$current_dir"/*; do
+ # Sanity check: ensure file exists (handles rare race conditions or broken links)
+ [ ! -e "$item" ] && [ ! -L "$item" ] && continue
+
+ rel_path="${item#$ROOT_DIR/}"
+ base="$(basename "$item")"
+
+ # Skip . and .. (though glob usually excludes them, safety first)
+ if [[ "$base" == "." || "$base" == ".." ]]; then
+ continue
+ fi
+
+ # --- FILE or SYMLINK ---
+ if [ -f "$item" ] || [ -L "$item" ]; then
+ # 1. Preserve by global name anywhere?
+ if is_global_preserved "$base"; then
+ log "Preserving: $rel_path"
+ continue
+ fi
+
+ # 2. Preserve if at root and matches root-preserve list?
+ if [[ "$current_dir" == "$ROOT_DIR" ]] && is_root_preserved "$base"; then
+ log "Preserving: $rel_path"
+ continue
+ fi
+
+ # Otherwise: perform action
+ perform_action "$item"
+ continue
+ fi
+
+ # --- DIRECTORY ---
+ if [ -d "$item" ]; then
+ # Check if dir contains any preserved content anywhere below
+ if dir_contains_preserved_content "$item"; then
+ recursive_cleanup "$item"
+
+ # After recursion, if directory is empty, remove it (respecting dry-run)
+ if [ -z "$(ls -A "$item")" ]; then
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would remove empty directory: $rel_path"
+ else
+ rmdir -- "$item"
+ DELETED_COUNT=$((DELETED_COUNT + 1))
+ log "Removed empty directory: $rel_path"
+ fi
+ fi
+ else
+ # Directory contains no preserved files anywhere deep: remove/move entire directory
+ perform_action "$item"
+ fi
+ continue
+ fi
+ done
+
+ # Restore defaults for shopt to avoid side effects if function is reused
+ shopt -u dotglob nullglob
+}
+
+# --- MAIN ---
+
+# Parse flags
+while [[ $# -gt 0 ]]; do
+ case $1 in
+ --dry-run) DRY_RUN=1 ;;
+ --force) FORCE=1 ;;
+ *) echo "Unknown parameter: $1" >&2; exit 1 ;;
+ esac
+ shift
+done
+
+# Initialize/Clear the log file for this new run
+: > "$LOG_FILE"
+
+# Confirmation prompt (skip when dry-run)
+if [ "$DRY_RUN" -eq 1 ]; then
+ log "Dry run enabled, nothing will be removed."
+else
+ read -r -p "This will delete all files except preserved ones. Proceed? [y/n]: " confirm
+ case "$confirm" in
+ [yY][eE][sS]|[yY]) ;;
+ *) log "Cleanup aborted."; exit 0 ;;
+ esac
+fi
+
+if [ "$FORCE" -eq 1 ] && [ "$DRY_RUN" -eq 1 ]; then
+ log "Ignoring --force."
+ FORCE=0
+fi
+
+log "Starting cleanup..."
+
+
+if [ "$FORCE" -eq 1 ]; then
+ remove_existing_backups_permanently
+fi
+
+recursive_cleanup "$ROOT_DIR"
+
+if [ "$DELETED_COUNT" -eq 1 ]; then
+ FILE_STR="file"
+ DIRECTORY_STR="directory"
+else
+ FILE_STR="files"
+ DIRECTORY_STR="directories"
+fi
+
+OUTPUT_STR=""
+
+# Output if FORCE
+if [ "$FORCE" -eq 1 ]; then
+ OUTPUT_STR="Deleted $DELETED_COUNT $FILE_STR or $DIRECTORY_STR. "
+fi
+
+if [ "$DRY_RUN" -eq 1 ]; then
+ OUTPUT_STR="Dry-run completed successfully."
+else
+ OUTPUT_STR="${OUTPUT_STR}Cleanup completed successfully."
+fi
+
+# Only append the backup message if we actually moved files (MOVED_COUNT > 0)
+if [ "$MOVED_COUNT" -gt 0 ]; then
+ # Correct wording
+ if [ "$MOVED_COUNT" -eq 1 ]; then
+ FILE_STR="file"
+ DIRECTORY_STR="directory"
+ else
+ FILE_STR="files"
+ DIRECTORY_STR="directories"
+ fi
+ OUTPUT_STR="$OUTPUT_STR Backed up $MOVED_COUNT $FILE_STR or $DIRECTORY_STR in '$BACKUP_DIR'."
+fi
+
+log "$OUTPUT_STR"
\ No newline at end of file
diff --git a/examples/complex-multi-coupling/_reference/nastin-asolver/adapter-config.json b/examples/complex-multi-coupling/_reference/nastin-asolver/adapter-config.json
new file mode 100644
index 0000000..0dfaa04
--- /dev/null
+++ b/examples/complex-multi-coupling/_reference/nastin-asolver/adapter-config.json
@@ -0,0 +1,60 @@
+{
+ "participant_name": "NASTIN",
+ "precice_config_file_path": "../precice-config.xml",
+ "interfaces": [
+ {
+ "mesh_name": "NASTIN-SOLIDZ1-Extensive-Mesh",
+ "patches": [
+ "interface-extensive"
+ ],
+ "write_data_names": [
+ "Forces1"
+ ]
+ },
+ {
+ "mesh_name": "NASTIN-SOLIDZ1-Intensive-Mesh",
+ "patches": [
+ "interface-intensive"
+ ],
+ "read_data_names": [
+ "Displacements1"
+ ]
+ },
+ {
+ "mesh_name": "NASTIN-SOLIDZ2-Extensive-Mesh",
+ "patches": [
+ "interface-extensive"
+ ],
+ "write_data_names": [
+ "Forces2"
+ ]
+ },
+ {
+ "mesh_name": "NASTIN-SOLIDZ2-Intensive-Mesh",
+ "patches": [
+ "interface-intensive"
+ ],
+ "read_data_names": [
+ "Displacements2"
+ ]
+ },
+ {
+ "mesh_name": "NASTIN-SOLIDZ3-Extensive-Mesh",
+ "patches": [
+ "interface-extensive"
+ ],
+ "write_data_names": [
+ "Forces3"
+ ]
+ },
+ {
+ "mesh_name": "NASTIN-SOLIDZ3-Intensive-Mesh",
+ "patches": [
+ "interface-intensive"
+ ],
+ "read_data_names": [
+ "Displacements3"
+ ]
+ }
+ ]
+}
\ No newline at end of file
diff --git a/examples/0/solid-calculix/run.sh b/examples/complex-multi-coupling/_reference/nastin-asolver/run.sh
similarity index 56%
rename from examples/0/solid-calculix/run.sh
rename to examples/complex-multi-coupling/_reference/nastin-asolver/run.sh
index 8817731..5fdeba8 100644
--- a/examples/0/solid-calculix/run.sh
+++ b/examples/complex-multi-coupling/_reference/nastin-asolver/run.sh
@@ -1,20 +1,21 @@
-#!/bin/bash
+#!/usr/bin/env bash
+
#
-# run.sh Script
+# run.sh script
#
# This is a template. You need to implement it yourself.
#
-# If you are trying to launch a Python script, you need to add:
+# If you are trying to launch a Python script, you need to add:
#
# python path/to/file.py
#
# Example: https://github.com/precice/tutorials/blob/develop/flow-over-heated-plate/solid-dunefem/run.sh
#
-# Note: In the example `run.sh` file, most setup steps (such as creating and activating a virtual environment
-# or installing dependencies) have already been completed.
+# Note: In the example `run.sh` file, most setup steps (such as creating and activating a virtual environment
+# or installing dependencies) have already been completed.
# Therefore, you may only need to add:
#
# python path/to/file.py
#
-set -e # Exit immediately if any command fails
+set -e # Exit immediately if any command fails
\ No newline at end of file
diff --git a/examples/complex-multi-coupling/_reference/precice-config.xml b/examples/complex-multi-coupling/_reference/precice-config.xml
new file mode 100644
index 0000000..ed6673e
--- /dev/null
+++ b/examples/complex-multi-coupling/_reference/precice-config.xml
@@ -0,0 +1,166 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/examples/complex-multi-coupling/_reference/solidz1-solver/adapter-config.json b/examples/complex-multi-coupling/_reference/solidz1-solver/adapter-config.json
new file mode 100644
index 0000000..89fe9f3
--- /dev/null
+++ b/examples/complex-multi-coupling/_reference/solidz1-solver/adapter-config.json
@@ -0,0 +1,24 @@
+{
+ "participant_name": "SOLIDZ1",
+ "precice_config_file_path": "../precice-config.xml",
+ "interfaces": [
+ {
+ "mesh_name": "SOLIDZ1-Extensive-Mesh",
+ "patches": [
+ "interface-extensive"
+ ],
+ "read_data_names": [
+ "Forces1"
+ ]
+ },
+ {
+ "mesh_name": "SOLIDZ1-Intensive-Mesh",
+ "patches": [
+ "interface-intensive"
+ ],
+ "write_data_names": [
+ "Displacements1"
+ ]
+ }
+ ]
+}
\ No newline at end of file
diff --git a/examples/0/fluid-su2/run.sh b/examples/complex-multi-coupling/_reference/solidz1-solver/run.sh
similarity index 56%
rename from examples/0/fluid-su2/run.sh
rename to examples/complex-multi-coupling/_reference/solidz1-solver/run.sh
index 8817731..5fdeba8 100644
--- a/examples/0/fluid-su2/run.sh
+++ b/examples/complex-multi-coupling/_reference/solidz1-solver/run.sh
@@ -1,20 +1,21 @@
-#!/bin/bash
+#!/usr/bin/env bash
+
#
-# run.sh Script
+# run.sh script
#
# This is a template. You need to implement it yourself.
#
-# If you are trying to launch a Python script, you need to add:
+# If you are trying to launch a Python script, you need to add:
#
# python path/to/file.py
#
# Example: https://github.com/precice/tutorials/blob/develop/flow-over-heated-plate/solid-dunefem/run.sh
#
-# Note: In the example `run.sh` file, most setup steps (such as creating and activating a virtual environment
-# or installing dependencies) have already been completed.
+# Note: In the example `run.sh` file, most setup steps (such as creating and activating a virtual environment
+# or installing dependencies) have already been completed.
# Therefore, you may only need to add:
#
# python path/to/file.py
#
-set -e # Exit immediately if any command fails
+set -e # Exit immediately if any command fails
\ No newline at end of file
diff --git a/examples/complex-multi-coupling/_reference/solidz2-solver/adapter-config.json b/examples/complex-multi-coupling/_reference/solidz2-solver/adapter-config.json
new file mode 100644
index 0000000..5ce6436
--- /dev/null
+++ b/examples/complex-multi-coupling/_reference/solidz2-solver/adapter-config.json
@@ -0,0 +1,24 @@
+{
+ "participant_name": "SOLIDZ2",
+ "precice_config_file_path": "../precice-config.xml",
+ "interfaces": [
+ {
+ "mesh_name": "SOLIDZ2-Extensive-Mesh",
+ "patches": [
+ "interface-extensive"
+ ],
+ "read_data_names": [
+ "Forces2"
+ ]
+ },
+ {
+ "mesh_name": "SOLIDZ2-Intensive-Mesh",
+ "patches": [
+ "interface-intensive"
+ ],
+ "write_data_names": [
+ "Displacements2"
+ ]
+ }
+ ]
+}
\ No newline at end of file
diff --git a/precicecasegenerate/templates/template_run.sh b/examples/complex-multi-coupling/_reference/solidz2-solver/run.sh
similarity index 56%
rename from precicecasegenerate/templates/template_run.sh
rename to examples/complex-multi-coupling/_reference/solidz2-solver/run.sh
index 8817731..5fdeba8 100644
--- a/precicecasegenerate/templates/template_run.sh
+++ b/examples/complex-multi-coupling/_reference/solidz2-solver/run.sh
@@ -1,20 +1,21 @@
-#!/bin/bash
+#!/usr/bin/env bash
+
#
-# run.sh Script
+# run.sh script
#
# This is a template. You need to implement it yourself.
#
-# If you are trying to launch a Python script, you need to add:
+# If you are trying to launch a Python script, you need to add:
#
# python path/to/file.py
#
# Example: https://github.com/precice/tutorials/blob/develop/flow-over-heated-plate/solid-dunefem/run.sh
#
-# Note: In the example `run.sh` file, most setup steps (such as creating and activating a virtual environment
-# or installing dependencies) have already been completed.
+# Note: In the example `run.sh` file, most setup steps (such as creating and activating a virtual environment
+# or installing dependencies) have already been completed.
# Therefore, you may only need to add:
#
# python path/to/file.py
#
-set -e # Exit immediately if any command fails
+set -e # Exit immediately if any command fails
\ No newline at end of file
diff --git a/examples/complex-multi-coupling/_reference/solidz3-solver/adapter-config.json b/examples/complex-multi-coupling/_reference/solidz3-solver/adapter-config.json
new file mode 100644
index 0000000..521035c
--- /dev/null
+++ b/examples/complex-multi-coupling/_reference/solidz3-solver/adapter-config.json
@@ -0,0 +1,24 @@
+{
+ "participant_name": "SOLIDZ3",
+ "precice_config_file_path": "../precice-config.xml",
+ "interfaces": [
+ {
+ "mesh_name": "SOLIDZ3-Extensive-Mesh",
+ "patches": [
+ "interface-extensive"
+ ],
+ "read_data_names": [
+ "Forces3"
+ ]
+ },
+ {
+ "mesh_name": "SOLIDZ3-Intensive-Mesh",
+ "patches": [
+ "interface-intensive"
+ ],
+ "write_data_names": [
+ "Displacements3"
+ ]
+ }
+ ]
+}
\ No newline at end of file
diff --git a/examples/complex-multi-coupling/_reference/solidz3-solver/run.sh b/examples/complex-multi-coupling/_reference/solidz3-solver/run.sh
new file mode 100644
index 0000000..5fdeba8
--- /dev/null
+++ b/examples/complex-multi-coupling/_reference/solidz3-solver/run.sh
@@ -0,0 +1,21 @@
+#!/usr/bin/env bash
+
+#
+# run.sh script
+#
+# This is a template. You need to implement it yourself.
+#
+# If you are trying to launch a Python script, you need to add:
+#
+# python path/to/file.py
+#
+# Example: https://github.com/precice/tutorials/blob/develop/flow-over-heated-plate/solid-dunefem/run.sh
+#
+# Note: In the example `run.sh` file, most setup steps (such as creating and activating a virtual environment
+# or installing dependencies) have already been completed.
+# Therefore, you may only need to add:
+#
+# python path/to/file.py
+#
+
+set -e # Exit immediately if any command fails
\ No newline at end of file
diff --git a/examples/complex-multi-coupling/topology.yaml b/examples/complex-multi-coupling/topology.yaml
new file mode 100644
index 0000000..607764b
--- /dev/null
+++ b/examples/complex-multi-coupling/topology.yaml
@@ -0,0 +1,56 @@
+participants:
+ - name: NASTIN
+ solver: ASolver
+ dimensionality: 3
+ - name: SOLIDZ1
+ solver: SolVer
+ dimensionality: 3
+ - name: SOLIDZ2
+ solver: SolVer
+ dimensionality: 3
+ - name: SOLIDZ3
+ solver: SolVer
+ dimensionality: 3
+exchanges:
+ - from: NASTIN
+ to: SOLIDZ1
+ from-patch: interface
+ to-patch: interface
+ data: Forces1
+ type: strong
+ data-type: vector
+ - from: NASTIN
+ to: SOLIDZ2
+ from-patch: interface
+ to-patch: interface
+ data: Forces2
+ type: strong
+ data-type: vector
+ - from: NASTIN
+ to: SOLIDZ3
+ from-patch: interface
+ to-patch: interface
+ data: Forces3
+ type: strong
+ data-type: vector
+ - from: SOLIDZ1
+ to: NASTIN
+ from-patch: interface
+ to-patch: interface
+ data: Displacements1
+ type: strong
+ data-type: vector
+ - from: SOLIDZ2
+ to: NASTIN
+ from-patch: interface
+ to-patch: interface
+ data: Displacements2
+ type: strong
+ data-type: vector
+ - from: SOLIDZ3
+ to: NASTIN
+ from-patch: interface
+ to-patch: interface
+ data: Displacements3
+ type: strong
+ data-type: vector
diff --git a/examples/expert/4/README.md b/examples/expert/4/README.md
deleted file mode 100644
index 34a1bed..0000000
--- a/examples/expert/4/README.md
+++ /dev/null
@@ -1,8 +0,0 @@
-This is a first 'Expert' example, which contains more complex acceleration definitions. It is a identical example to example 4, hence name "4".
-
-This accelerations implements a filter which has a type and a limit.
-
-Inspired by: https://github.com/precice/tutorials/tree/develop/partitioned-heat-conduction-complex
-
-
-
diff --git a/examples/expert/4/config_graph.png b/examples/expert/4/config_graph.png
deleted file mode 100644
index e9bb65d..0000000
Binary files a/examples/expert/4/config_graph.png and /dev/null differ
diff --git a/examples/expert/4/dirichlet-dirichletfenics/adapter-config.json b/examples/expert/4/dirichlet-dirichletfenics/adapter-config.json
deleted file mode 100644
index 81c32c7..0000000
--- a/examples/expert/4/dirichlet-dirichletfenics/adapter-config.json
+++ /dev/null
@@ -1,18 +0,0 @@
-{
- "participant_name": "Dirichlet",
- "precice_config_file_name": "../precice-config.xml",
- "interfaces": [
- {
- "mesh_name": "Dirichlet-Mesh",
- "patches": [
- "interface"
- ],
- "write_data_names": [
- "HeatTransfer"
- ],
- "read_data_names": [
- "Temperature"
- ]
- }
- ]
-}
diff --git a/examples/expert/4/image.png b/examples/expert/4/image.png
deleted file mode 100644
index 218baf7..0000000
Binary files a/examples/expert/4/image.png and /dev/null differ
diff --git a/examples/expert/4/neumann-fenics/adapter-config.json b/examples/expert/4/neumann-fenics/adapter-config.json
deleted file mode 100644
index 04ddcfd..0000000
--- a/examples/expert/4/neumann-fenics/adapter-config.json
+++ /dev/null
@@ -1,18 +0,0 @@
-{
- "participant_name": "Neumann",
- "precice_config_file_name": "../precice-config.xml",
- "interfaces": [
- {
- "mesh_name": "Neumann-Mesh",
- "patches": [
- "surface"
- ],
- "write_data_names": [
- "Temperature"
- ],
- "read_data_names": [
- "HeatTransfer"
- ]
- }
- ]
-}
diff --git a/examples/expert/4/precice-config.xml b/examples/expert/4/precice-config.xml
deleted file mode 100644
index 5981746..0000000
--- a/examples/expert/4/precice-config.xml
+++ /dev/null
@@ -1,64 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/examples/expert/4/topology.yaml b/examples/expert/4/topology.yaml
deleted file mode 100644
index 0548fa0..0000000
--- a/examples/expert/4/topology.yaml
+++ /dev/null
@@ -1,35 +0,0 @@
-coupling-scheme:
- max-time: 1.0
- time-window-size: 0.1
- max-iterations: 100
-
-acceleration:
- initial-relaxation:
- value: 0.1
- max-used-iterations: 10
- time-windows-reused: 5
- filter:
- type: QR2
- limit: 1e-3
-
-participants:
- - name: Dirichlet
- solver: DirichletFEniCS
- dimensionality: 2
- - name: Neumann
- solver: FEniCS
- dimensionality: 2
-
-exchanges:
- - from: Dirichlet
- from-patch: interface
- to: Neumann
- to-patch: surface
- data: HeatTransfer # Fluid reads heat flux
- type: strong
- - from: Neumann
- from-patch: surface
- to: Dirichlet
- to-patch: interface
- data: Temperature # Solid reads temperature
- type: strong
diff --git a/examples/expert/5/README.md b/examples/expert/5/README.md
deleted file mode 100644
index ec39daf..0000000
--- a/examples/expert/5/README.md
+++ /dev/null
@@ -1,6 +0,0 @@
-This expert example is identical example to example 5, but its acceleration contains more complex settings like filters, max-used iterations, preconditioner etc.
-
-Inspired by: https://github.com/precice/tutorials/tree/develop/heat-exchanger-simplified
-
-
-
diff --git a/examples/expert/5/config_graph.png b/examples/expert/5/config_graph.png
deleted file mode 100644
index 4806195..0000000
Binary files a/examples/expert/5/config_graph.png and /dev/null differ
diff --git a/examples/expert/5/fluid-bottom-openfoam/adapter-config.json b/examples/expert/5/fluid-bottom-openfoam/adapter-config.json
deleted file mode 100644
index f0fe7d6..0000000
--- a/examples/expert/5/fluid-bottom-openfoam/adapter-config.json
+++ /dev/null
@@ -1,18 +0,0 @@
-{
- "participant_name": "Fluid-Bottom",
- "precice_config_file_name": "../precice-config.xml",
- "interfaces": [
- {
- "mesh_name": "Fluid-Bottom-Mesh",
- "patches": [
- "surface"
- ],
- "write_data_names": [
- "Temperature"
- ],
- "read_data_names": [
- "HeatTransfer"
- ]
- }
- ]
-}
diff --git a/examples/expert/5/fluid-top-openfoam/adapter-config.json b/examples/expert/5/fluid-top-openfoam/adapter-config.json
deleted file mode 100644
index 3c49376..0000000
--- a/examples/expert/5/fluid-top-openfoam/adapter-config.json
+++ /dev/null
@@ -1,18 +0,0 @@
-{
- "participant_name": "Fluid-Top",
- "precice_config_file_name": "../precice-config.xml",
- "interfaces": [
- {
- "mesh_name": "Fluid-Top-Mesh",
- "patches": [
- "interface"
- ],
- "write_data_names": [
- "Temperature"
- ],
- "read_data_names": [
- "HeatTransfer"
- ]
- }
- ]
-}
diff --git a/examples/expert/5/image.png b/examples/expert/5/image.png
deleted file mode 100644
index 8d4573e..0000000
Binary files a/examples/expert/5/image.png and /dev/null differ
diff --git a/examples/expert/5/precice-config.xml b/examples/expert/5/precice-config.xml
deleted file mode 100644
index a523e4d..0000000
--- a/examples/expert/5/precice-config.xml
+++ /dev/null
@@ -1,121 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/examples/expert/5/topology.yaml b/examples/expert/5/topology.yaml
deleted file mode 100644
index fc092f5..0000000
--- a/examples/expert/5/topology.yaml
+++ /dev/null
@@ -1,54 +0,0 @@
-coupling-scheme:
- max-time: 5.0
- time-window-size: 1e-2
- max-iterations: 30
-
-acceleration:
- name: IQN-ILS
- initial-relaxation:
- value: 0.5
- preconditioner:
- freeze-after: -1
- type: residual-sum
- filter:
- limit: 1e-6
- type: QR1
- max-used-iterations: 50
- time-windows-reused: 10
-
-participants:
- - name: Fluid-Top
- solver: OpenFOAM
- dimensionality: 2
- - name: Fluid-Bottom
- solver: OpenFOAM
- dimensionality: 2
- - name: Solid
- solver: CalculiX
- dimensionality: 2
-
-exchanges:
- - from: Fluid-Top
- from-patch: surface
- to: Solid
- to-patch: interface
- data: Temperature-Top # Solid reads temperature
- type: strong
- - from: Solid
- from-patch: surface
- to: Fluid-Top
- to-patch: interface
- data: HeatTransfer-Top # Solid reads temperature
- type: strong
- - from: Fluid-Bottom
- from-patch: surface
- to: Solid
- to-patch: interface
- data: Temperature-Bottom # Solid reads temperature
- type: strong
- - from: Solid
- from-patch: interface
- to: Fluid-Bottom
- to-patch: surface
- data: HeatTransfer-Bottom # Fluid reads heat flux
- type: strong
diff --git a/examples/multi-coupling/_reference/README.md b/examples/multi-coupling/_reference/README.md
new file mode 100644
index 0000000..67842e7
--- /dev/null
+++ b/examples/multi-coupling/_reference/README.md
@@ -0,0 +1,83 @@
+# Multiphysics Simulation Project
+
+> [!NOTE]
+> This `README.md` file was auto-generated by preCICE case-generate.
+
+
+---
+
+## Project Overview
+
+This project uses **preCICE** for a multiphysics simulation involving:
+
+- Solver `OpenFOAM` with participant `Fluid-Top`
+- Solver `OpenFOAM` with participant `Fluid-Bottom`
+- Solver `CalculiX` with participant `Solid`
+
+### Project Structure
+
+The generated global files are `precice-config.xml`, `README.md`, and `clean.sh`. Additionally, for each participant, a folder with an `adapter-config.json` and a `run.sh` file is created.
+The folder structure is as follows:
+
+```
+_generated/
+ ├── README.md # This file
+ ├── clean.sh # Clean up script
+ ├── fluid-top-openfoam/
+ │ ├── adapter-config.json
+ │ └── run.sh # Not yet implemented
+ ├── fluid-bottom-openfoam/
+ │ ├── adapter-config.json
+ │ └── run.sh # Not yet implemented
+ ├── solid-calculix/
+ │ ├── adapter-config.json
+ │ └── run.sh # Not yet implemented
+ └── precice-config.xml # Global precice-config.xml file
+```
+
+
+- `precice-config.xml` is the global preCICE configuration file, which defines the parameters and communication of all participants.
+- `adapter-config.json` is a per-participant configuration file that couples the solver with preCICE.
+- `run.sh` is a script meant to launch a participant's solver. Since every solver is launched differently, it is provided only as a template.
+- `clean.sh` moves any files in the current directory that were not created by preCICE case-generate into a backup folder (or deletes them permanently with `--force`).
+
+Execution:
+
+```bash
+./clean.sh [--force] [--dry-run]
+```
+
+- `--force` permanently deletes the files as well as any existing backup folders
+- `--dry-run` only prints what would be deleted, without touching any files
+
+---
+
+## Prerequisites
+
+Before running the simulation, ensure you have the following installed:
+
+- The preCICE coupling library
+- Solver `OpenFOAM` and its dependencies
+- Solver `OpenFOAM` and its dependencies
+- Solver `CalculiX` and its dependencies
+
+---
+
+## Running the Simulation
+
+### Quick Start
+
+```bash
+# Navigate to a participant folder inside `_generated`
+cd _generated/fluid-top-openfoam/
+
+# Implement the run script for this solver
+
+# Make the run script executable
+chmod +x run.sh
+
+# Execute this participant
+./run.sh
+```
+
+---
+
+For more information, see the [preCICE documentation](https://precice.org/docs.html) and [precice-case-generate](https://github.com/precice/case-generate).
\ No newline at end of file
diff --git a/examples/multi-coupling/_reference/clean.sh b/examples/multi-coupling/_reference/clean.sh
new file mode 100644
index 0000000..2954b31
--- /dev/null
+++ b/examples/multi-coupling/_reference/clean.sh
@@ -0,0 +1,350 @@
+#!/usr/bin/env bash
+
+# -------------------------------------------------------------------
+# Script Name: clean.sh
+# Description: Recursively deletes/moves files/dirs except:
+# - Global preserved filenames anywhere (run.sh, adapter-config.json...)
+# - Specific root files (README.md, precice-config.xml)
+# Usage: ./clean.sh [--dry-run] [--force]
+# --dry-run : show what would happen, don't remove/move
+# --force : permanently delete unpreserved items AND remove existing backups
+# -------------------------------------------------------------------
+
+# Strict mode:
+# -e: exit on error
+# -u: exit on undefined variable
+# -o pipefail: exit if any command in a pipe fails
+set -euo pipefail
+
+# --- CONFIGURATION ---
+ROOT_DIR="$(pwd)"
+LOG_FILE="cleanup.log"
+BACKUP_DIR="$ROOT_DIR/backup_$(date '+%Y%m%d_%H%M%S')"
+
+# 1. GLOBAL PRESERVES: filenames to keep anywhere in the tree
+GLOBAL_PRESERVE_NAMES=(
+ "run.sh"
+ "adapter-config.json"
+)
+
+# 2. ROOT PRESERVES: filenames to keep only if in ROOT_DIR
+ROOT_PRESERVE_PATHS=(
+ "clean.sh"
+ "README.md"
+ "precice-config.xml"
+ "$LOG_FILE" # always keep the log (will be overwritten)
+)
+
+# --- DEFAULTS ---
+DRY_RUN=0 # No dry-run by default
+FORCE=0 # No permanent removal of files by default
+MOVED_COUNT=0 # Counter to track if we actually backed anything up
+DELETED_COUNT=0 # Counter to track if we actually deleted anything
+
+# --- HELPERS ---
+
+log() {
+ # We use tee -a (append) here so we don't overwrite previous lines *of this run*.
+ # The file is cleared once at the start of the MAIN block.
+ echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE"
+}
+
+# Is filename (basename) a global preserved name?
+is_global_preserved() {
+ local filename="$1"
+ for name in "${GLOBAL_PRESERVE_NAMES[@]}"; do
+ if [[ "$filename" == "$name" ]]; then
+ return 0
+ fi
+ done
+ return 1
+}
+
+# Is a relative path preserved at root?
+is_root_preserved() {
+ local relpath="$1"
+ for p in "${ROOT_PRESERVE_PATHS[@]}"; do
+ if [[ "$relpath" == "$p" ]]; then
+ return 0
+ fi
+ done
+ return 1
+}
+
+# Find out whether directory contains any preserved file *anywhere* under it.
+# Implementation Note: Uses `head -n 1` instead of `-quit` for maximum compatibility
+# across all Linux distros (including Alpine/BusyBox) and BSD/macOS.
+dir_contains_preserved_content() {
+ local dir="$1"
+ local name
+ local -a find_args=()
+
+ # Build find arguments: ( -name A -o -name B ... )
+ for name in "${GLOBAL_PRESERVE_NAMES[@]}"; do
+ if [ ${#find_args[@]} -eq 0 ]; then
+ find_args+=( -name "$name" )
+ else
+ find_args+=( -o -name "$name" )
+ fi
+ done
+
+ # Check if find returns any match.
+ # piping to head -n 1 causes find to receive SIGPIPE and stop early if a match is found.
+ if [ -n "$(find "$dir" \( "${find_args[@]}" \) -print 2>/dev/null | head -n 1)" ]; then
+ return 0
+ fi
+ return 1
+}
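For comparison with the portability note above: GNU find offers `-quit`, which stops the traversal at the first match without needing the `head -n 1` pipe, but it is GNU-specific. A minimal sketch (the directory layout is illustrative):

```shell
tmp="$(mktemp -d)"
mkdir -p "$tmp/deep/solver"
touch "$tmp/deep/solver/adapter-config.json"

# GNU-specific: -quit terminates find immediately after the first -print.
match="$(find "$tmp" -name "adapter-config.json" -print -quit)"
echo "$match"
rm -rf "$tmp"
```

On BusyBox or BSD `find`, `-quit` is unavailable, which is why the script sticks to the SIGPIPE trick.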
+
+# Ensure backup destination exists and safely move source there preserving relative path.
+# Prevents collisions by making unique names if needed.
+# Args:
+# $1 = absolute src path
+# $2 = rel path relative to ROOT_DIR (used to recreate path inside backup)
+safe_move_to_backup() {
+ local src="$1"
+ local rel="$2"
+
+ # Destination path = $BACKUP_DIR/$rel
+ local dest="$BACKUP_DIR/$rel"
+ local dest_dir
+ dest_dir="$(dirname "$dest")"
+
+ # Normalize: if dirname is ".", use $BACKUP_DIR as target dir
+ if [[ "$dest_dir" == "$BACKUP_DIR/." ]] || [[ "$dest_dir" == "." ]]; then
+ dest_dir="$BACKUP_DIR"
+ fi
+
+ if [ "$DRY_RUN" -eq 0 ]; then
+ mkdir -p "$dest_dir"
+ fi
+
+ # If dest exists, append numeric suffix before extension to avoid overwrite
+ if [ -e "$dest" ]; then
+ local base name ext candidate n
+ base="$(basename "$dest")"
+ name="${base%.*}"
+ ext="${base##*.}"
+ n=1
+
+ # Check if file has an extension
+ if [[ "$ext" == "$base" ]]; then
+ # No extension
+ candidate="$dest_dir/${name}_$n"
+ while [ -e "$candidate" ]; do
+ n=$((n+1))
+ candidate="$dest_dir/${name}_$n"
+ done
+ else
+ # Has extension
+ candidate="$dest_dir/${name}_$n.$ext"
+ while [ -e "$candidate" ]; do
+ n=$((n+1))
+ candidate="$dest_dir/${name}_$n.$ext"
+ done
+ fi
+ dest="$candidate"
+ fi
+
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would be moved to backup: ${rel}"
+ else
+ mv -- "$src" "$dest"
+ # Increment the counter because we successfully moved a file
+ MOVED_COUNT=$((MOVED_COUNT + 1))
+ log "Moved to backup: ${rel}"
+ fi
+}
+
+# Safe remove (used by --force). Respects --dry-run.
+safe_remove() {
+ local src="$1"
+ local rel="$2"
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would permanently remove: $rel"
+ else
+ rm -rf -- "$src"
+ DELETED_COUNT=$((DELETED_COUNT + 1))
+ log "Permanently removed: $rel"
+ fi
+}
+
+# Remove existing backup_* directories permanently (used when FORCE=1).
+remove_existing_backups_permanently() {
+ shopt -s nullglob
+ local found=0
+ for old in "$ROOT_DIR"/backup_*; do
+ if [ -d "$old" ]; then
+ found=1
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would permanently remove backup: $(basename "$old")"
+ else
+ rm -rf -- "$old"
+ DELETED_COUNT=$((DELETED_COUNT + 1))
+ log "Permanently removed backup: $(basename "$old")"
+ fi
+ fi
+ done
+ shopt -u nullglob
+ if [ "$found" -eq 0 ]; then
+ log "No existing backup directories to remove."
+ fi
+}
+
+# Perform action for a single path (file or directory) that is NOT preserved.
+# Moves to backup preserving tree, or removes permanently when FORCE=1.
+perform_action() {
+ local item="$1"
+ local rel_path="${item#$ROOT_DIR/}"
+
+ if [ "$FORCE" -eq 1 ]; then
+ safe_remove "$item" "$rel_path"
+ else
+ safe_move_to_backup "$item" "$rel_path"
+ fi
+}
+
+# --- RECURSIVE CLEANUP ---
+recursive_cleanup() {
+ local current_dir="$1"
+
+ # dotglob: includes hidden files (starting with .)
+ # nullglob: makes the loop not run if no files match (avoids literal string issues)
+ shopt -s dotglob nullglob
+
+ local item rel_path base
+
+ for item in "$current_dir"/*; do
+ # Sanity check: ensure file exists (handles rare race conditions or broken links)
+ [ ! -e "$item" ] && [ ! -L "$item" ] && continue
+
+ rel_path="${item#$ROOT_DIR/}"
+ base="$(basename "$item")"
+
+ # Skip . and .. (though glob usually excludes them, safety first)
+ if [[ "$base" == "." || "$base" == ".." ]]; then
+ continue
+ fi
+
+ # --- FILE or SYMLINK ---
+ if [ -f "$item" ] || [ -L "$item" ]; then
+ # 1. Preserve by global name anywhere?
+ if is_global_preserved "$base"; then
+ log "Preserving: $rel_path"
+ continue
+ fi
+
+ # 2. Preserve if at root and matches root-preserve list?
+ if [[ "$current_dir" == "$ROOT_DIR" ]] && is_root_preserved "$base"; then
+ log "Preserving: $rel_path"
+ continue
+ fi
+
+ # Otherwise: perform action
+ perform_action "$item"
+ continue
+ fi
+
+ # --- DIRECTORY ---
+ if [ -d "$item" ]; then
+ # Check if dir contains any preserved content anywhere below
+ if dir_contains_preserved_content "$item"; then
+ recursive_cleanup "$item"
+
+ # After recursion, if directory is empty, remove it (respecting dry-run)
+ if [ -z "$(ls -A "$item")" ]; then
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would remove empty directory: $rel_path"
+ else
+ rmdir -- "$item"
+ DELETED_COUNT=$((DELETED_COUNT + 1))
+ log "Removed empty directory: $rel_path"
+ fi
+ fi
+ else
+ # Directory contains no preserved files anywhere deep: remove/move entire directory
+ perform_action "$item"
+ fi
+ continue
+ fi
+ done
+
+ # Restore defaults for shopt to avoid side effects if function is reused
+ shopt -u dotglob nullglob
+}
+
+# --- MAIN ---
+
+# Parse flags
+while [[ $# -gt 0 ]]; do
+ case $1 in
+ --dry-run) DRY_RUN=1 ;;
+ --force) FORCE=1 ;;
+ *) echo "Unknown parameter: $1"; exit 1 ;;
+ esac
+ shift
+done
+
+# Initialize/Clear the log file for this new run
+: > "$LOG_FILE"
+
+# Confirmation prompt (skip when dry-run)
+if [ "$DRY_RUN" -eq 1 ]; then
+ log "Dry run enabled, nothing will be removed."
+else
+ read -p "This will delete all files except preserved ones. Proceed? [y/n]: " confirm
+ case "$confirm" in
+ [yY][eE][sS]|[yY]) ;;
+ *) log "Cleanup aborted."; exit 0 ;;
+ esac
+fi
+
+if [ "$FORCE" -eq 1 ] && [ "$DRY_RUN" -eq 1 ]; then
+ log "Ignoring --force."
+ FORCE=0
+fi
+
+log "Starting cleanup..."
+
+
+if [ "$FORCE" -eq 1 ]; then
+ remove_existing_backups_permanently
+fi
+
+recursive_cleanup "$ROOT_DIR"
+
+if [ "$DELETED_COUNT" -eq 1 ]; then
+ FILE_STR="file"
+ DIRECTORY_STR="directory"
+else
+ FILE_STR="files"
+ DIRECTORY_STR="directories"
+fi
+
+OUTPUT_STR=""
+
+# Output if FORCE
+if [ "$FORCE" -eq 1 ]; then
+ OUTPUT_STR="Deleted $DELETED_COUNT $FILE_STR or $DIRECTORY_STR. "
+fi
+
+if [ "$DRY_RUN" -eq 1 ]; then
+ OUTPUT_STR="Dry-run completed successfully."
+else
+ OUTPUT_STR="${OUTPUT_STR}Cleanup completed successfully."
+fi
+
+# Only append the backup message if we actually moved files (MOVED_COUNT > 0)
+if [ "$MOVED_COUNT" -gt 0 ]; then
+ # Correct wording
+ if [ "$MOVED_COUNT" -eq 1 ]; then
+ FILE_STR="file"
+ DIRECTORY_STR="directory"
+ else
+ FILE_STR="files"
+ DIRECTORY_STR="directories"
+ fi
+ OUTPUT_STR="$OUTPUT_STR Backed up $MOVED_COUNT $FILE_STR or $DIRECTORY_STR to '$BACKUP_DIR'."
+fi
+
+log "$OUTPUT_STR"
\ No newline at end of file
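The `recursive_cleanup` function above only descends into a directory when it still holds preserved files somewhere below. A minimal Python sketch of that containment check (an illustrative re-implementation, not part of the generated script):

```python
import os
import tempfile

# Filenames preserved anywhere in the tree, mirroring GLOBAL_PRESERVE_NAMES
# in clean.sh (an assumption for this sketch).
GLOBAL_PRESERVE_NAMES = {"run.sh", "adapter-config.json"}

def contains_preserved(root: str) -> bool:
    """Return True if any preserved filename occurs anywhere under root."""
    for _dirpath, _dirnames, filenames in os.walk(root):
        # Any overlap between this directory's files and the preserve set?
        if GLOBAL_PRESERVE_NAMES.intersection(filenames):
            return True
    return False
```

Unlike the bash version, which pipes `find` into `head -n 1` to stop early, `os.walk` here simply returns on the first hit; the observable behavior is the same.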
diff --git a/examples/multi-coupling/_reference/fluid-bottom-openfoam/adapter-config.json b/examples/multi-coupling/_reference/fluid-bottom-openfoam/adapter-config.json
new file mode 100644
index 0000000..6c084e6
--- /dev/null
+++ b/examples/multi-coupling/_reference/fluid-bottom-openfoam/adapter-config.json
@@ -0,0 +1,24 @@
+{
+ "participant_name": "Fluid-Bottom",
+ "precice_config_file_path": "../precice-config.xml",
+ "interfaces": [
+ {
+ "mesh_name": "Fluid-Bottom-Extensive-Mesh",
+ "patches": [
+ "surface-extensive"
+ ],
+ "read_data_names": [
+ "HeatTransfer-Bottom"
+ ]
+ },
+ {
+ "mesh_name": "Fluid-Bottom-Intensive-Mesh",
+ "patches": [
+ "surface-intensive"
+ ],
+ "write_data_names": [
+ "Temperature-Bottom"
+ ]
+ }
+ ]
+}
\ No newline at end of file
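The `adapter-config.json` files generated for each participant follow a small, regular shape. A hedged Python sanity check for that shape (an illustrative sketch, not an official preCICE schema):

```python
import json

def check_adapter_config(text: str) -> list:
    """Return a list of problems found in an adapter-config.json string.

    Checks only the fields visible in the generated examples: a participant
    name, a path to the shared precice-config.xml, and interfaces that each
    name a mesh, patches, and at least one read or write data list.
    """
    cfg = json.loads(text)
    problems = []
    for key in ("participant_name", "precice_config_file_path", "interfaces"):
        if key not in cfg:
            problems.append(f"missing top-level key: {key}")
    for i, itf in enumerate(cfg.get("interfaces", [])):
        if "mesh_name" not in itf:
            problems.append(f"interface {i}: missing mesh_name")
        if not itf.get("patches"):
            problems.append(f"interface {i}: empty or missing patches")
        if not (itf.get("read_data_names") or itf.get("write_data_names")):
            problems.append(f"interface {i}: no read or write data names")
    return problems
```

An empty returned list means the file matches the expected layout.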
diff --git a/examples/multi-coupling/_reference/fluid-bottom-openfoam/run.sh b/examples/multi-coupling/_reference/fluid-bottom-openfoam/run.sh
new file mode 100644
index 0000000..5fdeba8
--- /dev/null
+++ b/examples/multi-coupling/_reference/fluid-bottom-openfoam/run.sh
@@ -0,0 +1,21 @@
+#!/usr/bin/env bash
+
+#
+# run.sh script
+#
+# This is a template. You need to implement it yourself.
+#
+# If you are trying to launch a Python script, you need to add:
+#
+# python path/to/file.py
+#
+# Example: https://github.com/precice/tutorials/blob/develop/flow-over-heated-plate/solid-dunefem/run.sh
+#
+# Note: In the example `run.sh` file, most setup steps (such as creating and activating a virtual environment
+# or installing dependencies) have already been completed.
+# Therefore, you may only need to add:
+#
+# python path/to/file.py
+#
+
+set -e # Exit immediately if any command fails
\ No newline at end of file
diff --git a/examples/multi-coupling/_reference/fluid-top-openfoam/adapter-config.json b/examples/multi-coupling/_reference/fluid-top-openfoam/adapter-config.json
new file mode 100644
index 0000000..c3e6b61
--- /dev/null
+++ b/examples/multi-coupling/_reference/fluid-top-openfoam/adapter-config.json
@@ -0,0 +1,24 @@
+{
+ "participant_name": "Fluid-Top",
+ "precice_config_file_path": "../precice-config.xml",
+ "interfaces": [
+ {
+ "mesh_name": "Fluid-Top-Extensive-Mesh",
+ "patches": [
+ "interface"
+ ],
+ "read_data_names": [
+ "HeatTransfer-Top"
+ ]
+ },
+ {
+ "mesh_name": "Fluid-Top-Intensive-Mesh",
+ "patches": [
+ "surface"
+ ],
+ "write_data_names": [
+ "Temperature-Top"
+ ]
+ }
+ ]
+}
\ No newline at end of file
diff --git a/examples/multi-coupling/_reference/fluid-top-openfoam/run.sh b/examples/multi-coupling/_reference/fluid-top-openfoam/run.sh
new file mode 100644
index 0000000..5fdeba8
--- /dev/null
+++ b/examples/multi-coupling/_reference/fluid-top-openfoam/run.sh
@@ -0,0 +1,21 @@
+#!/usr/bin/env bash
+
+#
+# run.sh script
+#
+# This is a template. You need to implement it yourself.
+#
+# If you are trying to launch a Python script, you need to add:
+#
+# python path/to/file.py
+#
+# Example: https://github.com/precice/tutorials/blob/develop/flow-over-heated-plate/solid-dunefem/run.sh
+#
+# Note: In the example `run.sh` file, most setup steps (such as creating and activating a virtual environment
+# or installing dependencies) have already been completed.
+# Therefore, you may only need to add:
+#
+# python path/to/file.py
+#
+
+set -e # Exit immediately if any command fails
\ No newline at end of file
diff --git a/examples/multi-coupling/_reference/precice-config.xml b/examples/multi-coupling/_reference/precice-config.xml
new file mode 100644
index 0000000..ec147f3
--- /dev/null
+++ b/examples/multi-coupling/_reference/precice-config.xml
@@ -0,0 +1,137 @@
+<!-- (XML content not preserved in this excerpt) -->
diff --git a/examples/multi-coupling/_reference/solid-calculix/adapter-config.json b/examples/multi-coupling/_reference/solid-calculix/adapter-config.json
new file mode 100644
index 0000000..0fab59d
--- /dev/null
+++ b/examples/multi-coupling/_reference/solid-calculix/adapter-config.json
@@ -0,0 +1,42 @@
+{
+ "participant_name": "Solid",
+ "precice_config_file_path": "../precice-config.xml",
+ "interfaces": [
+ {
+ "mesh_name": "Solid-Fluid-Top-Extensive-Mesh",
+ "patches": [
+ "surface"
+ ],
+ "write_data_names": [
+ "HeatTransfer-Top"
+ ]
+ },
+ {
+ "mesh_name": "Solid-Fluid-Top-Intensive-Mesh",
+ "patches": [
+ "interface-intensive"
+ ],
+ "read_data_names": [
+ "Temperature-Top"
+ ]
+ },
+ {
+ "mesh_name": "Solid-Fluid-Bottom-Extensive-Mesh",
+ "patches": [
+ "interface-extensive"
+ ],
+ "write_data_names": [
+ "HeatTransfer-Bottom"
+ ]
+ },
+ {
+ "mesh_name": "Solid-Fluid-Bottom-Intensive-Mesh",
+ "patches": [
+ "interface-intensive"
+ ],
+ "read_data_names": [
+ "Temperature-Bottom"
+ ]
+ }
+ ]
+}
\ No newline at end of file
diff --git a/examples/multi-coupling/_reference/solid-calculix/run.sh b/examples/multi-coupling/_reference/solid-calculix/run.sh
new file mode 100644
index 0000000..5fdeba8
--- /dev/null
+++ b/examples/multi-coupling/_reference/solid-calculix/run.sh
@@ -0,0 +1,21 @@
+#!/usr/bin/env bash
+
+#
+# run.sh script
+#
+# This is a template. You need to implement it yourself.
+#
+# If you are trying to launch a Python script, you need to add:
+#
+# python path/to/file.py
+#
+# Example: https://github.com/precice/tutorials/blob/develop/flow-over-heated-plate/solid-dunefem/run.sh
+#
+# Note: In the example `run.sh` file, most setup steps (such as creating and activating a virtual environment
+# or installing dependencies) have already been completed.
+# Therefore, you may only need to add:
+#
+# python path/to/file.py
+#
+
+set -e # Exit immediately if any command fails
\ No newline at end of file
diff --git a/examples/5/topology.yaml b/examples/multi-coupling/topology.yaml
similarity index 91%
rename from examples/5/topology.yaml
rename to examples/multi-coupling/topology.yaml
index 50bd652..3286748 100644
--- a/examples/5/topology.yaml
+++ b/examples/multi-coupling/topology.yaml
@@ -1,8 +1,3 @@
-coupling-scheme:
- max-time: 1e-1
- time-window-size: 1e-3
- max-iterations: 30
-
participants:
- name: Fluid-Top
solver: OpenFOAM
diff --git a/examples/strong-coupling/_reference/README.md b/examples/strong-coupling/_reference/README.md
new file mode 100644
index 0000000..b915599
--- /dev/null
+++ b/examples/strong-coupling/_reference/README.md
@@ -0,0 +1,78 @@
+# Multiphysics Simulation Project
+
+> [!NOTE]
+> This `README.md` file was auto-generated by preCICE case-generate.
+
+
+---
+
+## Project Overview
+
+This project uses **preCICE** for a multiphysics simulation involving:
+
+- Solver `SU2` with participant `Fluid`
+- Solver `Calculix` with participant `Solid`
+
+### Project Structure
+
+The generated global files are `precice-config.xml`, `README.md`, and `clean.sh`. Additionally, for each participant, a folder containing an `adapter-config.json` and a `run.sh` file is created.
+The folder structure is as follows:
+
+```
+_generated/
+ ├── README.md # This file
+ ├── clean.sh # Clean up script
+ ├── fluid-su2/
+ │ ├── adapter-config.json
+ │ └── run.sh # Not yet implemented
+ ├── solid-calculix/
+ │ ├── adapter-config.json
+ │ └── run.sh # Not yet implemented
+ └── precice-config.xml # Global precice-config.xml file
+```
+
+
+- `precice-config.xml` is the global preCICE configuration file, which defines the participants, their coupling parameters, and their communication.
+- `adapter-config.json` is a configuration file that couples a solver with preCICE.
+- `run.sh` is a script meant to launch a participant. Since different solvers are started differently, it is left as a template for you to implement.
+- `clean.sh` moves any files in the current root directory that were not created by preCICE case-generate to a backup folder (or deletes them permanently with `--force`).
+
+Execution:
+
+```bash
+./clean.sh [--force] [--dry-run]
+```
+
+- `--force` permanently deletes unpreserved files and removes any existing backup folders
+- `--dry-run` deletes nothing; only prints the files that would be deleted
+
+---
+
+## Prerequisites
+
+Before running the simulation, ensure you have the following installed:
+
+- The preCICE coupling library
+- Solver `SU2` and its dependencies
+- Solver `Calculix` and its dependencies
+
+---
+
+## Running the Simulation
+
+### Quick Start
+
+```bash
+# Navigate to the `_generated` folder
+cd _generated/
+
+# Implement the run script of each participant
+
+# Make the run scripts executable
+chmod +x */run.sh
+
+# Execute the simulation: start each participant in its own terminal
+./fluid-su2/run.sh
+./solid-calculix/run.sh
+```
+
+---
+
+For more information, see the [preCICE documentation](https://precice.org/docs.html) and [precice-case-generate](https://github.com/precice/case-generate).
\ No newline at end of file
diff --git a/examples/strong-coupling/_reference/clean.sh b/examples/strong-coupling/_reference/clean.sh
new file mode 100644
index 0000000..2954b31
--- /dev/null
+++ b/examples/strong-coupling/_reference/clean.sh
@@ -0,0 +1,350 @@
+#!/usr/bin/env bash
+
+# -------------------------------------------------------------------
+# Script Name: clean.sh
+# Description: Recursively deletes/moves files/dirs except:
+# - Global preserved filenames anywhere (run.sh, adapter-config.json...)
+# - Specific root files (README.md, precice-config.xml)
+# Usage: ./clean.sh [--dry-run] [--force]
+# --dry-run : show what would happen, don't remove/move
+# --force : permanently delete unpreserved items AND remove existing backups
+# -------------------------------------------------------------------
+
+# Strict mode:
+# -e: exit on error
+# -u: exit on undefined variable
+# -o pipefail: exit if any command in a pipe fails
+set -euo pipefail
+
+# --- CONFIGURATION ---
+ROOT_DIR="$(pwd)"
+LOG_FILE="cleanup.log"
+BACKUP_DIR="$ROOT_DIR/backup_$(date '+%Y%m%d_%H%M%S')"
+
+# 1. GLOBAL PRESERVES: filenames to keep anywhere in the tree
+GLOBAL_PRESERVE_NAMES=(
+ "run.sh"
+ "adapter-config.json"
+)
+
+# 2. ROOT PRESERVES: filenames to keep only if in ROOT_DIR
+ROOT_PRESERVE_PATHS=(
+ "clean.sh"
+ "README.md"
+ "precice-config.xml"
+ "$LOG_FILE" # always keep the log (will be overwritten)
+)
+
+# --- DEFAULTS ---
+DRY_RUN=0 # No dry-run by default
+FORCE=0 # No permanent removal of files by default
+MOVED_COUNT=0 # Counter to track if we actually backed anything up
+DELETED_COUNT=0 # Counter to track if we actually deleted anything
+
+# --- HELPERS ---
+
+log() {
+ # We use tee -a (append) here so we don't overwrite previous lines *of this run*.
+ # The file is cleared once at the start of the MAIN block.
+ echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE"
+}
+
+# Is filename (basename) a global preserved name?
+is_global_preserved() {
+ local filename="$1"
+ for name in "${GLOBAL_PRESERVE_NAMES[@]}"; do
+ if [[ "$filename" == "$name" ]]; then
+ return 0
+ fi
+ done
+ return 1
+}
+
+# Is a path (relative to the root) in the root-preserve list?
+is_root_preserved() {
+ local relpath="$1"
+ for p in "${ROOT_PRESERVE_PATHS[@]}"; do
+ if [[ "$relpath" == "$p" ]]; then
+ return 0
+ fi
+ done
+ return 1
+}
+
+# Find out whether directory contains any preserved file *anywhere* under it.
+# Implementation Note: Uses `head -n 1` instead of `-quit` for maximum compatibility
+# across Linux distros (including Alpine/BusyBox), BSD, and macOS.
+dir_contains_preserved_content() {
+ local dir="$1"
+ local name
+ local -a find_args=()
+
+ # Build find arguments: ( -name A -o -name B ... )
+ for name in "${GLOBAL_PRESERVE_NAMES[@]}"; do
+ if [ ${#find_args[@]} -eq 0 ]; then
+ find_args+=( -name "$name" )
+ else
+ find_args+=( -o -name "$name" )
+ fi
+ done
+
+ # Check if find returns any match.
+ # piping to head -n 1 causes find to receive SIGPIPE and stop early if a match is found.
+ if [ -n "$(find "$dir" \( "${find_args[@]}" \) -print 2>/dev/null | head -n 1)" ]; then
+ return 0
+ fi
+ return 1
+}
+
+# Ensure backup destination exists and safely move source there preserving relative path.
+# Prevents collisions by making unique names if needed.
+# Args:
+# $1 = absolute src path
+# $2 = rel path relative to ROOT_DIR (used to recreate path inside backup)
+safe_move_to_backup() {
+ local src="$1"
+ local rel="$2"
+
+ # Destination path = $BACKUP_DIR/$rel
+ local dest="$BACKUP_DIR/$rel"
+ local dest_dir
+ dest_dir="$(dirname "$dest")"
+
+ # Normalize: if dirname is ".", use $BACKUP_DIR as target dir
+ if [[ "$dest_dir" == "$BACKUP_DIR/." ]] || [[ "$dest_dir" == "." ]]; then
+ dest_dir="$BACKUP_DIR"
+ fi
+
+ if [ "$DRY_RUN" -eq 0 ]; then
+ mkdir -p "$dest_dir"
+ fi
+
+ # If dest exists, append numeric suffix before extension to avoid overwrite
+ if [ -e "$dest" ]; then
+ local base name ext candidate n
+ base="$(basename "$dest")"
+ name="${base%.*}"
+ ext="${base##*.}"
+ n=1
+
+ # Check if file has an extension
+ if [[ "$ext" == "$base" ]]; then
+ # No extension
+ candidate="$dest_dir/${name}_$n"
+ while [ -e "$candidate" ]; do
+ n=$((n+1))
+ candidate="$dest_dir/${name}_$n"
+ done
+ else
+ # Has extension
+ candidate="$dest_dir/${name}_$n.$ext"
+ while [ -e "$candidate" ]; do
+ n=$((n+1))
+ candidate="$dest_dir/${name}_$n.$ext"
+ done
+ fi
+ dest="$candidate"
+ fi
+
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would be deleted: ${rel}"
+ else
+ mv -- "$src" "$dest"
+ # Increment the counter because we successfully moved a file
+ MOVED_COUNT=$((MOVED_COUNT + 1))
+ log "Deleted: ${rel}"
+ fi
+}
+
+# Safe remove (used by --force). Respects --dry-run.
+safe_remove() {
+ local src="$1"
+ local rel="$2"
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would permanently remove: $rel"
+ else
+ rm -rf -- "$src"
+ DELETED_COUNT=$((DELETED_COUNT + 1))
+ log "Permanently removed: $rel"
+ fi
+}
+
+# Remove existing backup_* directories permanently (used when FORCE=1).
+remove_existing_backups_permanently() {
+ shopt -s nullglob
+ local found=0
+ for old in "$ROOT_DIR"/backup_*; do
+ if [ -d "$old" ]; then
+ found=1
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would permanently remove backup: $(basename "$old")"
+ else
+ rm -rf -- "$old"
+ DELETED_COUNT=$((DELETED_COUNT + 1))
+ log "Permanently removed backup: $(basename "$old")"
+ fi
+ fi
+ done
+ shopt -u nullglob
+ if [ "$found" -eq 0 ]; then
+ log "No existing backup directories to remove."
+ fi
+}
+
+# Perform action for a single path (file or directory) that is NOT preserved.
+# Moves to backup preserving tree, or removes permanently when FORCE=1.
+perform_action() {
+ local item="$1"
+ local rel_path="${item#$ROOT_DIR/}"
+
+ if [ "$FORCE" -eq 1 ]; then
+ safe_remove "$item" "$rel_path"
+ else
+ safe_move_to_backup "$item" "$rel_path"
+ fi
+}
+
+# --- RECURSIVE CLEANUP ---
+recursive_cleanup() {
+ local current_dir="$1"
+
+ # dotglob: include hidden files (names starting with .)
+ # nullglob: expand to nothing when no files match (otherwise the glob stays a literal '*')
+ shopt -s dotglob nullglob
+
+ local item rel_path base
+
+ for item in "$current_dir"/*; do
+ # Sanity check: ensure file exists (handles rare race conditions or broken links)
+ [ ! -e "$item" ] && [ ! -L "$item" ] && continue
+
+ rel_path="${item#$ROOT_DIR/}"
+ base="$(basename "$item")"
+
+ # Skip . and .. (though glob usually excludes them, safety first)
+ if [[ "$base" == "." || "$base" == ".." ]]; then
+ continue
+ fi
+
+ # --- FILE or SYMLINK ---
+ if [ -f "$item" ] || [ -L "$item" ]; then
+ # 1. Preserve by global name anywhere?
+ if is_global_preserved "$base"; then
+ log "Preserving: $rel_path"
+ continue
+ fi
+
+ # 2. Preserve if at root and matches root-preserve list?
+ if [[ "$current_dir" == "$ROOT_DIR" ]] && is_root_preserved "$base"; then
+ log "Preserving: $rel_path"
+ continue
+ fi
+
+ # Otherwise: perform action
+ perform_action "$item"
+ continue
+ fi
+
+ # --- DIRECTORY ---
+ if [ -d "$item" ]; then
+ # Check if dir contains any preserved content anywhere below
+ if dir_contains_preserved_content "$item"; then
+ recursive_cleanup "$item"
+
+ # After recursion, if directory is empty, remove it (respecting dry-run)
+ if [ -z "$(ls -A "$item")" ]; then
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would remove empty directory: $rel_path"
+ else
+ rmdir -- "$item"
+ DELETED_COUNT=$((DELETED_COUNT + 1))
+ log "Removed empty directory: $rel_path"
+ fi
+ fi
+ else
+ # Directory contains no preserved files anywhere deep: remove/move entire directory
+ perform_action "$item"
+ fi
+ continue
+ fi
+ done
+
+ # Restore defaults for shopt to avoid side effects if function is reused
+ shopt -u dotglob nullglob
+}
+
+# --- MAIN ---
+
+# Parse flags
+while [[ $# -gt 0 ]]; do
+ case $1 in
+ --dry-run) DRY_RUN=1 ;;
+ --force) FORCE=1 ;;
+ *) echo "Unknown parameter: $1"; exit 1 ;;
+ esac
+ shift
+done
+
+# Initialize/Clear the log file for this new run
+: > "$LOG_FILE"
+
+# Confirmation prompt (skip when dry-run)
+if [ "$DRY_RUN" -eq 1 ]; then
+ log "Dry run enabled, nothing will be removed."
+else
+ read -p "This will delete all files except preserved ones. Proceed? [y/n]: " confirm
+ case "$confirm" in
+ [yY][eE][sS]|[yY]) ;;
+ *) log "Cleanup aborted."; exit 0 ;;
+ esac
+fi
+
+if [ "$FORCE" -eq 1 ] && [ "$DRY_RUN" -eq 1 ]; then
+ log "Ignoring --force."
+ FORCE=0
+fi
+
+log "Starting cleanup..."
+
+
+if [ "$FORCE" -eq 1 ]; then
+ remove_existing_backups_permanently
+fi
+
+recursive_cleanup "$ROOT_DIR"
+
+if [ "$DELETED_COUNT" -eq 1 ]; then
+ FILE_STR="file"
+ DIRECTORY_STR="directory"
+else
+ FILE_STR="files"
+ DIRECTORY_STR="directories"
+fi
+
+OUTPUT_STR=""
+
+# Output if FORCE
+if [ "$FORCE" -eq 1 ]; then
+ OUTPUT_STR="Deleted $DELETED_COUNT $FILE_STR or $DIRECTORY_STR. "
+fi
+
+if [ "$DRY_RUN" -eq 1 ]; then
+ OUTPUT_STR="Dry-run completed successfully."
+else
+ OUTPUT_STR="${OUTPUT_STR}Cleanup completed successfully."
+fi
+
+# Only append the backup message if we actually moved files (MOVED_COUNT > 0)
+if [ "$MOVED_COUNT" -gt 0 ]; then
+ # Correct wording
+ if [ "$MOVED_COUNT" -eq 1 ]; then
+ FILE_STR="file"
+ DIRECTORY_STR="directory"
+ else
+ FILE_STR="files"
+ DIRECTORY_STR="directories"
+ fi
+ OUTPUT_STR="$OUTPUT_STR Backed up $MOVED_COUNT $FILE_STR or $DIRECTORY_STR to '$BACKUP_DIR'."
+fi
+
+log "$OUTPUT_STR"
\ No newline at end of file
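The `safe_move_to_backup` function in the script above avoids overwriting files already in the backup by appending a numeric suffix before the extension. A Python sketch of that naming rule (an illustrative assumption that mirrors the bash, not shipped code; `exists` is injected so the logic can be tested without touching the filesystem):

```python
import os

def unique_dest(dest: str, exists) -> str:
    """Return dest, or dest with a _1, _2, ... suffix, so exists() is False."""
    if not exists(dest):
        return dest
    dest_dir, base = os.path.split(dest)
    # rpartition splits on the LAST dot, like ${base%.*} / ${base##*.} in bash
    name, dot, ext = base.rpartition(".")
    n = 1
    while True:
        if dot:  # has an extension: report_1.txt, report_2.txt, ...
            candidate = os.path.join(dest_dir, f"{name}_{n}.{ext}")
        else:    # no extension: Makefile_1, Makefile_2, ...
            candidate = os.path.join(dest_dir, f"{base}_{n}")
        if not exists(candidate):
            return candidate
        n += 1
```

As in the bash version, a hidden file like `.gitignore` is treated as "name plus extension", yielding `_1.gitignore`.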
diff --git a/examples/strong-coupling/_reference/fluid-su2/adapter-config.json b/examples/strong-coupling/_reference/fluid-su2/adapter-config.json
new file mode 100644
index 0000000..a4bc88e
--- /dev/null
+++ b/examples/strong-coupling/_reference/fluid-su2/adapter-config.json
@@ -0,0 +1,24 @@
+{
+ "participant_name": "Fluid",
+ "precice_config_file_path": "../precice-config.xml",
+ "interfaces": [
+ {
+ "mesh_name": "Fluid-Extensive-Mesh",
+ "patches": [
+ "interface-extensive"
+ ],
+ "write_data_names": [
+ "Force"
+ ]
+ },
+ {
+ "mesh_name": "Fluid-Intensive-Mesh",
+ "patches": [
+ "interface-intensive"
+ ],
+ "read_data_names": [
+ "Displacement"
+ ]
+ }
+ ]
+}
\ No newline at end of file
diff --git a/examples/strong-coupling/_reference/fluid-su2/run.sh b/examples/strong-coupling/_reference/fluid-su2/run.sh
new file mode 100644
index 0000000..5fdeba8
--- /dev/null
+++ b/examples/strong-coupling/_reference/fluid-su2/run.sh
@@ -0,0 +1,21 @@
+#!/usr/bin/env bash
+
+#
+# run.sh script
+#
+# This is a template. You need to implement it yourself.
+#
+# If you are trying to launch a Python script, you need to add:
+#
+# python path/to/file.py
+#
+# Example: https://github.com/precice/tutorials/blob/develop/flow-over-heated-plate/solid-dunefem/run.sh
+#
+# Note: In the example `run.sh` file, most setup steps (such as creating and activating a virtual environment
+# or installing dependencies) have already been completed.
+# Therefore, you may only need to add:
+#
+# python path/to/file.py
+#
+
+set -e # Exit immediately if any command fails
\ No newline at end of file
diff --git a/examples/strong-coupling/_reference/precice-config.xml b/examples/strong-coupling/_reference/precice-config.xml
new file mode 100644
index 0000000..131bb0c
--- /dev/null
+++ b/examples/strong-coupling/_reference/precice-config.xml
@@ -0,0 +1,67 @@
+<!-- (XML content not preserved in this excerpt) -->
diff --git a/examples/strong-coupling/_reference/solid-calculix/adapter-config.json b/examples/strong-coupling/_reference/solid-calculix/adapter-config.json
new file mode 100644
index 0000000..d6404fe
--- /dev/null
+++ b/examples/strong-coupling/_reference/solid-calculix/adapter-config.json
@@ -0,0 +1,24 @@
+{
+ "participant_name": "Solid",
+ "precice_config_file_path": "../precice-config.xml",
+ "interfaces": [
+ {
+ "mesh_name": "Solid-Extensive-Mesh",
+ "patches": [
+ "surface-extensive"
+ ],
+ "read_data_names": [
+ "Force"
+ ]
+ },
+ {
+ "mesh_name": "Solid-Intensive-Mesh",
+ "patches": [
+ "surface-intensive"
+ ],
+ "write_data_names": [
+ "Displacement"
+ ]
+ }
+ ]
+}
\ No newline at end of file
diff --git a/examples/strong-coupling/_reference/solid-calculix/run.sh b/examples/strong-coupling/_reference/solid-calculix/run.sh
new file mode 100644
index 0000000..5fdeba8
--- /dev/null
+++ b/examples/strong-coupling/_reference/solid-calculix/run.sh
@@ -0,0 +1,21 @@
+#!/usr/bin/env bash
+
+#
+# run.sh script
+#
+# This is a template. You need to implement it yourself.
+#
+# If you are trying to launch a Python script, you need to add:
+#
+# python path/to/file.py
+#
+# Example: https://github.com/precice/tutorials/blob/develop/flow-over-heated-plate/solid-dunefem/run.sh
+#
+# Note: In the example `run.sh` file, most setup steps (such as creating and activating a virtual environment
+# or installing dependencies) have already been completed.
+# Therefore, you may only need to add:
+#
+# python path/to/file.py
+#
+
+set -e # Exit immediately if any command fails
\ No newline at end of file
diff --git a/examples/1/topology.yaml b/examples/strong-coupling/topology.yaml
similarity index 86%
rename from examples/1/topology.yaml
rename to examples/strong-coupling/topology.yaml
index a6519a7..4689ccb 100644
--- a/examples/1/topology.yaml
+++ b/examples/strong-coupling/topology.yaml
@@ -1,5 +1,3 @@
-coupling-scheme:
- display_standard_values: true
participants:
- name: Fluid
solver: SU2
diff --git a/examples/tutorial1/_reference/README.md b/examples/tutorial1/_reference/README.md
new file mode 100644
index 0000000..5662797
--- /dev/null
+++ b/examples/tutorial1/_reference/README.md
@@ -0,0 +1,78 @@
+# Multiphysics Simulation Project
+
+> [!NOTE]
+> This `README.md` file was auto-generated by preCICE case-generate.
+
+
+---
+
+## Project Overview
+
+This project uses **preCICE** for a multiphysics simulation involving:
+
+- Solver `ASolver` with participant `Generator`
+- Solver `BSolver` with participant `Propagator`
+
+### Project Structure
+
+The generated global files are `precice-config.xml`, `README.md`, and `clean.sh`. Additionally, for each participant, a folder containing an `adapter-config.json` and a `run.sh` file is created.
+The folder structure is as follows:
+
+```
+_generated/
+ ├── README.md # This file
+ ├── clean.sh # Clean up script
+ ├── generator-asolver/
+ │ ├── adapter-config.json
+ │ └── run.sh # Not yet implemented
+ ├── propagator-bsolver/
+ │ ├── adapter-config.json
+ │ └── run.sh # Not yet implemented
+ └── precice-config.xml # Global precice-config.xml file
+```
+
+
+- `precice-config.xml` is the global preCICE configuration file, which defines the participants, their coupling parameters, and their communication.
+- `adapter-config.json` is a configuration file that couples a solver with preCICE.
+- `run.sh` is a script meant to launch a participant. Since different solvers are started differently, it is left as a template for you to implement.
+- `clean.sh` moves any files in the current root directory that were not created by preCICE case-generate to a backup folder (or deletes them permanently with `--force`).
+
+Execution:
+
+```bash
+./clean.sh [--force] [--dry-run]
+```
+
+- `--force` permanently deletes unpreserved files and removes any existing backup folders
+- `--dry-run` deletes nothing; only prints the files that would be deleted
+
+---
+
+## Prerequisites
+
+Before running the simulation, ensure you have the following installed:
+
+- The preCICE coupling library
+- Solver `ASolver` and its dependencies
+- Solver `BSolver` and its dependencies
+
+---
+
+## Running the Simulation
+
+### Quick Start
+
+```bash
+# Navigate to the `_generated` folder
+cd _generated/
+
+# Implement the run script of each participant
+
+# Make the run scripts executable
+chmod +x */run.sh
+
+# Execute the simulation: start each participant in its own terminal
+./generator-asolver/run.sh
+./propagator-bsolver/run.sh
+```
+
+---
+
+For more information, see the [preCICE documentation](https://precice.org/docs.html) and [precice-case-generate](https://github.com/precice/case-generate).
\ No newline at end of file
diff --git a/examples/tutorial1/_reference/clean.sh b/examples/tutorial1/_reference/clean.sh
new file mode 100644
index 0000000..2954b31
--- /dev/null
+++ b/examples/tutorial1/_reference/clean.sh
@@ -0,0 +1,350 @@
+#!/usr/bin/env bash
+
+# -------------------------------------------------------------------
+# Script Name: clean.sh
+# Description: Recursively deletes/moves files/dirs except:
+# - Global preserved filenames anywhere (run.sh, adapter-config.json...)
+# - Specific root files (README.md, precice-config.xml)
+# Usage: ./clean.sh [--dry-run] [--force]
+# --dry-run : show what would happen, don't remove/move
+# --force : permanently delete unpreserved items AND remove existing backups
+# -------------------------------------------------------------------
+
+# Strict mode:
+# -e: exit on error
+# -u: exit on undefined variable
+# -o pipefail: exit if any command in a pipe fails
+set -euo pipefail
+
+# --- CONFIGURATION ---
+ROOT_DIR="$(pwd)"
+LOG_FILE="cleanup.log"
+BACKUP_DIR="$ROOT_DIR/backup_$(date '+%Y%m%d_%H%M%S')"
+
+# 1. GLOBAL PRESERVES: filenames to keep anywhere in the tree
+GLOBAL_PRESERVE_NAMES=(
+ "run.sh"
+ "adapter-config.json"
+)
+
+# 2. ROOT PRESERVES: filenames to keep only if in ROOT_DIR
+ROOT_PRESERVE_PATHS=(
+ "clean.sh"
+ "README.md"
+ "precice-config.xml"
+ "$LOG_FILE" # always keep the log (will be overwritten)
+)
+
+# --- DEFAULTS ---
+DRY_RUN=0 # No dry-run by default
+FORCE=0 # No permanent removal of files by default
+MOVED_COUNT=0 # Counter to track if we actually backed anything up
+DELETED_COUNT=0 # Counter to track if we actually deleted anything
+
+# --- HELPERS ---
+
+log() {
+ # We use tee -a (append) here so we don't overwrite previous lines *of this run*.
+ # The file is cleared once at the start of the MAIN block.
+ echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE"
+}
+
+# Is filename (basename) a global preserved name?
+is_global_preserved() {
+ local filename="$1"
+ for name in "${GLOBAL_PRESERVE_NAMES[@]}"; do
+ if [[ "$filename" == "$name" ]]; then
+ return 0
+ fi
+ done
+ return 1
+}
+
+# Is a path (relative to the root) in the root-preserve list?
+is_root_preserved() {
+ local relpath="$1"
+ for p in "${ROOT_PRESERVE_PATHS[@]}"; do
+ if [[ "$relpath" == "$p" ]]; then
+ return 0
+ fi
+ done
+ return 1
+}
+
+# Find out whether directory contains any preserved file *anywhere* under it.
+# Implementation Note: Uses `head -n 1` instead of `-quit` for maximum compatibility
+# across Linux distros (including Alpine/BusyBox), BSD, and macOS.
+dir_contains_preserved_content() {
+ local dir="$1"
+ local name
+ local -a find_args=()
+
+ # Build find arguments: ( -name A -o -name B ... )
+ for name in "${GLOBAL_PRESERVE_NAMES[@]}"; do
+ if [ ${#find_args[@]} -eq 0 ]; then
+ find_args+=( -name "$name" )
+ else
+ find_args+=( -o -name "$name" )
+ fi
+ done
+
+ # Check if find returns any match.
+ # piping to head -n 1 causes find to receive SIGPIPE and stop early if a match is found.
+ if [ -n "$(find "$dir" \( "${find_args[@]}" \) -print 2>/dev/null | head -n 1)" ]; then
+ return 0
+ fi
+ return 1
+}
+
+# Ensure backup destination exists and safely move source there preserving relative path.
+# Prevents collisions by making unique names if needed.
+# Args:
+# $1 = absolute src path
+# $2 = rel path relative to ROOT_DIR (used to recreate path inside backup)
+safe_move_to_backup() {
+ local src="$1"
+ local rel="$2"
+
+ # Destination path = $BACKUP_DIR/$rel
+ local dest="$BACKUP_DIR/$rel"
+ local dest_dir
+ dest_dir="$(dirname "$dest")"
+
+ # Normalize: if dirname is ".", use $BACKUP_DIR as target dir
+ if [[ "$dest_dir" == "$BACKUP_DIR/." ]] || [[ "$dest_dir" == "." ]]; then
+ dest_dir="$BACKUP_DIR"
+ fi
+
+ if [ "$DRY_RUN" -eq 0 ]; then
+ mkdir -p "$dest_dir"
+ fi
+
+ # If dest exists, append numeric suffix before extension to avoid overwrite
+ if [ -e "$dest" ]; then
+ local base name ext candidate n
+ base="$(basename "$dest")"
+ name="${base%.*}"
+ ext="${base##*.}"
+ n=1
+
+ # Check if file has an extension
+ if [[ "$ext" == "$base" ]]; then
+ # No extension
+ candidate="$dest_dir/${name}_$n"
+ while [ -e "$candidate" ]; do
+ n=$((n+1))
+ candidate="$dest_dir/${name}_$n"
+ done
+ else
+ # Has extension
+ candidate="$dest_dir/${name}_$n.$ext"
+ while [ -e "$candidate" ]; do
+ n=$((n+1))
+ candidate="$dest_dir/${name}_$n.$ext"
+ done
+ fi
+ dest="$candidate"
+ fi
+
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would be deleted: ${rel}"
+ else
+ mv -- "$src" "$dest"
+ # Increment the counter because we successfully moved a file
+ MOVED_COUNT=$((MOVED_COUNT + 1))
+ log "Deleted: ${rel}"
+ fi
+}
+
+# Safe remove (used by --force). Respects --dry-run.
+safe_remove() {
+ local src="$1"
+ local rel="$2"
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would permanently remove: $rel"
+ else
+ rm -rf -- "$src"
+ DELETED_COUNT=$((DELETED_COUNT + 1))
+ log "Permanently removed: $rel"
+ fi
+}
+
+# Remove existing backup_* directories permanently (used when FORCE=1).
+remove_existing_backups_permanently() {
+ shopt -s nullglob
+ local found=0
+ for old in "$ROOT_DIR"/backup_*; do
+ if [ -d "$old" ]; then
+ found=1
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would permanently remove backup: $(basename "$old")"
+ else
+ rm -rf -- "$old"
+ DELETED_COUNT=$((DELETED_COUNT + 1))
+ log "Permanently removed backup: $(basename "$old")"
+ fi
+ fi
+ done
+ shopt -u nullglob
+ if [ "$found" -eq 0 ]; then
+ log "No existing backup directories to remove."
+ fi
+}
+
+# Perform action for a single path (file or directory) that is NOT preserved.
+# Moves to backup preserving tree, or removes permanently when FORCE=1.
+perform_action() {
+ local item="$1"
+ local rel_path="${item#$ROOT_DIR/}"
+
+ if [ "$FORCE" -eq 1 ]; then
+ safe_remove "$item" "$rel_path"
+ else
+ safe_move_to_backup "$item" "$rel_path"
+ fi
+}
+
+# --- RECURSIVE CLEANUP ---
+recursive_cleanup() {
+ local current_dir="$1"
+
+ # dotglob: includes hidden files (starting with .)
+ # nullglob: makes the loop not run if no files match (avoids literal string issues)
+ shopt -s dotglob nullglob
+
+ local item rel_path base
+
+ for item in "$current_dir"/*; do
+ # Sanity check: ensure file exists (handles rare race conditions or broken links)
+ [ ! -e "$item" ] && [ ! -L "$item" ] && continue
+
+ rel_path="${item#$ROOT_DIR/}"
+ base="$(basename "$item")"
+
+ # Skip . and .. (though glob usually excludes them, safety first)
+ if [[ "$base" == "." || "$base" == ".." ]]; then
+ continue
+ fi
+
+ # --- FILE or SYMLINK ---
+ if [ -f "$item" ] || [ -L "$item" ]; then
+ # 1. Preserve by global name anywhere?
+ if is_global_preserved "$base"; then
+ log "Preserving: $rel_path"
+ continue
+ fi
+
+ # 2. Preserve if at root and matches root-preserve list?
+ if [[ "$current_dir" == "$ROOT_DIR" ]] && is_root_preserved "$base"; then
+ log "Preserving: $rel_path"
+ continue
+ fi
+
+ # Otherwise: perform action
+ perform_action "$item"
+ continue
+ fi
+
+ # --- DIRECTORY ---
+ if [ -d "$item" ]; then
+ # Check if dir contains any preserved content anywhere below
+ if dir_contains_preserved_content "$item"; then
+ recursive_cleanup "$item"
+
+ # After recursion, if directory is empty, remove it (respecting dry-run)
+ if [ -z "$(ls -A "$item")" ]; then
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would remove empty directory: $rel_path"
+ else
+ rmdir -- "$item"
+ DELETED_COUNT=$((DELETED_COUNT + 1))
+ log "Removed empty directory: $rel_path"
+ fi
+ fi
+ else
+ # Directory contains no preserved files anywhere deep: remove/move entire directory
+ perform_action "$item"
+ fi
+ continue
+ fi
+ done
+
+ # Restore defaults for shopt to avoid side effects if function is reused
+ shopt -u dotglob nullglob
+}
+
+# --- MAIN ---
+
+# Parse flags
+while [[ $# -gt 0 ]]; do
+ case $1 in
+ --dry-run) DRY_RUN=1 ;;
+ --force) FORCE=1 ;;
+ *) echo "Unknown parameter: $1"; exit 1 ;;
+ esac
+ shift
+done
+
+# Initialize/Clear the log file for this new run
+: > "$LOG_FILE"
+
+# Confirmation prompt (skip when dry-run)
+if [ "$DRY_RUN" -eq 1 ]; then
+ log "Dry run enabled, nothing will be removed."
+else
+ read -p "This will delete all files except preserved ones. Proceed? [y/n]: " confirm
+ case "$confirm" in
+ [yY][eE][sS]|[yY]) ;;
+ *) log "Cleanup aborted."; exit 0 ;;
+ esac
+fi
+
+if [ "$FORCE" -eq 1 ] && [ "$DRY_RUN" -eq 1 ]; then
+ log "Ignoring --force."
+ FORCE=0
+fi
+
+log "Starting cleanup..."
+
+
+if [ "$FORCE" -eq 1 ]; then
+ remove_existing_backups_permanently
+fi
+
+recursive_cleanup "$ROOT_DIR"
+
+if [ "$DELETED_COUNT" -eq 1 ]; then
+ FILE_STR="file"
+ DIRECTORY_STR="directory"
+else
+ FILE_STR="files"
+ DIRECTORY_STR="directories"
+fi
+
+OUTPUT_STR=""
+
+# Output if FORCE
+if [ "$FORCE" -eq 1 ]; then
+ OUTPUT_STR="Deleted $DELETED_COUNT $FILE_STR or $DIRECTORY_STR. "
+fi
+
+if [ "$DRY_RUN" -eq 1 ]; then
+ OUTPUT_STR="Dry-run completed successfully."
+else
+ OUTPUT_STR="${OUTPUT_STR}Cleanup completed successfully."
+fi
+
+# Only append the backup message if we actually moved files (MOVED_COUNT > 0)
+if [ "$MOVED_COUNT" -gt 0 ]; then
+ # Correct wording
+ if [ "$MOVED_COUNT" -eq 1 ]; then
+ FILE_STR="file"
+ DIRECTORY_STR="directory"
+ else
+ FILE_STR="files"
+ DIRECTORY_STR="directories"
+ fi
+ OUTPUT_STR="$OUTPUT_STR Backed up $MOVED_COUNT deleted $FILE_STR or $DIRECTORY_STR in '$BACKUP_DIR'."
+fi
+
+log "$OUTPUT_STR"
\ No newline at end of file
diff --git a/examples/tutorial1/_reference/generator-asolver/adapter-config.json b/examples/tutorial1/_reference/generator-asolver/adapter-config.json
new file mode 100644
index 0000000..fadc0b1
--- /dev/null
+++ b/examples/tutorial1/_reference/generator-asolver/adapter-config.json
@@ -0,0 +1,15 @@
+{
+ "participant_name": "Generator",
+ "precice_config_file_path": "../precice-config.xml",
+ "interfaces": [
+ {
+ "mesh_name": "Generator-Mesh",
+ "patches": [
+ "interface"
+ ],
+ "write_data_names": [
+ "Color"
+ ]
+ }
+ ]
+}
\ No newline at end of file
diff --git a/examples/tutorial1/_reference/generator-asolver/run.sh b/examples/tutorial1/_reference/generator-asolver/run.sh
new file mode 100644
index 0000000..5fdeba8
--- /dev/null
+++ b/examples/tutorial1/_reference/generator-asolver/run.sh
@@ -0,0 +1,21 @@
+#!/usr/bin/env bash
+
+#
+# run.sh script
+#
+# This is a template. You need to implement it yourself.
+#
+# If you are launching a Python script, you need to add:
+#
+# python path/to/file.py
+#
+# Example: https://github.com/precice/tutorials/blob/develop/flow-over-heated-plate/solid-dunefem/run.sh
+#
+# Note: In the example `run.sh` file, most setup steps (such as creating and activating a virtual environment
+# or installing dependencies) have already been completed.
+# Therefore, you may only need to add:
+#
+# python path/to/file.py
+#
+
+set -e # Exit immediately if any command fails
\ No newline at end of file
diff --git a/examples/tutorial1/_reference/precice-config.xml b/examples/tutorial1/_reference/precice-config.xml
new file mode 100644
index 0000000..daf4753
--- /dev/null
+++ b/examples/tutorial1/_reference/precice-config.xml
@@ -0,0 +1,41 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/examples/tutorial1/_reference/propagator-bsolver/adapter-config.json b/examples/tutorial1/_reference/propagator-bsolver/adapter-config.json
new file mode 100644
index 0000000..d892d87
--- /dev/null
+++ b/examples/tutorial1/_reference/propagator-bsolver/adapter-config.json
@@ -0,0 +1,15 @@
+{
+ "participant_name": "Propagator",
+ "precice_config_file_path": "../precice-config.xml",
+ "interfaces": [
+ {
+ "mesh_name": "Propagator-Mesh",
+ "patches": [
+ "interface"
+ ],
+ "read_data_names": [
+ "Color"
+ ]
+ }
+ ]
+}
\ No newline at end of file
diff --git a/examples/tutorial1/_reference/propagator-bsolver/run.sh b/examples/tutorial1/_reference/propagator-bsolver/run.sh
new file mode 100644
index 0000000..5fdeba8
--- /dev/null
+++ b/examples/tutorial1/_reference/propagator-bsolver/run.sh
@@ -0,0 +1,21 @@
+#!/usr/bin/env bash
+
+#
+# run.sh script
+#
+# This is a template. You need to implement it yourself.
+#
+# If you are launching a Python script, you need to add:
+#
+# python path/to/file.py
+#
+# Example: https://github.com/precice/tutorials/blob/develop/flow-over-heated-plate/solid-dunefem/run.sh
+#
+# Note: In the example `run.sh` file, most setup steps (such as creating and activating a virtual environment
+# or installing dependencies) have already been completed.
+# Therefore, you may only need to add:
+#
+# python path/to/file.py
+#
+
+set -e # Exit immediately if any command fails
\ No newline at end of file
diff --git a/examples/tutorial1/topology.yaml b/examples/tutorial1/topology.yaml
new file mode 100644
index 0000000..62ddb59
--- /dev/null
+++ b/examples/tutorial1/topology.yaml
@@ -0,0 +1,15 @@
+participants:
+ - name: Generator
+ solver: ASolver
+ dimensionality: 2
+ - name: Propagator
+ solver: BSolver
+ dimensionality: 2
+exchanges:
+ - from: Generator
+ to: Propagator
+ from-patch: interface
+ to-patch: interface
+ data: Color
+ type: weak
+ data-type: scalar
\ No newline at end of file
diff --git a/examples/tutorial2/_reference/README.md b/examples/tutorial2/_reference/README.md
new file mode 100644
index 0000000..891ce73
--- /dev/null
+++ b/examples/tutorial2/_reference/README.md
@@ -0,0 +1,83 @@
+# Multiphysics Simulation Project
+
+> [!NOTE]
+> This `README.md` file was auto-generated by preCICE case-generate.
+
+
+---
+
+## Project Overview
+
+This project uses **preCICE** for a multiphysics simulation involving:
+
+- Solver `ASolver` with participant `Generator-Left`
+- Solver `ASolver` with participant `Generator-Right`
+- Solver `BSolver` with participant `Propagator`
+
+### Project Structure
+
+The generated global files are `precice-config.xml`, `README.md`, and `clean.sh`. Additionally, for each participant, a folder containing an `adapter-config.json` and a `run.sh` file is created.
+The folder structure is as follows:
+
+```
+_generated/
+ ├── README.md # This file
+ ├── clean.sh # Clean up script
+    ├── generator-left-asolver/
+ │ ├── adapter-config.json
+ │ └── run.sh # Not yet implemented
+    ├── generator-right-asolver/
+ │ ├── adapter-config.json
+ │ └── run.sh # Not yet implemented
+    ├── propagator-bsolver/
+ │ ├── adapter-config.json
+ │ └── run.sh # Not yet implemented
+ └── precice-config.xml # Global precice-config.xml file
+```
+
+
+- `precice-config.xml` is the global preCICE configuration file, which defines the parameters and communication of all participants.
+- `adapter-config.json` is a configuration file that couples a solver with preCICE.
+- `run.sh` is a script meant to execute a participant. Since different solvers are launched differently, this script is not implemented yet.
+- `clean.sh` moves any files in the current root directory that were not created by preCICE case-generate into a backup folder (or permanently deletes them when run with `--force`).
+
+Execution:
+
+```bash
+./clean.sh [--force] [--dry-run]
+```
+
+- `--force` Permanently deletes the files instead of backing them up, and removes any existing backup folders
+- `--dry-run` Does not delete any files, but prints the files that would be deleted
+
+---
+
+## Prerequisites
+
+Before running the simulation, ensure you have the following installed:
+
+- The preCICE coupling library
+- Solver `ASolver` and its dependencies
+- Solver `ASolver` and its dependencies
+- Solver `BSolver` and its dependencies
+
+---
+
+## Running the Simulation
+
+### Quick Start
+
+```bash
+# Navigate to a participant folder inside `_generated`
+cd _generated/<participant-folder>/
+
+# Implement the run script, then make it executable
+chmod +x run.sh
+
+# Execute the participant
+./run.sh
+```
+
+---
+
+For more information, see the [preCICE documentation](https://precice.org/docs.html) and [precice-case-generate](https://github.com/precice/case-generate).
\ No newline at end of file
diff --git a/examples/tutorial2/_reference/clean.sh b/examples/tutorial2/_reference/clean.sh
new file mode 100644
index 0000000..2954b31
--- /dev/null
+++ b/examples/tutorial2/_reference/clean.sh
@@ -0,0 +1,350 @@
+#!/usr/bin/env bash
+
+# -------------------------------------------------------------------
+# Script Name: clean.sh
+# Description: Recursively deletes/moves files/dirs except:
+# - Global preserved filenames anywhere (run.sh, adapter-config.json...)
+# - Specific root files (README.md, precice-config.xml)
+# Usage: ./clean.sh [--dry-run] [--force]
+# --dry-run : show what would happen, don't remove/move
+# --force : permanently delete unpreserved items AND remove existing backups
+# -------------------------------------------------------------------
+
+# Strict mode:
+# -e: exit on error
+# -u: exit on undefined variable
+# -o pipefail: exit if any command in a pipe fails
+set -euo pipefail
+
+# --- CONFIGURATION ---
+ROOT_DIR="$(pwd)"
+LOG_FILE="cleanup.log"
+BACKUP_DIR="$ROOT_DIR/backup_$(date '+%Y%m%d_%H%M%S')"
+
+# 1. GLOBAL PRESERVES: filenames to keep anywhere in the tree
+GLOBAL_PRESERVE_NAMES=(
+ "run.sh"
+ "adapter-config.json"
+)
+
+# 2. ROOT PRESERVES: filenames to keep only if in ROOT_DIR
+ROOT_PRESERVE_PATHS=(
+ "clean.sh"
+ "README.md"
+ "precice-config.xml"
+ "$LOG_FILE" # always keep the log (will be overwritten)
+)
+
+# --- DEFAULTS ---
+DRY_RUN=0 # No dry-run by default
+FORCE=0 # No permanent removal of files by default
+MOVED_COUNT=0 # Counter to track if we actually backed anything up
+DELETED_COUNT=0 # Counter to track if we actually deleted anything
+
+# --- HELPERS ---
+
+log() {
+ # We use tee -a (append) here so we don't overwrite previous lines *of this run*.
+ # The file is cleared once at the start of the MAIN block.
+ echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE"
+}
+
+# Is filename (basename) a global preserved name?
+is_global_preserved() {
+ local filename="$1"
+ for name in "${GLOBAL_PRESERVE_NAMES[@]}"; do
+ if [[ "$filename" == "$name" ]]; then
+ return 0
+ fi
+ done
+ return 1
+}
+
+# Is a relative path preserved at root?
+is_root_preserved() {
+ local relpath="$1"
+ for p in "${ROOT_PRESERVE_PATHS[@]}"; do
+ if [[ "$relpath" == "$p" ]]; then
+ return 0
+ fi
+ done
+ return 1
+}
+
+# Find out whether directory contains any preserved file *anywhere* under it.
+# Implementation Note: Uses `head -n 1` instead of `-quit` for maximum compatibility
+# across all Linux distros (including Alpine/BusyBox) and BSD/macOS.
+dir_contains_preserved_content() {
+ local dir="$1"
+ local name
+ local -a find_args=()
+
+ # Build find arguments: ( -name A -o -name B ... )
+ for name in "${GLOBAL_PRESERVE_NAMES[@]}"; do
+ if [ ${#find_args[@]} -eq 0 ]; then
+ find_args+=( -name "$name" )
+ else
+ find_args+=( -o -name "$name" )
+ fi
+ done
+
+ # Check if find returns any match.
+ # piping to head -n 1 causes find to receive SIGPIPE and stop early if a match is found.
+ if [ -n "$(find "$dir" \( "${find_args[@]}" \) -print 2>/dev/null | head -n 1)" ]; then
+ return 0
+ fi
+ return 1
+}
+
+# Ensure backup destination exists and safely move source there preserving relative path.
+# Prevents collisions by making unique names if needed.
+# Args:
+# $1 = absolute src path
+# $2 = rel path relative to ROOT_DIR (used to recreate path inside backup)
+safe_move_to_backup() {
+ local src="$1"
+ local rel="$2"
+
+ # Destination path = $BACKUP_DIR/$rel
+ local dest="$BACKUP_DIR/$rel"
+ local dest_dir
+ dest_dir="$(dirname "$dest")"
+
+ # Normalize: if dirname is ".", use $BACKUP_DIR as target dir
+ if [[ "$dest_dir" == "$BACKUP_DIR/." ]] || [[ "$dest_dir" == "." ]]; then
+ dest_dir="$BACKUP_DIR"
+ fi
+
+ if [ "$DRY_RUN" -eq 0 ]; then
+ mkdir -p "$dest_dir"
+ fi
+
+ # If dest exists, append numeric suffix before extension to avoid overwrite
+ if [ -e "$dest" ]; then
+ local base name ext candidate n
+ base="$(basename "$dest")"
+ name="${base%.*}"
+ ext="${base##*.}"
+ n=1
+
+ # Check if file has an extension
+ if [[ "$ext" == "$base" ]]; then
+ # No extension
+ candidate="$dest_dir/${name}_$n"
+ while [ -e "$candidate" ]; do
+ n=$((n+1))
+ candidate="$dest_dir/${name}_$n"
+ done
+ else
+ # Has extension
+ candidate="$dest_dir/${name}_$n.$ext"
+ while [ -e "$candidate" ]; do
+ n=$((n+1))
+ candidate="$dest_dir/${name}_$n.$ext"
+ done
+ fi
+ dest="$candidate"
+ fi
+
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would be deleted: ${rel}"
+ else
+ mv -- "$src" "$dest"
+ # Increment the counter because we successfully moved a file
+ MOVED_COUNT=$((MOVED_COUNT + 1))
+ log "Deleted: ${rel}"
+ fi
+}
+
+# Safe remove (used by --force). Respects --dry-run.
+safe_remove() {
+ local src="$1"
+ local rel="$2"
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would permanently remove: $rel"
+ else
+ rm -rf -- "$src"
+ DELETED_COUNT=$((DELETED_COUNT + 1))
+ log "Permanently removed: $rel"
+ fi
+}
+
+# Remove existing backup_* directories permanently (used when FORCE=1).
+remove_existing_backups_permanently() {
+ shopt -s nullglob
+ local found=0
+ for old in "$ROOT_DIR"/backup_*; do
+ if [ -d "$old" ]; then
+ found=1
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would permanently remove backup: $(basename "$old")"
+ else
+ rm -rf -- "$old"
+ DELETED_COUNT=$((DELETED_COUNT + 1))
+ log "Permanently removed backup: $(basename "$old")"
+ fi
+ fi
+ done
+ shopt -u nullglob
+ if [ "$found" -eq 0 ]; then
+ log "No existing backup directories to remove."
+ fi
+}
+
+# Perform action for a single path (file or directory) that is NOT preserved.
+# Moves to backup preserving tree, or removes permanently when FORCE=1.
+perform_action() {
+ local item="$1"
+ local rel_path="${item#$ROOT_DIR/}"
+
+ if [ "$FORCE" -eq 1 ]; then
+ safe_remove "$item" "$rel_path"
+ else
+ safe_move_to_backup "$item" "$rel_path"
+ fi
+}
+
+# --- RECURSIVE CLEANUP ---
+recursive_cleanup() {
+ local current_dir="$1"
+
+ # dotglob: includes hidden files (starting with .)
+ # nullglob: makes the loop not run if no files match (avoids literal string issues)
+ shopt -s dotglob nullglob
+
+ local item rel_path base
+
+ for item in "$current_dir"/*; do
+ # Sanity check: ensure file exists (handles rare race conditions or broken links)
+ [ ! -e "$item" ] && [ ! -L "$item" ] && continue
+
+ rel_path="${item#$ROOT_DIR/}"
+ base="$(basename "$item")"
+
+ # Skip . and .. (though glob usually excludes them, safety first)
+ if [[ "$base" == "." || "$base" == ".." ]]; then
+ continue
+ fi
+
+ # --- FILE or SYMLINK ---
+ if [ -f "$item" ] || [ -L "$item" ]; then
+ # 1. Preserve by global name anywhere?
+ if is_global_preserved "$base"; then
+ log "Preserving: $rel_path"
+ continue
+ fi
+
+ # 2. Preserve if at root and matches root-preserve list?
+ if [[ "$current_dir" == "$ROOT_DIR" ]] && is_root_preserved "$base"; then
+ log "Preserving: $rel_path"
+ continue
+ fi
+
+ # Otherwise: perform action
+ perform_action "$item"
+ continue
+ fi
+
+ # --- DIRECTORY ---
+ if [ -d "$item" ]; then
+ # Check if dir contains any preserved content anywhere below
+ if dir_contains_preserved_content "$item"; then
+ recursive_cleanup "$item"
+
+ # After recursion, if directory is empty, remove it (respecting dry-run)
+ if [ -z "$(ls -A "$item")" ]; then
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would remove empty directory: $rel_path"
+ else
+ rmdir -- "$item"
+ DELETED_COUNT=$((DELETED_COUNT + 1))
+ log "Removed empty directory: $rel_path"
+ fi
+ fi
+ else
+ # Directory contains no preserved files anywhere deep: remove/move entire directory
+ perform_action "$item"
+ fi
+ continue
+ fi
+ done
+
+ # Restore defaults for shopt to avoid side effects if function is reused
+ shopt -u dotglob nullglob
+}
+
+# --- MAIN ---
+
+# Parse flags
+while [[ $# -gt 0 ]]; do
+ case $1 in
+ --dry-run) DRY_RUN=1 ;;
+ --force) FORCE=1 ;;
+ *) echo "Unknown parameter: $1"; exit 1 ;;
+ esac
+ shift
+done
+
+# Initialize/Clear the log file for this new run
+: > "$LOG_FILE"
+
+# Confirmation prompt (skip when dry-run)
+if [ "$DRY_RUN" -eq 1 ]; then
+ log "Dry run enabled, nothing will be removed."
+else
+ read -p "This will delete all files except preserved ones. Proceed? [y/n]: " confirm
+ case "$confirm" in
+ [yY][eE][sS]|[yY]) ;;
+ *) log "Cleanup aborted."; exit 0 ;;
+ esac
+fi
+
+if [ "$FORCE" -eq 1 ] && [ "$DRY_RUN" -eq 1 ]; then
+ log "Ignoring --force."
+ FORCE=0
+fi
+
+log "Starting cleanup..."
+
+
+if [ "$FORCE" -eq 1 ]; then
+ remove_existing_backups_permanently
+fi
+
+recursive_cleanup "$ROOT_DIR"
+
+if [ "$DELETED_COUNT" -eq 1 ]; then
+ FILE_STR="file"
+ DIRECTORY_STR="directory"
+else
+ FILE_STR="files"
+ DIRECTORY_STR="directories"
+fi
+
+OUTPUT_STR=""
+
+# Output if FORCE
+if [ "$FORCE" -eq 1 ]; then
+ OUTPUT_STR="Deleted $DELETED_COUNT $FILE_STR or $DIRECTORY_STR. "
+fi
+
+if [ "$DRY_RUN" -eq 1 ]; then
+ OUTPUT_STR="Dry-run completed successfully."
+else
+ OUTPUT_STR="${OUTPUT_STR}Cleanup completed successfully."
+fi
+
+# Only append the backup message if we actually moved files (MOVED_COUNT > 0)
+if [ "$MOVED_COUNT" -gt 0 ]; then
+ # Correct wording
+ if [ "$MOVED_COUNT" -eq 1 ]; then
+ FILE_STR="file"
+ DIRECTORY_STR="directory"
+ else
+ FILE_STR="files"
+ DIRECTORY_STR="directories"
+ fi
+ OUTPUT_STR="$OUTPUT_STR Backed up $MOVED_COUNT deleted $FILE_STR or $DIRECTORY_STR in '$BACKUP_DIR'."
+fi
+
+log "$OUTPUT_STR"
\ No newline at end of file
diff --git a/examples/tutorial2/_reference/generator-left-asolver/adapter-config.json b/examples/tutorial2/_reference/generator-left-asolver/adapter-config.json
new file mode 100644
index 0000000..30e6217
--- /dev/null
+++ b/examples/tutorial2/_reference/generator-left-asolver/adapter-config.json
@@ -0,0 +1,15 @@
+{
+ "participant_name": "Generator-Left",
+ "precice_config_file_path": "../precice-config.xml",
+ "interfaces": [
+ {
+ "mesh_name": "Generator-Left-Mesh",
+ "patches": [
+ "interface"
+ ],
+ "write_data_names": [
+ "Color"
+ ]
+ }
+ ]
+}
\ No newline at end of file
diff --git a/examples/tutorial2/_reference/generator-left-asolver/run.sh b/examples/tutorial2/_reference/generator-left-asolver/run.sh
new file mode 100644
index 0000000..5fdeba8
--- /dev/null
+++ b/examples/tutorial2/_reference/generator-left-asolver/run.sh
@@ -0,0 +1,21 @@
+#!/usr/bin/env bash
+
+#
+# run.sh script
+#
+# This is a template. You need to implement it yourself.
+#
+# If you are launching a Python script, you need to add:
+#
+# python path/to/file.py
+#
+# Example: https://github.com/precice/tutorials/blob/develop/flow-over-heated-plate/solid-dunefem/run.sh
+#
+# Note: In the example `run.sh` file, most setup steps (such as creating and activating a virtual environment
+# or installing dependencies) have already been completed.
+# Therefore, you may only need to add:
+#
+# python path/to/file.py
+#
+
+set -e # Exit immediately if any command fails
\ No newline at end of file
diff --git a/examples/tutorial2/_reference/generator-right-asolver/adapter-config.json b/examples/tutorial2/_reference/generator-right-asolver/adapter-config.json
new file mode 100644
index 0000000..7d09b06
--- /dev/null
+++ b/examples/tutorial2/_reference/generator-right-asolver/adapter-config.json
@@ -0,0 +1,15 @@
+{
+ "participant_name": "Generator-Right",
+ "precice_config_file_path": "../precice-config.xml",
+ "interfaces": [
+ {
+ "mesh_name": "Generator-Right-Mesh",
+ "patches": [
+ "interface"
+ ],
+ "write_data_names": [
+ "Color"
+ ]
+ }
+ ]
+}
\ No newline at end of file
diff --git a/examples/tutorial2/_reference/generator-right-asolver/run.sh b/examples/tutorial2/_reference/generator-right-asolver/run.sh
new file mode 100644
index 0000000..5fdeba8
--- /dev/null
+++ b/examples/tutorial2/_reference/generator-right-asolver/run.sh
@@ -0,0 +1,21 @@
+#!/usr/bin/env bash
+
+#
+# run.sh script
+#
+# This is a template. You need to implement it yourself.
+#
+# If you are launching a Python script, you need to add:
+#
+# python path/to/file.py
+#
+# Example: https://github.com/precice/tutorials/blob/develop/flow-over-heated-plate/solid-dunefem/run.sh
+#
+# Note: In the example `run.sh` file, most setup steps (such as creating and activating a virtual environment
+# or installing dependencies) have already been completed.
+# Therefore, you may only need to add:
+#
+# python path/to/file.py
+#
+
+set -e # Exit immediately if any command fails
\ No newline at end of file
diff --git a/examples/tutorial2/_reference/precice-config.xml b/examples/tutorial2/_reference/precice-config.xml
new file mode 100644
index 0000000..7843a41
--- /dev/null
+++ b/examples/tutorial2/_reference/precice-config.xml
@@ -0,0 +1,70 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/examples/tutorial2/_reference/propagator-bsolver/adapter-config.json b/examples/tutorial2/_reference/propagator-bsolver/adapter-config.json
new file mode 100644
index 0000000..e5efc3a
--- /dev/null
+++ b/examples/tutorial2/_reference/propagator-bsolver/adapter-config.json
@@ -0,0 +1,24 @@
+{
+ "participant_name": "Propagator",
+ "precice_config_file_path": "../precice-config.xml",
+ "interfaces": [
+ {
+ "mesh_name": "Propagator-Generator-Left-Mesh",
+ "patches": [
+ "interface"
+ ],
+ "read_data_names": [
+ "Color"
+ ]
+ },
+ {
+ "mesh_name": "Propagator-Generator-Right-Mesh",
+ "patches": [
+ "interface"
+ ],
+ "read_data_names": [
+ "Color"
+ ]
+ }
+ ]
+}
\ No newline at end of file
diff --git a/examples/tutorial2/_reference/propagator-bsolver/run.sh b/examples/tutorial2/_reference/propagator-bsolver/run.sh
new file mode 100644
index 0000000..5fdeba8
--- /dev/null
+++ b/examples/tutorial2/_reference/propagator-bsolver/run.sh
@@ -0,0 +1,21 @@
+#!/usr/bin/env bash
+
+#
+# run.sh script
+#
+# This is a template. You need to implement it yourself.
+#
+# If you are launching a Python script, you need to add:
+#
+# python path/to/file.py
+#
+# Example: https://github.com/precice/tutorials/blob/develop/flow-over-heated-plate/solid-dunefem/run.sh
+#
+# Note: In the example `run.sh` file, most setup steps (such as creating and activating a virtual environment
+# or installing dependencies) have already been completed.
+# Therefore, you may only need to add:
+#
+# python path/to/file.py
+#
+
+set -e # Exit immediately if any command fails
\ No newline at end of file
diff --git a/examples/tutorial2/topology.yaml b/examples/tutorial2/topology.yaml
new file mode 100644
index 0000000..76933df
--- /dev/null
+++ b/examples/tutorial2/topology.yaml
@@ -0,0 +1,25 @@
+participants:
+ - name: Generator-Left
+ solver: ASolver
+ dimensionality: 2
+ - name: Generator-Right
+ solver: ASolver
+ dimensionality: 2
+ - name: Propagator
+ solver: BSolver
+ dimensionality: 2
+exchanges:
+ - from: Generator-Left
+ to: Propagator
+ from-patch: interface
+ to-patch: interface
+ data: Color
+ type: weak
+ data-type: scalar
+ - from: Generator-Right
+ to: Propagator
+ from-patch: interface
+ to-patch: interface
+ data: Color
+ type: weak
+ data-type: scalar
\ No newline at end of file
diff --git a/examples/tutorial3/_reference/README.md b/examples/tutorial3/_reference/README.md
new file mode 100644
index 0000000..65463c3
--- /dev/null
+++ b/examples/tutorial3/_reference/README.md
@@ -0,0 +1,78 @@
+# Multiphysics Simulation Project
+
+> [!NOTE]
+> This `README.md` file was auto-generated by preCICE case-generate.
+
+
+---
+
+## Project Overview
+
+This project uses **preCICE** for a multiphysics simulation involving:
+
+- Solver `OpenFOAM` with participant `Fluid`
+- Solver `Nutils` with participant `Solid`
+
+### Project Structure
+
+The generated global files are `precice-config.xml`, `README.md`, and `clean.sh`. Additionally, for each participant, a folder containing an `adapter-config.json` and a `run.sh` file is created.
+The folder structure is as follows:
+
+```
+_generated/
+ ├── README.md # This file
+ ├── clean.sh # Clean up script
+    ├── fluid-openfoam/
+ │ ├── adapter-config.json
+ │ └── run.sh # Not yet implemented
+    ├── solid-nutils/
+ │ ├── adapter-config.json
+ │ └── run.sh # Not yet implemented
+ └── precice-config.xml # Global precice-config.xml file
+```
+
+
+- `precice-config.xml` is the global preCICE configuration file, which defines the parameters and communication of all participants.
+- `adapter-config.json` is a configuration file that couples a solver with preCICE.
+- `run.sh` is a script meant to execute a participant. Since different solvers are launched differently, this script is not implemented yet.
+- `clean.sh` moves any files in the current root directory that were not created by preCICE case-generate into a backup folder (or permanently deletes them when run with `--force`).
+
+Execution:
+
+```bash
+./clean.sh [--force] [--dry-run]
+```
+
+- `--force` Permanently deletes the files instead of backing them up, and removes any existing backup folders
+- `--dry-run` Does not delete any files, but prints the files that would be deleted
+
+---
+
+## Prerequisites
+
+Before running the simulation, ensure you have the following installed:
+
+- The preCICE coupling library
+- Solver `OpenFOAM` and its dependencies
+- Solver `Nutils` and its dependencies
+
+---
+
+## Running the Simulation
+
+### Quick Start
+
+```bash
+# Navigate to a participant folder inside `_generated`
+cd _generated/<participant-folder>/
+
+# Implement the run script, then make it executable
+chmod +x run.sh
+
+# Execute the participant
+./run.sh
+```
+
+---
+
+For more information, see the [preCICE documentation](https://precice.org/docs.html) and [precice-case-generate](https://github.com/precice/case-generate).
\ No newline at end of file
diff --git a/examples/tutorial3/_reference/clean.sh b/examples/tutorial3/_reference/clean.sh
new file mode 100644
index 0000000..2954b31
--- /dev/null
+++ b/examples/tutorial3/_reference/clean.sh
@@ -0,0 +1,350 @@
+#!/usr/bin/env bash
+
+# -------------------------------------------------------------------
+# Script Name: clean.sh
+# Description: Recursively deletes/moves files/dirs except:
+# - Global preserved filenames anywhere (run.sh, adapter-config.json...)
+# - Specific root files (README.md, precice-config.xml)
+# Usage: ./clean.sh [--dry-run] [--force]
+# --dry-run : show what would happen, don't remove/move
+# --force : permanently delete unpreserved items AND remove existing backups
+# -------------------------------------------------------------------
+
+# Strict mode:
+# -e: exit on error
+# -u: exit on undefined variable
+# -o pipefail: exit if any command in a pipe fails
+set -euo pipefail
+
+# --- CONFIGURATION ---
+ROOT_DIR="$(pwd)"
+LOG_FILE="cleanup.log"
+BACKUP_DIR="$ROOT_DIR/backup_$(date '+%Y%m%d_%H%M%S')"
+
+# 1. GLOBAL PRESERVES: filenames to keep anywhere in the tree
+GLOBAL_PRESERVE_NAMES=(
+ "run.sh"
+ "adapter-config.json"
+)
+
+# 2. ROOT PRESERVES: filenames to keep only if in ROOT_DIR
+ROOT_PRESERVE_PATHS=(
+ "clean.sh"
+ "README.md"
+ "precice-config.xml"
+ "$LOG_FILE" # always keep the log (will be overwritten)
+)
+
+# --- DEFAULTS ---
+DRY_RUN=0 # No dry-run by default
+FORCE=0 # No permanent removal of files by default
+MOVED_COUNT=0 # Counter to track if we actually backed anything up
+DELETED_COUNT=0 # Counter to track if we actually deleted anything
+
+# --- HELPERS ---
+
+log() {
+ # We use tee -a (append) here so we don't overwrite previous lines *of this run*.
+ # The file is cleared once at the start of the MAIN block.
+ echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE"
+}
+
+# Is filename (basename) a global preserved name?
+is_global_preserved() {
+ local filename="$1"
+ for name in "${GLOBAL_PRESERVE_NAMES[@]}"; do
+ if [[ "$filename" == "$name" ]]; then
+ return 0
+ fi
+ done
+ return 1
+}
+
+# Is a relative path preserved at root?
+is_root_preserved() {
+ local relpath="$1"
+ for p in "${ROOT_PRESERVE_PATHS[@]}"; do
+ if [[ "$relpath" == "$p" ]]; then
+ return 0
+ fi
+ done
+ return 1
+}
+
+# Find out whether directory contains any preserved file *anywhere* under it.
+# Implementation Note: Uses `head -n 1` instead of `-quit` for maximum compatibility
+# across all Linux distros (including Alpine/BusyBox) and BSD/macOS.
+dir_contains_preserved_content() {
+ local dir="$1"
+ local name
+ local -a find_args=()
+
+ # Build find arguments: ( -name A -o -name B ... )
+ for name in "${GLOBAL_PRESERVE_NAMES[@]}"; do
+ if [ ${#find_args[@]} -eq 0 ]; then
+ find_args+=( -name "$name" )
+ else
+ find_args+=( -o -name "$name" )
+ fi
+ done
+
+ # Check if find returns any match.
+ # piping to head -n 1 causes find to receive SIGPIPE and stop early if a match is found.
+ if [ -n "$(find "$dir" \( "${find_args[@]}" \) -print 2>/dev/null | head -n 1)" ]; then
+ return 0
+ fi
+ return 1
+}
+
+# Ensure backup destination exists and safely move source there preserving relative path.
+# Prevents collisions by making unique names if needed.
+# Args:
+# $1 = absolute src path
+# $2 = rel path relative to ROOT_DIR (used to recreate path inside backup)
+safe_move_to_backup() {
+ local src="$1"
+ local rel="$2"
+
+ # Destination path = $BACKUP_DIR/$rel
+ local dest="$BACKUP_DIR/$rel"
+ local dest_dir
+ dest_dir="$(dirname "$dest")"
+
+ # Normalize: if dirname is ".", use $BACKUP_DIR as target dir
+ if [[ "$dest_dir" == "$BACKUP_DIR/." ]] || [[ "$dest_dir" == "." ]]; then
+ dest_dir="$BACKUP_DIR"
+ fi
+
+ if [ "$DRY_RUN" -eq 0 ]; then
+ mkdir -p "$dest_dir"
+ fi
+
+ # If dest exists, append numeric suffix before extension to avoid overwrite
+ if [ -e "$dest" ]; then
+ local base name ext candidate n
+ base="$(basename "$dest")"
+ name="${base%.*}"
+ ext="${base##*.}"
+ n=1
+
+ # Check if file has an extension
+ if [[ "$ext" == "$base" ]]; then
+ # No extension
+ candidate="$dest_dir/${name}_$n"
+ while [ -e "$candidate" ]; do
+ n=$((n+1))
+ candidate="$dest_dir/${name}_$n"
+ done
+ else
+ # Has extension
+ candidate="$dest_dir/${name}_$n.$ext"
+ while [ -e "$candidate" ]; do
+ n=$((n+1))
+ candidate="$dest_dir/${name}_$n.$ext"
+ done
+ fi
+ dest="$candidate"
+ fi
+
+  if [ "$DRY_RUN" -eq 1 ]; then
+    log "Would move to backup: ${rel}"
+  else
+    mv -- "$src" "$dest"
+    # Increment the counter because we successfully moved a file
+    MOVED_COUNT=$((MOVED_COUNT + 1))
+    log "Moved to backup: ${rel}"
+  fi
+}
+
+# Safe remove (used by --force). Respects --dry-run.
+safe_remove() {
+ local src="$1"
+ local rel="$2"
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would permanently remove: $rel"
+ else
+ rm -rf -- "$src"
+ DELETED_COUNT=$((DELETED_COUNT + 1))
+ log "Permanently removed: $rel"
+ fi
+}
+
+# Remove existing backup_* directories permanently (used when FORCE=1).
+remove_existing_backups_permanently() {
+ shopt -s nullglob
+ local found=0
+ for old in "$ROOT_DIR"/backup_*; do
+ if [ -d "$old" ]; then
+ found=1
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would permanently remove backup: $(basename "$old")"
+ else
+ rm -rf -- "$old"
+ DELETED_COUNT=$((DELETED_COUNT + 1))
+ log "Permanently removed backup: $(basename "$old")"
+ fi
+ fi
+ done
+ shopt -u nullglob
+ if [ "$found" -eq 0 ]; then
+ log "No existing backup directories to remove."
+ fi
+}
+
+# Perform action for a single path (file or directory) that is NOT preserved.
+# Moves to backup preserving tree, or removes permanently when FORCE=1.
+perform_action() {
+ local item="$1"
+ local rel_path="${item#$ROOT_DIR/}"
+
+ if [ "$FORCE" -eq 1 ]; then
+ safe_remove "$item" "$rel_path"
+ else
+ safe_move_to_backup "$item" "$rel_path"
+ fi
+}
+
+# --- RECURSIVE CLEANUP ---
+recursive_cleanup() {
+ local current_dir="$1"
+
+ # dotglob: includes hidden files (starting with .)
+ # nullglob: makes the loop not run if no files match (avoids literal string issues)
+ shopt -s dotglob nullglob
+
+ local item rel_path base
+
+ for item in "$current_dir"/*; do
+ # Sanity check: ensure file exists (handles rare race conditions or broken links)
+ [ ! -e "$item" ] && [ ! -L "$item" ] && continue
+
+ rel_path="${item#$ROOT_DIR/}"
+ base="$(basename "$item")"
+
+ # Skip . and .. (though glob usually excludes them, safety first)
+ if [[ "$base" == "." || "$base" == ".." ]]; then
+ continue
+ fi
+
+ # --- FILE or SYMLINK ---
+ if [ -f "$item" ] || [ -L "$item" ]; then
+ # 1. Preserve by global name anywhere?
+ if is_global_preserved "$base"; then
+ log "Preserving: $rel_path"
+ continue
+ fi
+
+ # 2. Preserve if at root and matches root-preserve list?
+ if [[ "$current_dir" == "$ROOT_DIR" ]] && is_root_preserved "$base"; then
+ log "Preserving: $rel_path"
+ continue
+ fi
+
+ # Otherwise: perform action
+ perform_action "$item"
+ continue
+ fi
+
+ # --- DIRECTORY ---
+ if [ -d "$item" ]; then
+ # Check if dir contains any preserved content anywhere below
+ if dir_contains_preserved_content "$item"; then
+ recursive_cleanup "$item"
+
+ # After recursion, if directory is empty, remove it (respecting dry-run)
+ if [ -z "$(ls -A "$item")" ]; then
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would remove empty directory: $rel_path"
+ else
+ rmdir -- "$item"
+ DELETED_COUNT=$((DELETED_COUNT + 1))
+ log "Removed empty directory: $rel_path"
+ fi
+ fi
+ else
+ # Directory contains no preserved files anywhere deep: remove/move entire directory
+ perform_action "$item"
+ fi
+ continue
+ fi
+ done
+
+ # Restore defaults for shopt to avoid side effects if function is reused
+ shopt -u dotglob nullglob
+}
+
+# --- MAIN ---
+
+# Parse flags
+while [[ $# -gt 0 ]]; do
+ case $1 in
+ --dry-run) DRY_RUN=1 ;;
+ --force) FORCE=1 ;;
+ *) echo "Unknown parameter: $1"; exit 1 ;;
+ esac
+ shift
+done
+
+# Initialize/Clear the log file for this new run
+: > "$LOG_FILE"
+
+# Confirmation prompt (skip when dry-run)
+if [ "$DRY_RUN" -eq 1 ]; then
+ log "Dry run enabled, nothing will be removed."
+else
+ read -p "This will delete all files except preserved ones. Proceed? [y/n]: " confirm
+ case "$confirm" in
+ [yY][eE][sS]|[yY]) ;;
+ *) log "Cleanup aborted."; exit 0 ;;
+ esac
+fi
+
+if [ "$FORCE" -eq 1 ] && [ "$DRY_RUN" -eq 1 ]; then
+  log "Ignoring --force because --dry-run is set."
+  FORCE=0
+fi
+
+log "Starting cleanup..."
+
+
+if [ "$FORCE" -eq 1 ]; then
+ remove_existing_backups_permanently
+fi
+
+recursive_cleanup "$ROOT_DIR"
+
+if [ "$DELETED_COUNT" -eq 1 ]; then
+ FILE_STR="file"
+ DIRECTORY_STR="directory"
+else
+ FILE_STR="files"
+ DIRECTORY_STR="directories"
+fi
+
+OUTPUT_STR=""
+
+# Output if FORCE
+if [ "$FORCE" -eq 1 ]; then
+ OUTPUT_STR="Deleted $DELETED_COUNT $FILE_STR or $DIRECTORY_STR. "
+fi
+
+if [ "$DRY_RUN" -eq 1 ]; then
+ OUTPUT_STR="Dry-run completed successfully."
+else
+ OUTPUT_STR="${OUTPUT_STR}Cleanup completed successfully."
+fi
+
+# Only append the backup message if we actually moved files (MOVED_COUNT > 0)
+if [ "$MOVED_COUNT" -gt 0 ]; then
+ # Correct wording
+ if [ "$MOVED_COUNT" -eq 1 ]; then
+ FILE_STR="file"
+ DIRECTORY_STR="directory"
+ else
+ FILE_STR="files"
+ DIRECTORY_STR="directories"
+ fi
+  OUTPUT_STR="$OUTPUT_STR Backed up $MOVED_COUNT $FILE_STR or $DIRECTORY_STR to '$BACKUP_DIR'."
+fi
+
+log "$OUTPUT_STR"
\ No newline at end of file
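Inside `safe_move_to_backup` above, name collisions in the backup folder are resolved by appending a numeric suffix before the extension (`foo.txt` → `foo_1.txt`, `foo_2.txt`, …; extensionless `foo` → `foo_1`). The naming scheme can be sketched in Python as (hypothetical helper, not part of the generated script):

```python
import itertools

def next_backup_name(base: str, existing: set) -> str:
    """First collision-free name, mirroring the name_1.ext / name_2.ext
    scheme that safe_move_to_backup uses inside the backup folder."""
    if base not in existing:
        return base
    if "." in base:
        # Insert the suffix before the last extension: foo.txt -> foo_1.txt
        name, ext = base.rsplit(".", 1)
        candidates = (f"{name}_{n}.{ext}" for n in itertools.count(1))
    else:
        # No extension: foo -> foo_1
        candidates = (f"{base}_{n}" for n in itertools.count(1))
    return next(c for c in candidates if c not in existing)
```

Like the shell version, this scans upward from `_1` until a free name is found, so repeated moves of identically named files produce `foo_1.txt`, `foo_2.txt`, and so on.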
diff --git a/examples/3/fluid-openfoam/adapter-config.json b/examples/tutorial3/_reference/fluid-openfoam/adapter-config.json
similarity index 71%
rename from examples/3/fluid-openfoam/adapter-config.json
rename to examples/tutorial3/_reference/fluid-openfoam/adapter-config.json
index 5f53c03..29ebb53 100644
--- a/examples/3/fluid-openfoam/adapter-config.json
+++ b/examples/tutorial3/_reference/fluid-openfoam/adapter-config.json
@@ -1,18 +1,18 @@
{
"participant_name": "Fluid",
- "precice_config_file_name": "../precice-config.xml",
+ "precice_config_file_path": "../precice-config.xml",
"interfaces": [
{
"mesh_name": "Fluid-Mesh",
"patches": [
- "surface"
+ "interface"
+ ],
+ "read_data_names": [
+ "Heat-Flux"
],
"write_data_names": [
"Temperature"
- ],
- "read_data_names": [
- "HeatTransfer"
]
}
]
-}
+}
\ No newline at end of file
diff --git a/examples/tutorial3/_reference/fluid-openfoam/run.sh b/examples/tutorial3/_reference/fluid-openfoam/run.sh
new file mode 100644
index 0000000..5fdeba8
--- /dev/null
+++ b/examples/tutorial3/_reference/fluid-openfoam/run.sh
@@ -0,0 +1,21 @@
+#!/usr/bin/env bash
+
+#
+# run.sh script
+#
+# This is a template. You need to implement it yourself.
+#
+# If you are trying to launch a Python script, you need to add:
+#
+# python path/to/file.py
+#
+# Example: https://github.com/precice/tutorials/blob/develop/flow-over-heated-plate/solid-dunefem/run.sh
+#
+# Note: In the example `run.sh` file, most setup steps (such as creating and activating a virtual environment
+# or installing dependencies) have already been completed.
+# Therefore, you may only need to add:
+#
+# python path/to/file.py
+#
+
+set -e # Exit immediately if any command fails
\ No newline at end of file
diff --git a/examples/3/precice-config.xml b/examples/tutorial3/_reference/precice-config.xml
similarity index 52%
rename from examples/3/precice-config.xml
rename to examples/tutorial3/_reference/precice-config.xml
index 216b048..dbb49dc 100644
--- a/examples/3/precice-config.xml
+++ b/examples/tutorial3/_reference/precice-config.xml
@@ -1,25 +1,27 @@
+
+
+
+
-
+
-
+
-
+
-
-
+
-
-
-
-
+
-
-
-
-
-
-
-
-
+
+
+
-
-
-
-
-
-
-
-
-
-
+
+
+
+
+
+
+
+
+
+
diff --git a/examples/expert/5/solid-calculix/adapter-config.json b/examples/tutorial3/_reference/solid-nutils/adapter-config.json
similarity index 77%
rename from examples/expert/5/solid-calculix/adapter-config.json
rename to examples/tutorial3/_reference/solid-nutils/adapter-config.json
index abafcc0..6906213 100644
--- a/examples/expert/5/solid-calculix/adapter-config.json
+++ b/examples/tutorial3/_reference/solid-nutils/adapter-config.json
@@ -1,18 +1,18 @@
{
"participant_name": "Solid",
- "precice_config_file_name": "../precice-config.xml",
+ "precice_config_file_path": "../precice-config.xml",
"interfaces": [
{
"mesh_name": "Solid-Mesh",
"patches": [
"interface"
],
- "write_data_names": [
- "HeatTransfer"
- ],
"read_data_names": [
"Temperature"
+ ],
+ "write_data_names": [
+ "Heat-Flux"
]
}
]
-}
+}
\ No newline at end of file
diff --git a/examples/tutorial3/_reference/solid-nutils/run.sh b/examples/tutorial3/_reference/solid-nutils/run.sh
new file mode 100644
index 0000000..5fdeba8
--- /dev/null
+++ b/examples/tutorial3/_reference/solid-nutils/run.sh
@@ -0,0 +1,21 @@
+#!/usr/bin/env bash
+
+#
+# run.sh script
+#
+# This is a template. You need to implement it yourself.
+#
+# If you are trying to launch a Python script, you need to add:
+#
+# python path/to/file.py
+#
+# Example: https://github.com/precice/tutorials/blob/develop/flow-over-heated-plate/solid-dunefem/run.sh
+#
+# Note: In the example `run.sh` file, most setup steps (such as creating and activating a virtual environment
+# or installing dependencies) have already been completed.
+# Therefore, you may only need to add:
+#
+# python path/to/file.py
+#
+
+set -e # Exit immediately if any command fails
\ No newline at end of file
diff --git a/examples/tutorial3/topology.yaml b/examples/tutorial3/topology.yaml
new file mode 100644
index 0000000..6fe97ae
--- /dev/null
+++ b/examples/tutorial3/topology.yaml
@@ -0,0 +1,22 @@
+participants:
+ - name: Fluid
+ solver: OpenFOAM
+ dimensionality: 2
+ - name: Solid
+ solver: Nutils
+ dimensionality: 2
+exchanges:
+ - from: Fluid
+ to: Solid
+ from-patch: interface
+ to-patch: interface
+ data: Temperature
+ type: strong
+ data-type: scalar
+ - from: Solid
+ to: Fluid
+ from-patch: interface
+ to-patch: interface
+ data: Heat-Flux
+ type: strong
+ data-type: scalar
\ No newline at end of file
diff --git a/examples/tutorial4/_reference/README.md b/examples/tutorial4/_reference/README.md
new file mode 100644
index 0000000..9200ef5
--- /dev/null
+++ b/examples/tutorial4/_reference/README.md
@@ -0,0 +1,78 @@
+# Multiphysics Simulation Project
+
+> [!NOTE]
+> This `README.md` file was auto-generated by preCICE case-generate.
+
+
+---
+
+## Project Overview
+
+This project uses **preCICE** for a multiphysics simulation involving:
+
+- Solver `ASolver` with participant `A`
+- Solver `BSolver` with participant `B`
+
+### Project Structure
+
+The generated top-level files are `precice-config.xml`, `README.md`, and `clean.sh`. Additionally, a folder containing an `adapter-config.json` and a `run.sh` file is created for each participant.
+The folder structure is as follows:
+
+```
+_generated/
+├── README.md                # This file
+├── clean.sh                 # Clean-up script
+├── a-asolver/
+│   ├── adapter-config.json
+│   └── run.sh               # Not yet implemented
+├── b-bsolver/
+│   ├── adapter-config.json
+│   └── run.sh               # Not yet implemented
+└── precice-config.xml       # Global precice-config.xml file
+```
+
+
+- `precice-config.xml` is the global preCICE configuration file, which defines the participants and their communication.
+- `adapter-config.json` is a per-solver configuration file that couples the solver with preCICE.
+- `run.sh` is a script meant to execute a participant. Since each solver is launched differently, this script is not implemented yet.
+- `clean.sh` removes any files in the current root directory that were not created by preCICE case-generate (by default, it moves them to a backup folder).
+
+Execution:
+
+```bash
+./clean.sh [--force] [--dry-run]
+```
+
+- `--force` permanently deletes files instead of backing them up, and also removes any existing backup folders
+- `--dry-run` deletes nothing and only prints the files that would be deleted
+
+---
+
+## Prerequisites
+
+Before running the simulation, ensure you have the following installed:
+
+- The preCICE coupling library
+- Solver `ASolver` and its dependencies
+- Solver `BSolver` and its dependencies
+
+---
+
+## Running the Simulation
+
+### Quick Start
+
+```bash
+# Navigate to a participant folder inside `_generated`
+cd _generated/a-asolver/
+
+# Implement the run script, then make it executable
+chmod +x run.sh
+
+# Execute this participant
+./run.sh
+
+# Repeat for each remaining participant (e.g. `b-bsolver/`),
+# typically in a separate terminal
+```
+
+---
+
+For more information, see the [preCICE documentation](https://precice.org/docs.html) and [precice-case-generate](https://github.com/precice/case-generate).
\ No newline at end of file
diff --git a/examples/tutorial4/_reference/a-asolver/adapter-config.json b/examples/tutorial4/_reference/a-asolver/adapter-config.json
new file mode 100644
index 0000000..4c08c30
--- /dev/null
+++ b/examples/tutorial4/_reference/a-asolver/adapter-config.json
@@ -0,0 +1,15 @@
+{
+ "participant_name": "A",
+ "precice_config_file_path": "../precice-config.xml",
+ "interfaces": [
+ {
+ "mesh_name": "A-Mesh",
+ "patches": [
+ "interface"
+ ],
+ "write_data_names": [
+ "Data"
+ ]
+ }
+ ]
+}
\ No newline at end of file
diff --git a/examples/tutorial4/_reference/a-asolver/run.sh b/examples/tutorial4/_reference/a-asolver/run.sh
new file mode 100644
index 0000000..5fdeba8
--- /dev/null
+++ b/examples/tutorial4/_reference/a-asolver/run.sh
@@ -0,0 +1,21 @@
+#!/usr/bin/env bash
+
+#
+# run.sh script
+#
+# This is a template. You need to implement it yourself.
+#
+# If you are trying to launch a Python script, you need to add:
+#
+# python path/to/file.py
+#
+# Example: https://github.com/precice/tutorials/blob/develop/flow-over-heated-plate/solid-dunefem/run.sh
+#
+# Note: In the example `run.sh` file, most setup steps (such as creating and activating a virtual environment
+# or installing dependencies) have already been completed.
+# Therefore, you may only need to add:
+#
+# python path/to/file.py
+#
+
+set -e # Exit immediately if any command fails
\ No newline at end of file
diff --git a/examples/tutorial4/_reference/b-bsolver/adapter-config.json b/examples/tutorial4/_reference/b-bsolver/adapter-config.json
new file mode 100644
index 0000000..d764adf
--- /dev/null
+++ b/examples/tutorial4/_reference/b-bsolver/adapter-config.json
@@ -0,0 +1,15 @@
+{
+ "participant_name": "B",
+ "precice_config_file_path": "../precice-config.xml",
+ "interfaces": [
+ {
+ "mesh_name": "B-Mesh",
+ "patches": [
+ "interface"
+ ],
+ "read_data_names": [
+ "Data"
+ ]
+ }
+ ]
+}
\ No newline at end of file
diff --git a/examples/tutorial4/_reference/b-bsolver/run.sh b/examples/tutorial4/_reference/b-bsolver/run.sh
new file mode 100644
index 0000000..5fdeba8
--- /dev/null
+++ b/examples/tutorial4/_reference/b-bsolver/run.sh
@@ -0,0 +1,21 @@
+#!/usr/bin/env bash
+
+#
+# run.sh script
+#
+# This is a template. You need to implement it yourself.
+#
+# If you are trying to launch a Python script, you need to add:
+#
+# python path/to/file.py
+#
+# Example: https://github.com/precice/tutorials/blob/develop/flow-over-heated-plate/solid-dunefem/run.sh
+#
+# Note: In the example `run.sh` file, most setup steps (such as creating and activating a virtual environment
+# or installing dependencies) have already been completed.
+# Therefore, you may only need to add:
+#
+# python path/to/file.py
+#
+
+set -e # Exit immediately if any command fails
\ No newline at end of file
diff --git a/examples/tutorial4/_reference/clean.sh b/examples/tutorial4/_reference/clean.sh
new file mode 100644
index 0000000..2954b31
--- /dev/null
+++ b/examples/tutorial4/_reference/clean.sh
@@ -0,0 +1,350 @@
+#!/usr/bin/env bash
+
+# -------------------------------------------------------------------
+# Script Name: clean.sh
+# Description: Recursively deletes/moves files/dirs except:
+# - Global preserved filenames anywhere (run.sh, adapter-config.json...)
+# - Specific root files (README.md, precice-config.xml)
+# Usage: ./clean.sh [--dry-run] [--force]
+# --dry-run : show what would happen, don't remove/move
+# --force : permanently delete unpreserved items AND remove existing backups
+# -------------------------------------------------------------------
+
+# Strict mode:
+# -e: exit on error
+# -u: exit on undefined variable
+# -o pipefail: exit if any command in a pipe fails
+set -euo pipefail
+
+# --- CONFIGURATION ---
+ROOT_DIR="$(pwd)"
+LOG_FILE="cleanup.log"
+BACKUP_DIR="$ROOT_DIR/backup_$(date '+%Y%m%d_%H%M%S')"
+
+# 1. GLOBAL PRESERVES: filenames to keep anywhere in the tree
+GLOBAL_PRESERVE_NAMES=(
+ "run.sh"
+ "adapter-config.json"
+)
+
+# 2. ROOT PRESERVES: filenames to keep only if in ROOT_DIR
+ROOT_PRESERVE_PATHS=(
+ "clean.sh"
+ "README.md"
+ "precice-config.xml"
+ "$LOG_FILE" # always keep the log (will be overwritten)
+)
+
+# --- DEFAULTS ---
+DRY_RUN=0 # No dry-run by default
+FORCE=0 # No permanent removal of files by default
+MOVED_COUNT=0 # Counter to track if we actually backed anything up
+DELETED_COUNT=0 # Counter to track if we actually deleted anything
+
+# --- HELPERS ---
+
+log() {
+ # We use tee -a (append) here so we don't overwrite previous lines *of this run*.
+ # The file is cleared once at the start of the MAIN block.
+ echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE"
+}
+
+# Is filename (basename) a global preserved name?
+is_global_preserved() {
+ local filename="$1"
+ for name in "${GLOBAL_PRESERVE_NAMES[@]}"; do
+ if [[ "$filename" == "$name" ]]; then
+ return 0
+ fi
+ done
+ return 1
+}
+
+# Is a relative path preserved at root?
+is_root_preserved() {
+ local relpath="$1"
+ for p in "${ROOT_PRESERVE_PATHS[@]}"; do
+ if [[ "$relpath" == "$p" ]]; then
+ return 0
+ fi
+ done
+ return 1
+}
+
+# Find out whether directory contains any preserved file *anywhere* under it.
+# Implementation Note: Uses `head -n 1` instead of `-quit` for maximum compatibility
+# across all Linux distros (including Alpine/BusyBox) and BSD/macOS.
+dir_contains_preserved_content() {
+ local dir="$1"
+ local name
+ local -a find_args=()
+
+ # Build find arguments: ( -name A -o -name B ... )
+ for name in "${GLOBAL_PRESERVE_NAMES[@]}"; do
+ if [ ${#find_args[@]} -eq 0 ]; then
+ find_args+=( -name "$name" )
+ else
+ find_args+=( -o -name "$name" )
+ fi
+ done
+
+  # Check if find returns any match.
+  # Piping to `head -n 1` makes find stop early: head exits after the first
+  # match, so find receives SIGPIPE and terminates.
+ if [ -n "$(find "$dir" \( "${find_args[@]}" \) -print 2>/dev/null | head -n 1)" ]; then
+ return 0
+ fi
+ return 1
+}
+
+# Ensure backup destination exists and safely move source there preserving relative path.
+# Prevents collisions by making unique names if needed.
+# Args:
+# $1 = absolute src path
+# $2 = rel path relative to ROOT_DIR (used to recreate path inside backup)
+safe_move_to_backup() {
+ local src="$1"
+ local rel="$2"
+
+ # Destination path = $BACKUP_DIR/$rel
+ local dest="$BACKUP_DIR/$rel"
+ local dest_dir
+ dest_dir="$(dirname "$dest")"
+
+ # Normalize: if dirname is ".", use $BACKUP_DIR as target dir
+ if [[ "$dest_dir" == "$BACKUP_DIR/." ]] || [[ "$dest_dir" == "." ]]; then
+ dest_dir="$BACKUP_DIR"
+ fi
+
+ if [ "$DRY_RUN" -eq 0 ]; then
+ mkdir -p "$dest_dir"
+ fi
+
+ # If dest exists, append numeric suffix before extension to avoid overwrite
+ if [ -e "$dest" ]; then
+ local base name ext candidate n
+ base="$(basename "$dest")"
+ name="${base%.*}"
+ ext="${base##*.}"
+ n=1
+
+ # Check if file has an extension
+ if [[ "$ext" == "$base" ]]; then
+ # No extension
+ candidate="$dest_dir/${name}_$n"
+ while [ -e "$candidate" ]; do
+ n=$((n+1))
+ candidate="$dest_dir/${name}_$n"
+ done
+ else
+ # Has extension
+ candidate="$dest_dir/${name}_$n.$ext"
+ while [ -e "$candidate" ]; do
+ n=$((n+1))
+ candidate="$dest_dir/${name}_$n.$ext"
+ done
+ fi
+ dest="$candidate"
+ fi
+
+  if [ "$DRY_RUN" -eq 1 ]; then
+    log "Would move to backup: ${rel}"
+  else
+    mv -- "$src" "$dest"
+    # Increment the counter because we successfully moved a file
+    MOVED_COUNT=$((MOVED_COUNT + 1))
+    log "Moved to backup: ${rel}"
+  fi
+}
+
+# Safe remove (used by --force). Respects --dry-run.
+safe_remove() {
+ local src="$1"
+ local rel="$2"
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would permanently remove: $rel"
+ else
+ rm -rf -- "$src"
+ DELETED_COUNT=$((DELETED_COUNT + 1))
+ log "Permanently removed: $rel"
+ fi
+}
+
+# Remove existing backup_* directories permanently (used when FORCE=1).
+remove_existing_backups_permanently() {
+ shopt -s nullglob
+ local found=0
+ for old in "$ROOT_DIR"/backup_*; do
+ if [ -d "$old" ]; then
+ found=1
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would permanently remove backup: $(basename "$old")"
+ else
+ rm -rf -- "$old"
+ DELETED_COUNT=$((DELETED_COUNT + 1))
+ log "Permanently removed backup: $(basename "$old")"
+ fi
+ fi
+ done
+ shopt -u nullglob
+ if [ "$found" -eq 0 ]; then
+ log "No existing backup directories to remove."
+ fi
+}
+
+# Perform action for a single path (file or directory) that is NOT preserved.
+# Moves to backup preserving tree, or removes permanently when FORCE=1.
+perform_action() {
+ local item="$1"
+ local rel_path="${item#$ROOT_DIR/}"
+
+ if [ "$FORCE" -eq 1 ]; then
+ safe_remove "$item" "$rel_path"
+ else
+ safe_move_to_backup "$item" "$rel_path"
+ fi
+}
+
+# --- RECURSIVE CLEANUP ---
+recursive_cleanup() {
+ local current_dir="$1"
+
+ # dotglob: includes hidden files (starting with .)
+ # nullglob: makes the loop not run if no files match (avoids literal string issues)
+ shopt -s dotglob nullglob
+
+ local item rel_path base
+
+ for item in "$current_dir"/*; do
+ # Sanity check: ensure file exists (handles rare race conditions or broken links)
+ [ ! -e "$item" ] && [ ! -L "$item" ] && continue
+
+ rel_path="${item#$ROOT_DIR/}"
+ base="$(basename "$item")"
+
+ # Skip . and .. (though glob usually excludes them, safety first)
+ if [[ "$base" == "." || "$base" == ".." ]]; then
+ continue
+ fi
+
+ # --- FILE or SYMLINK ---
+ if [ -f "$item" ] || [ -L "$item" ]; then
+ # 1. Preserve by global name anywhere?
+ if is_global_preserved "$base"; then
+ log "Preserving: $rel_path"
+ continue
+ fi
+
+ # 2. Preserve if at root and matches root-preserve list?
+ if [[ "$current_dir" == "$ROOT_DIR" ]] && is_root_preserved "$base"; then
+ log "Preserving: $rel_path"
+ continue
+ fi
+
+ # Otherwise: perform action
+ perform_action "$item"
+ continue
+ fi
+
+ # --- DIRECTORY ---
+ if [ -d "$item" ]; then
+ # Check if dir contains any preserved content anywhere below
+ if dir_contains_preserved_content "$item"; then
+ recursive_cleanup "$item"
+
+ # After recursion, if directory is empty, remove it (respecting dry-run)
+ if [ -z "$(ls -A "$item")" ]; then
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would remove empty directory: $rel_path"
+ else
+ rmdir -- "$item"
+ DELETED_COUNT=$((DELETED_COUNT + 1))
+ log "Removed empty directory: $rel_path"
+ fi
+ fi
+ else
+ # Directory contains no preserved files anywhere deep: remove/move entire directory
+ perform_action "$item"
+ fi
+ continue
+ fi
+ done
+
+ # Restore defaults for shopt to avoid side effects if function is reused
+ shopt -u dotglob nullglob
+}
+
+# --- MAIN ---
+
+# Parse flags
+while [[ $# -gt 0 ]]; do
+ case $1 in
+ --dry-run) DRY_RUN=1 ;;
+ --force) FORCE=1 ;;
+ *) echo "Unknown parameter: $1"; exit 1 ;;
+ esac
+ shift
+done
+
+# Initialize/Clear the log file for this new run
+: > "$LOG_FILE"
+
+# Confirmation prompt (skip when dry-run)
+if [ "$DRY_RUN" -eq 1 ]; then
+ log "Dry run enabled, nothing will be removed."
+else
+ read -p "This will delete all files except preserved ones. Proceed? [y/n]: " confirm
+ case "$confirm" in
+ [yY][eE][sS]|[yY]) ;;
+ *) log "Cleanup aborted."; exit 0 ;;
+ esac
+fi
+
+if [ "$FORCE" -eq 1 ] && [ "$DRY_RUN" -eq 1 ]; then
+  log "Ignoring --force because --dry-run is set."
+  FORCE=0
+fi
+
+log "Starting cleanup..."
+
+
+if [ "$FORCE" -eq 1 ]; then
+ remove_existing_backups_permanently
+fi
+
+recursive_cleanup "$ROOT_DIR"
+
+if [ "$DELETED_COUNT" -eq 1 ]; then
+ FILE_STR="file"
+ DIRECTORY_STR="directory"
+else
+ FILE_STR="files"
+ DIRECTORY_STR="directories"
+fi
+
+OUTPUT_STR=""
+
+# Output if FORCE
+if [ "$FORCE" -eq 1 ]; then
+ OUTPUT_STR="Deleted $DELETED_COUNT $FILE_STR or $DIRECTORY_STR. "
+fi
+
+if [ "$DRY_RUN" -eq 1 ]; then
+ OUTPUT_STR="Dry-run completed successfully."
+else
+ OUTPUT_STR="${OUTPUT_STR}Cleanup completed successfully."
+fi
+
+# Only append the backup message if we actually moved files (MOVED_COUNT > 0)
+if [ "$MOVED_COUNT" -gt 0 ]; then
+ # Correct wording
+ if [ "$MOVED_COUNT" -eq 1 ]; then
+ FILE_STR="file"
+ DIRECTORY_STR="directory"
+ else
+ FILE_STR="files"
+ DIRECTORY_STR="directories"
+ fi
+  OUTPUT_STR="$OUTPUT_STR Backed up $MOVED_COUNT $FILE_STR or $DIRECTORY_STR to '$BACKUP_DIR'."
+fi
+
+log "$OUTPUT_STR"
\ No newline at end of file
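`dir_contains_preserved_content` in the script above stops at the first hit via `find | head -n 1`. The same early-exit idea in Python falls out of `any()` over a lazy `os.walk` (a sketch, assuming the script's two globally preserved names):

```python
import os

PRESERVE_NAMES = {"run.sh", "adapter-config.json"}

def contains_preserved(root: str) -> bool:
    """True as soon as a preserved filename shows up anywhere under root;
    any() stops consuming the os.walk generator at the first match."""
    return any(
        name in PRESERVE_NAMES
        for _dirpath, _subdirs, files in os.walk(root)
        for name in files
    )
```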
diff --git a/examples/tutorial4/_reference/precice-config.xml b/examples/tutorial4/_reference/precice-config.xml
new file mode 100644
index 0000000..a4e987a
--- /dev/null
+++ b/examples/tutorial4/_reference/precice-config.xml
@@ -0,0 +1,37 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/examples/tutorial4/topology.yaml b/examples/tutorial4/topology.yaml
new file mode 100644
index 0000000..52d0de4
--- /dev/null
+++ b/examples/tutorial4/topology.yaml
@@ -0,0 +1,15 @@
+participants:
+ - name: A
+ solver: ASolver
+ dimensionality: 3
+ - name: B
+ solver: BSolver
+ dimensionality: 3
+exchanges:
+ - from: A
+ to: B
+ data: Data
+ type: weak
+ data-type: scalar
+ from-patch: interface
+ to-patch: interface
\ No newline at end of file
diff --git a/precicecasegenerate/cli.py b/precicecasegenerate/cli.py
index 3db6b54..f145233 100644
--- a/precicecasegenerate/cli.py
+++ b/precicecasegenerate/cli.py
@@ -1,71 +1,100 @@
-from .generation_utils.file_generator import FileGenerator
-import argparse
import sys
+import shutil
+import logging
+import argparse
from pathlib import Path
+from precicecasegenerate import helper
+from precicecasegenerate import cli_helper
+from precicecasegenerate.logging_setup import setup_logging
+from precicecasegenerate.input_handler.topology_reader import TopologyReader
+from precicecasegenerate.node_creator import NodeCreator
+from precicecasegenerate.file_creators.config_creator import ConfigCreator
+from precicecasegenerate.file_creators.adapter_config_creator import AdapterConfigCreator
+from precicecasegenerate.file_creators.utility_file_creator import UtilityFileCreator
+
+logger = logging.getLogger(__name__)
+
+
+def runGenerate(args: argparse.Namespace) -> int:
+ setup_logging(verbose=args.verbose)
+ logger.info("Program started.")
+
+ return_value: int = cli_helper.validate_args(args)
+ if return_value != 0:
+ return return_value
+
+ input_file: Path = Path(args.input_file)
+ output_root: Path = Path(args.output_path)
+
+ return_value = generate_case(input_file, output_root)
+
+ logger.info("Program finished.")
+ return return_value
+
+
+def generate_case(input_file: Path, output_root: Path) -> int:
+ """
+ Generate all files for a preCICE case
+ This method creates the required directories and calls the respective methods to create the nodes from the topology,
+ the preCICE configuration file, the adapter configuration files, and the utility files.
+ :param input_file: The path to the input file containing the topology.
+ :param output_root: The root directory for the generated files.
+ :return: 0 if successful, 1 otherwise.
+ """
+ # Create a new directory for the generated files
+ output_root.mkdir(parents=True, exist_ok=True)
+ logger.debug(f"Created output directory at {output_root}")
+
+ logger.debug("Starting topology reader.")
+ topology_reader: TopologyReader = TopologyReader(input_file.resolve())
+ return_value: int = topology_reader.validate_topology()
+ if return_value != 0:
+ return return_value
+    return_value = topology_reader.check_topology()
+ if return_value != 0:
+ return return_value
+ topology: dict = topology_reader.get_topology()
+ logger.debug("Topology reader finished.")
+
+ logger.debug("Starting node creator.")
+ node_creator: NodeCreator = NodeCreator(topology)
+ nodes: dict = node_creator.get_nodes()
+ logger.debug("Node creator finished.")
+
+ logger.debug("Starting config creator.")
+ config_creator: ConfigCreator = ConfigCreator(nodes)
+ config_creator.create_config_file(directory=output_root, filename=cli_helper.PRECICE_CONFIG_FILE_NAME)
+ logger.debug("Config creator finished.")
+
+ logger.debug("Creating participant directories.")
+ participant_solver_map: dict = node_creator.get_participant_solver_map()
+ for participant in participant_solver_map:
+ participant_directory: Path = helper.get_participant_solver_directory(output_root, participant.name,
+ participant_solver_map[participant])
+ # The directory will be overwritten if it already exists and is of the form "_generated/name-solver/"
+ if participant_directory.exists():
+ shutil.rmtree(participant_directory, ignore_errors=True)
+ participant_directory.mkdir(parents=True, exist_ok=True)
+ logger.debug(f"Created participant directory at {participant_directory}")
+
+ logger.debug("Starting adapter config creator.")
+ mesh_patch_map: dict = node_creator.get_mesh_patch_map()
+ adapter_config_creator: AdapterConfigCreator = AdapterConfigCreator(participant_solver_map,
+ mesh_patch_map,
+ precice_config_filename=cli_helper.PRECICE_CONFIG_FILE_NAME)
+ adapter_config_creator.create_adapter_configs(parent_directory=output_root)
+
+ logger.debug("Starting utility file creator.")
+ utility_file_creator: UtilityFileCreator = UtilityFileCreator(participant_solver_map)
+ utility_file_creator.create_utility_files(parent_directory=output_root)
+ return 0
+
-def makeGenerateParser(add_help: bool = True):
- parser = argparse.ArgumentParser(
- description="Initialize a preCICE case given a topology file",
- add_help=add_help,
- )
- parser.add_argument(
- "-f",
- "--input-file",
- type=Path,
- default=Path.cwd() / "topology.yaml",
- help="Input topology.yaml file. Defaults to './topology.yaml'.",
- )
- parser.add_argument(
- "-o",
- "--output-path",
- type=Path,
- help="Output path for the generated folder. Defaults to './_generated'.",
- default=Path.cwd() / "_generated",
- )
- parser.add_argument(
- "-v",
- "--verbose",
- action="store_true",
- help="Enable verbose logging output.",
- )
- parser.add_argument(
- "--validate-topology",
- action="store_true",
- required=False,
- default=True,
- help="Whether to validate the input topology.yaml file against the preCICE topology schema.",
- )
- return parser
-
-
-def runGenerate(ns):
- try:
- file_generator = FileGenerator(ns.input_file, ns.output_path)
-
- # Clear any previous log state
- file_generator.logger.clear_log_state()
-
- # Generate precice-config.xml, README.md, clean.sh
- file_generator.generate_level_0()
- # Generate configuration for the solvers
- file_generator.generate_level_1()
-
- # Format the generated preCICE configuration
- file_generator.format_precice_config()
-
- file_generator.handle_output(ns)
-
- file_generator.validate_topology(ns)
-
- return 0
- except Exception as e:
- print(e, file=sys.stderr)
- return 1
-
-
-def main():
- args = makeGenerateParser().parse_args()
+def main() -> int:
+ # Parse the command line arguments
+ parser = cli_helper.makeGenerateParser()
+ args = parser.parse_args()
return runGenerate(args)
diff --git a/precicecasegenerate/cli_helper.py b/precicecasegenerate/cli_helper.py
new file mode 100644
index 0000000..496b831
--- /dev/null
+++ b/precicecasegenerate/cli_helper.py
@@ -0,0 +1,63 @@
+"""
+This file contains helper methods and variables for the cli.py file and its main method.
+"""
+
+import argparse
+import logging
+from pathlib import Path
+
+logger = logging.getLogger(__name__)
+
+PRECICE_CONFIG_FILE_NAME: str = "precice-config.xml"
+GENERATED_DIR_NAME: str = "_generated"
+LOG_DIR_NAME: str = ".logs"
+
+
+def makeGenerateParser(add_help: bool = True) -> argparse.ArgumentParser:
+ parser = argparse.ArgumentParser(
+ description="Initialize a preCICE case given a topology file",
+ add_help=add_help,
+ )
+ parser.add_argument(
+ "input_file",
+ type=Path,
+ nargs="?",
+ help="Path to the input YAML topology file.",
+ default=Path.cwd() / "topology.yaml"
+ )
+ parser.add_argument(
+ "-v", "--verbose", action="store_true", help="Enable verbose logging output."
+ )
+ parser.add_argument(
+        "-o", "--output-path",
+ type=Path,
+ default=Path.cwd() / GENERATED_DIR_NAME,
+ help="A custom output path for the generated folder. Already existing folders and files will be overwritten."
+ )
+ return parser
+
+
+def validate_args(args: argparse.Namespace) -> int:
+ """
+ Validate the arguments passed to the CLI.
+ This checks if the input file exists and is a YAML file.
+ :param args: The parsed arguments.
+ :return: 0 if the arguments are valid, 1 otherwise.
+ """
+    logger.debug(f"Arguments parsed: {vars(args)}. Checking whether the input file exists.")
+
+ input_file: Path = Path(args.input_file).resolve()
+
+ # Check if the file exists
+    if not input_file.is_file():
+        logger.critical(f"File {input_file} does not exist. Aborting program.")
+        return 1
+    logger.debug(f"File {input_file} exists.")
+
+    # Check if the file is a YAML file
+    if input_file.suffix.lower() in [".yaml", ".yml"]:
+        logger.debug(f"File {input_file} is a YAML file.")
+    else:
+        logger.critical(f"File {input_file} is not a YAML file. Aborting program.")
+        return 1
+ return 0
diff --git a/precicecasegenerate/controller_utils/__init__.py b/precicecasegenerate/controller_utils/__init__.py
deleted file mode 100644
index cfe6446..0000000
--- a/precicecasegenerate/controller_utils/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-from .myutils import UT_PCErrorLogging
-from .precice_struct import PS_CouplingScheme
-from .precice_struct import PS_Mesh
-from .precice_struct import PS_ParticipantSolver
-from .precice_struct import PS_PreCICEConfig
-from .precice_struct import PS_QuantityCoupled
-from .ui_struct import UI_Coupling
-from .ui_struct import UI_Participant
-from .ui_struct import UI_SimulationInfo
-from .ui_struct import UI_UserInput
diff --git a/precicecasegenerate/controller_utils/myutils/UT_PCErrorLogging.py b/precicecasegenerate/controller_utils/myutils/UT_PCErrorLogging.py
deleted file mode 100644
index a3ea6e8..0000000
--- a/precicecasegenerate/controller_utils/myutils/UT_PCErrorLogging.py
+++ /dev/null
@@ -1,20 +0,0 @@
-import logging
-
-
-class UT_PCErrorLogging(object):
- """
- This is the main class to record all the loggings during the run of the program
- """
-
- def __init__(self):
- """empty Ctor"""
- pass
-
- def rep_error(self, msg: str):
- # logging.error(msg)
- logging.info(msg)
- pass
-
- def rep_info(self, msg: str):
- logging.info(msg)
- pass
diff --git a/precicecasegenerate/controller_utils/myutils/__init__.py b/precicecasegenerate/controller_utils/myutils/__init__.py
deleted file mode 100644
index d7554b3..0000000
--- a/precicecasegenerate/controller_utils/myutils/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .UT_PCErrorLogging import UT_PCErrorLogging
diff --git a/precicecasegenerate/controller_utils/precice_struct/PS_CouplingScheme.py b/precicecasegenerate/controller_utils/precice_struct/PS_CouplingScheme.py
deleted file mode 100644
index e5c62fe..0000000
--- a/precicecasegenerate/controller_utils/precice_struct/PS_CouplingScheme.py
+++ /dev/null
@@ -1,568 +0,0 @@
-from precicecasegenerate.controller_utils.myutils.UT_PCErrorLogging import (
- UT_PCErrorLogging,
-)
-from precicecasegenerate.controller_utils.precice_struct.PS_ParticipantSolver import (
- PS_ParticipantSolver,
-)
-from precicecasegenerate.controller_utils.ui_struct.UI_UserInput import UI_UserInput
-import xml.etree.ElementTree as etree
-
-
-class PS_CouplingScheme(object):
- """Class to represent the Coupling schemes"""
-
- def __init__(self):
- """Ctor to initialize all the fields"""
- self.firstSolver = None
- self.secondSolver = None
- pass
-
- def init_from_UI(self, ui_config: UI_UserInput, conf): # : PS_PreCICEConfig
- """This method should be overwritten by the subclasses"""
- pass
-
- def write_precice_xml_config(self, tag: etree, config): # config: PS_PreCICEConfig
- """parent function to write out XML file"""
- pass
-
- def write_participants_and_coupling_scheme(
- self, tag: etree, config, coupling_str: str
- ):
- """write out the config XMl file"""
- if len(config.solvers) <= 2:
- # for only
- coupling_scheme = etree.SubElement(tag, "coupling-scheme:" + coupling_str)
- # print the participants, ASSUMPTION! we assume there is at least two
- mylist = ["NONE", "NONE"]
- mycomplexity = [-1, -1]
- myindex = 0
- for participant_name in config.solvers:
- p = config.solvers[participant_name]
- mylist[myindex] = participant_name
- mycomplexity[myindex] = p.solver_domain.value
- myindex = myindex + 1
- pass
- # the solver with the higher complexity should be first
- if mycomplexity[0] < mycomplexity[1]:
- i = etree.SubElement(
- coupling_scheme, "participants", first=mylist[0], second=mylist[1]
- )
- config.couplingScheme_participants = mylist[0], mylist[1]
- else:
- i = etree.SubElement(
- coupling_scheme, "participants", first=mylist[1], second=mylist[0]
- )
- config.couplingScheme_participants = mylist[1], mylist[0]
-
- else:
- # TODO: is "multi" good for all
- coupling_scheme = etree.SubElement(tag, "coupling-scheme:multi")
- # first find the solver with the most meshes and this should be the one who controls the coupling
- nr_max_meshes = -1
- control_participant_name = "NONE"
- for participant_name in config.solvers:
- participant = config.solvers[participant_name]
- # TODO: we should select the solver with the most meshes ?
- if len(participant.meshes) > nr_max_meshes:
- nr_max_meshes = len(participant.meshes)
- control_participant_name = participant.name
- pass
- # second print all the participants
- for participant_name in config.solvers:
- participant = config.solvers[participant_name]
- if participant.name == control_participant_name:
- i = etree.SubElement(
- coupling_scheme,
- "participant",
- name=participant_name,
- control="yes",
- )
- else:
- i = etree.SubElement(
- coupling_scheme, "participant", name=participant_name
- )
- pass
- pass
- pass
- return coupling_scheme
-
- def _find_simplest_solver(self, config):
- """Find the solver with minimal complexity"""
- simple_solver = None
- solver_simplicity = -2
- for q_name in config.coupling_quantities:
- q = config.coupling_quantities[q_name]
- solver = q.source_solver
- if solver_simplicity < solver.solver_domain.value:
- simple_solver = solver
- return simple_solver
-
- def _find_other_solver_for_coupling(self, quantity, solver):
- """Find the other solver involved in the coupling"""
- other_solver_for_coupling = None
- other_mesh_name = None
- for oq in quantity.list_of_solvers:
- other_solver = quantity.list_of_solvers[oq]
- if other_solver.name != solver.name:
- other_solver_for_coupling = other_solver
- for allm in other_solver.meshes:
- if allm != quantity.source_mesh_name:
- other_mesh_name = allm
- return other_solver_for_coupling, other_mesh_name
-
- def write_exchange_and_convergance(
- self, config, coupling_scheme, relative_conv_str: str
- ):
- """Writes to the XML the exchange list"""
- for exchange in config.exchanges:
- from_s = exchange.get("from")
- to_s = exchange.get("to")
- data = exchange.get("data")
-
- # Process mappings
- read_mappings = [m.copy() for m in config.mappings_read]
- write_mappings = [m.copy() for m in config.mappings_write]
-
- read_mapping = next(
- (
- m
- for m in read_mappings
- if (m["from"] == from_s + "-Mesh" and m["to"] == to_s + "-Mesh")
- ),
- None,
- )
- write_mapping = next(
- (
- m
- for m in write_mappings
- if (m["from"] == from_s + "-Mesh" and m["to"] == to_s + "-Mesh")
- ),
- None,
- )
-
- # Choose mesh based on mapping constraint
- if read_mapping and read_mapping["constraint"] == "conservative":
- exchange_mesh_name = read_mapping["to"]
- elif read_mapping and read_mapping["constraint"] == "consistent":
- exchange_mesh_name = read_mapping["from"]
- elif write_mapping and write_mapping["constraint"] == "conservative":
- exchange_mesh_name = write_mapping["to"]
- elif write_mapping and write_mapping["constraint"] == "consistent":
- exchange_mesh_name = write_mapping["from"]
- else:
- exchange_mesh_name = from_s + "-Mesh"
- if exchange_mesh_name not in config.exchange_mesh_names:
- config.exchange_mesh_names.append(exchange_mesh_name)
- e = etree.SubElement(
- coupling_scheme,
- "exchange",
- data=data,
- mesh=exchange_mesh_name,
- from___=from_s,
- to=to_s,
- )
- # Use the same mesh for the relative convergence measure
- if relative_conv_str != "":
- c = etree.SubElement(
- coupling_scheme,
- "relative-convergence-measure",
- limit=relative_conv_str,
- mesh=exchange_mesh_name,
- data=data,
- )
- pass
-
-
-class PS_ExplicitCoupling(PS_CouplingScheme):
- """Explicit coupling scheme"""
-
- def __init__(self):
- self.NrTimeStep = -1
- self.Dt = 1e-4
- pass
-
- def initFromUI(self, ui_config: UI_UserInput, conf): # conf : PS_PreCICEConfig
- # call theinitialization from the UI data structures
- super(PS_ExplicitCoupling, self).init_from_UI(ui_config, conf)
- simulation_conf = ui_config.sim_info
- self.NrTimeStep = simulation_conf.NrTimeStep
- self.Dt = simulation_conf.Dt
- self.display_standard_values = simulation_conf.display_standard_values
- self.coupling = simulation_conf.coupling
- pass
-
- def write_precice_xml_config(self, tag: etree, config): # config: PS_PreCICEConfig
- """write out the config XMl file"""
- coupling_scheme = self.write_participants_and_coupling_scheme(
- tag, config, f"{self.coupling}-explicit"
- )
- config.coupling_scheme = coupling_scheme
- if str(self.display_standard_values).lower() == "true":
- if self.NrTimeStep is None:
- self.NrTimeStep = 1e-3
- if self.Dt is None:
- self.Dt = 1e-3
- i = etree.SubElement(
- coupling_scheme, "max-time", value=str(self.NrTimeStep)
- )
- attr = {"value": str(self.Dt)}
- i = etree.SubElement(coupling_scheme, "time-window-size", attr)
- else:
- if self.NrTimeStep is not None:
- i = etree.SubElement(
- coupling_scheme, "max-time", value=str(self.NrTimeStep)
- )
- if self.Dt is not None:
- attr = {"value": str(self.Dt)}
- i = etree.SubElement(coupling_scheme, "time-window-size", attr)
-
- # write out the exchange but not the convergence (if empty it will not be written)
- self.write_exchange_and_convergance(config, coupling_scheme, "")
-
-
-class PS_ImplicitCoupling(PS_CouplingScheme):
- """Implicit coupling scheme"""
-
- def __init__(self):
-
- # TODO: define here only implicit coupling specific measures
-
- self.NrTimeStep = -1
- self.Dt = 1e-4
- self.maxIteration = 50
- self.relativeConverganceEps = 1e-4
- self.extrapolation_order = 2
- self.acceleration = PS_ImplicitAcceleration() # this is the acceleration
- self.display_standard_values = "false"
- pass
-
- def initFromUI(self, ui_config: UI_UserInput, conf): # conf : PS_PreCICEConfig
- # call theinitialization from the UI data structures
- super(PS_ImplicitCoupling, self).init_from_UI(ui_config, conf)
-
- # TODO: should we add all quantities?
- # later do delete some quantities from the list?
- self.acceleration.post_process_quantities = conf.coupling_quantities
-
- simulation_conf = ui_config.sim_info
-
- self.NrTimeStep = simulation_conf.NrTimeStep
- self.Dt = simulation_conf.Dt
- self.maxIteration = simulation_conf.max_iterations
- self.display_standard_values = simulation_conf.display_standard_values
- self.coupling = simulation_conf.coupling
-
- pass
-
- def write_precice_xml_config(self, tag: etree, config): # config: PS_PreCICEConfig
- """write out the config XMl file"""
- if self.coupling not in ["serial", "parallel"]:
- raise ValueError(
- f"coupling must be 'serial' or 'parallel', but got {self.coupling}"
- )
- coupling_scheme = self.write_participants_and_coupling_scheme(
- tag, config, f"{self.coupling}-implicit"
- )
- config.coupling_scheme = coupling_scheme
-
- if str(self.display_standard_values).lower() == "true":
- if self.NrTimeStep is None:
- self.NrTimeStep = 1e-3
- if self.Dt is None:
- self.Dt = 1e-3
- if self.maxIteration is None:
- self.maxIteration = 50
- i = etree.SubElement(
- coupling_scheme, "max-time", value=str(self.NrTimeStep)
- )
- attr = {"value": str(self.Dt)}
- i = etree.SubElement(coupling_scheme, "time-window-size", attr)
- i = etree.SubElement(
- coupling_scheme, "max-iterations", value=str(self.maxIteration)
- )
- # i = etree.SubElement(coupling_scheme, "extrapolation-order", value=str(self.extrapolation_order))
- else:
- if self.NrTimeStep is not None:
- i = etree.SubElement(
- coupling_scheme, "max-time", value=str(self.NrTimeStep)
- )
- if self.Dt is not None:
- attr = {"value": str(self.Dt)}
- i = etree.SubElement(coupling_scheme, "time-window-size", attr)
- if self.maxIteration is not None:
- i = etree.SubElement(
- coupling_scheme, "max-iterations", value=str(self.maxIteration)
- )
- # if self.extrapolation_order is not None:
- # i = etree.SubElement(coupling_scheme, "extrapolation-order", value=str(self.extrapolation_order))
-
- # write out the exchange and the convergence rate
- self.write_exchange_and_convergance(
- config, coupling_scheme, str(self.relativeConverganceEps)
- )
-
- # finally we write out the post processing...
- self.acceleration.write_precice_xml_config(coupling_scheme, config, self)
-
- pass
-
-
-class PS_ImplicitAcceleration(object):
- """Class to model the post-processing part of the implicit coupling"""
-
- def __init__(self):
- """Ctor for the acceleration"""
- self.name = "IQN-ILS"
- self.precondition_type = "residual-sum"
- self.post_process_quantities = {} # The quantities that are in the acceleration
-
- def write_precice_xml_config(self, tag: etree.Element, config, parent):
- """Write out the config XML file of the acceleration in case of implicit coupling
- Only for explicit coupling (one directional) this should not write out anything"""
-
- self.name = (
- config.acceleration["name"]
- if config.acceleration is not None
- else "IQN-ILS"
- )
- self.display_standard_values = (
- config.acceleration["display_standard_values"]
- if config.acceleration is not None
- else "false"
- )
-
- post_processing = etree.SubElement(tag, "acceleration:" + self.name)
-
- # Identify unique solvers and their meshes
- solver_meshes = {}
- for q_name, q in config.coupling_quantities.items():
- solver = q.source_solver
- if solver.name not in solver_meshes:
- solver_meshes[solver.name] = set()
- solver_meshes[solver.name].add(q.source_mesh_name)
-
- # Attempt to find the 'simplest' solver (e.g., solid solver)
- # This is a heuristic and might need adjustment based on specific use cases
- def solver_complexity(solver_name):
- complexity_map = {"solid": 1, "structural": 1, "fluid": 2, "thermal": 3}
- return complexity_map.get(solver_name.lower(), 10)
-
- sorted_solvers = sorted(solver_meshes.keys(), key=solver_complexity)
-
- # Choose the simplest solver's mesh
- simple_solver = sorted_solvers[0] if sorted_solvers else None
-
- from_s = "___"
- to_s = "__"
- exchange_mesh_name = ""
- read_mappings = [m.copy() for m in config.mappings_read]
- write_mappings = [m.copy() for m in config.mappings_write]
-
- acceleration = config.acceleration
- if acceleration is not None and self.display_standard_values:
- for a, b in acceleration.items():
- if b is not None:
- if self.name == "IQN-ILS":
- if a == "initial-relaxation":
- if b.get("enforce") is not None:
- i = etree.SubElement(
- post_processing,
- a,
- value=str(b.get("value")),
- enforce=str(b.get("enforce")),
- )
- else:
- i = etree.SubElement(
- post_processing, a, value=str(b.get("value"))
- )
- elif a == "max-used-iterations" or a == "time-windows-reused":
- i = etree.SubElement(post_processing, a, value=str(b))
- elif a == "filter":
- if b.get("type") is not None:
- i = etree.SubElement(
- post_processing,
- a,
- limit=str(b.get("limit")),
- type=str(b.get("type")),
- )
- else:
- i = etree.SubElement(
- post_processing, a, limit=str(b.get("limit"))
- )
- elif a == "preconditioner":
- if b.get("type") is not None:
- i = etree.SubElement(
- post_processing, a, type=str(b.get("type"))
- )
- i.set("freeze-after", str(b.get("freeze-after")))
- else:
- i = etree.SubElement(
- post_processing,
- a,
- freeze_after=str(b.get("freeze-after")),
- )
- if self.name == "aitken":
- if a == "initial-relaxation":
- if b.get("enforce") is not None:
- i = etree.SubElement(
- post_processing,
- a,
- value=str(b.get("value")),
- enforce=str(b.get("enforce")),
- )
- else:
- i = etree.SubElement(
- post_processing, a, value=str(b.get("value"))
- )
- elif a == "preconditioner":
- if b.get("type") is not None:
- i = etree.SubElement(
- post_processing, a, type=str(b.get("type"))
- )
- i.set("freeze-after", str(b.get("freeze-after")))
- else:
- i = etree.SubElement(
- post_processing,
- a,
- freeze_after=str(b.get("freeze-after")),
- )
- if self.name == "IQN-IMVJ":
- if a == "initial-relaxation":
- if b.get("enforce") is not None:
- i = etree.SubElement(
- post_processing,
- a,
- value=str(b.get("value")),
- enforce=str(b.get("enforce")),
- )
- else:
- i = etree.SubElement(
- post_processing, a, value=str(b.get("value"))
- )
- elif a == "max-used-iterations" or a == "time-windows-reused":
- i = etree.SubElement(post_processing, a, value=str(b))
- elif a == "filter":
- if b.get("type") is not None:
- i = etree.SubElement(
- post_processing,
- a,
- limit=str(b.get("limit")),
- type=str(b.get("type")),
- )
- else:
- i = etree.SubElement(
- post_processing, a, limit=str(b.get("limit"))
- )
- elif a == "preconditioner":
- if b.get("type") is not None:
- i = etree.SubElement(
- post_processing, a, type=str(b.get("type"))
- )
- i.set("freeze-after", str(b.get("freeze-after")))
- else:
- i = etree.SubElement(
- post_processing,
- a,
- freeze_after=str(b.get("freeze-after")),
- )
- elif a == "imvj-restart-mode":
- i = etree.SubElement(
- post_processing,
- a,
- {
- "truncation-threshold": str(
- b.get("truncation-threshold")
- ),
- "chunk-size": str(b.get("chunk-size")),
- "reused-time-windows-at-restart": str(
- b.get("reused-time-windows-at-restart")
- ),
- "type": str(b.get("type")),
- },
- )
-
- if simple_solver:
- for q_name, q in config.coupling_quantities.items():
- # Use the first mesh from the simplest solver
- mesh_name = list(solver_meshes[simple_solver])[0]
-
- # i = etree.SubElement(post_processing, "data",
- # name=q.instance_name,
- # mesh=mesh_name)
-
- # Copy over logic from _determine_exchange_mesh to determine right mesh
- for exchange in config.exchanges:
- # print("Exchange data: " + exchange.get('data'))
- # print("Quantity name: " + q.instance_name)
- if exchange.get("data").lower() == q.instance_name.lower():
- from_s = exchange.get("from")
- to_s = exchange.get("to")
-
- # Process mappings
- read_mapping = next(
- (
- m
- for m in read_mappings
- if (
- m["from"] == from_s + "-Mesh"
- and m["to"] == to_s + "-Mesh"
- )
- ),
- None,
- )
- write_mapping = next(
- (
- m
- for m in write_mappings
- if (
- m["from"] == from_s + "-Mesh"
- and m["to"] == to_s + "-Mesh"
- )
- ),
- None,
- )
-
- # Choose mesh based on mapping constraint
- if (
- read_mapping
- and read_mapping["constraint"] == "conservative"
- ):
- exchange_mesh_name = read_mapping["to"]
- elif (
- read_mapping and read_mapping["constraint"] == "consistent"
- ):
- exchange_mesh_name = read_mapping["from"]
- elif (
- write_mapping
- and write_mapping["constraint"] == "conservative"
- ):
- exchange_mesh_name = write_mapping["to"]
- elif (
- write_mapping
- and write_mapping["constraint"] == "consistent"
- ):
- exchange_mesh_name = write_mapping["from"]
- else:
- exchange_mesh_name = q.source_mesh_name
-
- # print(exchange_mesh_name)
- if exchange_mesh_name != "":
- if config.couplingScheme.coupling == "serial":
- if (
- exchange_mesh_name
- == config.couplingScheme_participants[1] + "-Mesh"
- ):
- i = etree.SubElement(
- post_processing,
- "data",
- name=q.instance_name,
- mesh=exchange_mesh_name,
- )
- # parallel coupling
- else:
- i = etree.SubElement(
- post_processing,
- "data",
- name=q.instance_name,
- mesh=exchange_mesh_name,
- )
diff --git a/precicecasegenerate/controller_utils/precice_struct/PS_Mesh.py b/precicecasegenerate/controller_utils/precice_struct/PS_Mesh.py
deleted file mode 100644
index 26dc0fb..0000000
--- a/precicecasegenerate/controller_utils/precice_struct/PS_Mesh.py
+++ /dev/null
@@ -1,45 +0,0 @@
-from precicecasegenerate.controller_utils.precice_struct.PS_QuantityCoupled import *
-
-# from .PS_ParticipantSolver import PS_ParticipantSolver #-> this would result in circular reference
-
-
-class PS_Mesh(object):
- """The mesh object that is assigned to one or more solver"""
-
- def __init__(self):
- self.name = "" # name of the mesh
- self.quantities = {} # list of the quantities that are stored here
- self.list_of_solvers = (
- {}
- ) # dictionary with all the solver (names) that use this mesh
- self.source_solver = None # The solver that provides this mesh
- pass
-
- def add_source_solver(self, source_solver):
- """Sets the source solver that provides this mesh"""
- self.source_solver = source_solver
- pass
-
- def add_solver(self, solver): # solver: PS_ParticipantSolver
- """adds a solver to the list of solver"""
- self.list_of_solvers[solver.solver_name] = solver
- pass
-
- def add_quantity(self, quantity: QuantityCouple):
- """adds a quantity for coupling"""
- self.quantities[quantity.instance_name] = quantity
- pass
-
- def get_solver(self, solver_name: str):
- """returns the solver"""
- if solver_name in self.list_of_solvers:
- return self.list_of_solvers[solver_name]
- else:
- return None
-
- def get_quantity(self, quantity_name: str):
- """returns the quantity"""
- if quantity_name in self.quantities:
- return self.quantities[quantity_name]
- else:
- return None
diff --git a/precicecasegenerate/controller_utils/precice_struct/PS_ParticipantSolver.py b/precicecasegenerate/controller_utils/precice_struct/PS_ParticipantSolver.py
deleted file mode 100644
index a414cf8..0000000
--- a/precicecasegenerate/controller_utils/precice_struct/PS_ParticipantSolver.py
+++ /dev/null
@@ -1,251 +0,0 @@
-from precicecasegenerate.controller_utils.myutils.UT_PCErrorLogging import (
- UT_PCErrorLogging,
-)
-from precicecasegenerate.controller_utils.ui_struct.UI_Participant import UI_Participant
-from precicecasegenerate.controller_utils.ui_struct.UI_Coupling import UI_Coupling
-from precicecasegenerate.controller_utils.precice_struct.PS_QuantityCoupled import (
- QuantityCouple,
-)
-from precicecasegenerate.controller_utils.precice_struct.PS_Mesh import PS_Mesh
-from enum import Enum
-
-
-class SolverDomain(Enum):
- """enum type to represent the physical domain of the solver"""
-
- Fluid = 0
- Solid = 1
- Heat = 2
- NotDefined = -1
-
-
-class SolverDimension(Enum):
- """The dimension of the solver, the interface could be one dimension less
- but must not be"""
-
- p1D = 1
- p2D = 2
- p3D = 3
-
-
-class SolverNature(Enum):
- """
- Enum type to show if we have a transient problem or a stationary one
- Mostly they will be transient problems
- """
-
- STATIONARY = 0
- TRANSIENT = 1
-
-
-class PS_ParticipantSolver(object):
- """Class to represent a participant in the preCICE data structure"""
-
- dim: SolverDimension
- dimensionality: int
-
- # TODO: one solver might have more than one couplings!!!
-
- def __init__(self, participant: UI_Participant): # conf:PS_PreCICEConfig
- """Ctor"""
- self.solver_domain = SolverDomain.NotDefined
-
- # Use set_dimensionality method to set solver dimension
- self.set_dimensionality(participant.dimensionality)
-
- self.nature = SolverNature.STATIONARY
- self.quantities_read = {} # list of quantities that are read by this solver
- self.quantities_write = {} # list of quantities that are written by this solver
-
- self.meshes = {} # we have each mesh for each coupling
- self.coupling_participants = (
- {}
- ) # for each coupling we also store the name of the participant
-
- self.solver_name = participant.solver_name
- self.name = participant.name
-
- pass
-
- def set_dimensionality(self, dim: int):
- """sets the dimensionality of the solver"""
- if dim == 1:
- self.dim = SolverDimension.p1D
- self.dimensionality = 1
- if dim == 2:
- self.dim = SolverDimension.p2D
- self.dimensionality = 2
- elif dim == 3:
- self.dim = SolverDimension.p3D
- self.dimensionality = 3
- else:
- raise Exception(f"dimensionality must be 1, 2 or 3")
-
- def create_mesh_for_coupling(self, conf, other_solver_name: str):
- """generates the mesh for the coupling"""
- # IMPORTANT: we call the function with the source participant first and then the rest
- coupling_mesh = conf.get_mesh_by_participant_names(self.name, other_solver_name)
- # IMPORTANT: store the current mesh name such that later we a
- # print("!!! Mesh = ", coupling_mesh.name)
- self.meshes[coupling_mesh.name] = conf.get_mesh_by_participant_names(
- self.name, other_solver_name
- )
- self.coupling_participants[other_solver_name] = 1
- pass
-
- def add_quantities_for_coupling(
- self,
- conf,
- boundary_code1: str,
- boundary_code2: str,
- other_solver_name: str,
- r_list: list,
- w_list: list,
- ):
- # there should be at least one coupling quantity, therefore no early exit
- # add mesh
- source_mesh_name = conf.get_mesh_name_by_participants(
- self.name, other_solver_name
- )
- other_mesh_name = conf.get_mesh_name_by_participants(
- other_solver_name, self.name
- )
- self.create_mesh_for_coupling(conf, other_solver_name)
-
- # Determine reading and writing quantities based on exchanges
- for exchange in conf.exchanges:
- if exchange["from"] == self.name:
- # This participant is writing data
- if exchange["data"] in w_list:
- w = conf.get_coupling_quantity(
- exchange["data"], source_mesh_name, boundary_code2, self, True
- )
- conf.add_quantity_to_mesh(other_mesh_name, w)
- conf.add_quantity_to_mesh(source_mesh_name, w)
- self.quantities_write[w.instance_name] = w
- elif exchange["to"] == self.name:
- # This participant is reading data
- if exchange["data"] in r_list:
- r = conf.get_coupling_quantity(
- exchange["data"], other_mesh_name, boundary_code1, self, False
- )
- conf.add_quantity_to_mesh(other_mesh_name, r)
- conf.add_quantity_to_mesh(source_mesh_name, r)
- self.quantities_read[r.instance_name] = r
- pass
-
- def make_participant_fsi_fluid(
- self, conf, boundary_code1: str, boundary_code2: str, other_solver_name: str
- ):
- """This method should set up the participant as a fluid solver for FSI"""
- self.add_quantities_for_coupling(
- conf,
- boundary_code1,
- boundary_code2,
- other_solver_name,
- ["Displacement"],
- ["Force"],
- )
- # set the type of the solver/participant
- self.solver_domain = SolverDomain.Fluid
- self.nature = SolverNature.TRANSIENT
- pass
-
- def make_participant_fsi_structure(
- self, conf, boundary_code1: str, boundary_code2: str, other_solver_name: str
- ):
- """This method should set up the participant as a structure solver for FSI"""
- self.add_quantities_for_coupling(
- conf,
- boundary_code1,
- boundary_code2,
- other_solver_name,
- ["Force"],
- ["Displacement"],
- )
- # set the type of the participant
- self.solver_domain = SolverDomain.Solid
- self.nature = SolverNature.TRANSIENT
- pass
-
- def make_participant_f2s_fluid(
- self, conf, boundary_code1: str, boundary_code2: str, other_solver_name: str
- ):
- """This method should set up the participant as a fluid solver for F2S"""
- self.add_quantities_for_coupling(
- conf, boundary_code1, boundary_code2, other_solver_name, [], ["Force"]
- )
- # set the type of the solver/participant
- self.solver_domain = SolverDomain.Fluid
- self.nature = SolverNature.TRANSIENT
- pass
-
- def make_participant_f2s_structure(
- self, conf, boundary_code1: str, boundary_code2: str, other_solver_name: str
- ):
- """This method should set up the participant as a structure solver for F2S"""
- self.add_quantities_for_coupling(
- conf, boundary_code1, boundary_code2, other_solver_name, ["Force"], []
- )
- # set the type of the participant
- self.solver_domain = SolverDomain.Solid
- self.nature = SolverNature.TRANSIENT
- pass
-
- def make_participant_cht_fluid(
- self,
- conf,
- boundary_code1: str,
- boundary_code2: str,
- other_solver_name: str,
- data_forward: str,
- data_backward: str,
- ):
- """makes a change heat fluid solver from the participant"""
- # print("CHT FLUID")
- heat_str = data_forward if "HeatTransfer" in data_forward else data_backward
- temperature_str = (
- data_forward if "Temperature" in data_forward else data_backward
- )
-
- self.add_quantities_for_coupling(
- conf,
- boundary_code1,
- boundary_code2,
- other_solver_name,
- [heat_str, temperature_str],
- [heat_str, temperature_str],
- )
- # set the type of the participant
- self.solver_domain = SolverDomain.Fluid
- self.nature = SolverNature.TRANSIENT
- pass
-
- def make_participant_cht_structure(
- self,
- conf,
- boundary_code1: str,
- boundary_code2: str,
- other_solver_name: str,
- data_forward: str,
- data_backward: str,
- ):
- """makes a change heat structure solver from the participant"""
- # print("CHT STRUCTURE")
- heat_str = data_forward if "HeatTransfer" in data_forward else data_backward
- temperature_str = (
- data_forward if "Temperature" in data_forward else data_backward
- )
-
- self.add_quantities_for_coupling(
- conf,
- boundary_code1,
- boundary_code2,
- other_solver_name,
- [heat_str, temperature_str],
- [heat_str, temperature_str],
- )
- # set the type of the participant
- self.solver_domain = SolverDomain.Solid
- self.nature = SolverNature.TRANSIENT
- pass
diff --git a/precicecasegenerate/controller_utils/precice_struct/PS_PreCICEConfig.py b/precicecasegenerate/controller_utils/precice_struct/PS_PreCICEConfig.py
deleted file mode 100644
index 13baaea..0000000
--- a/precicecasegenerate/controller_utils/precice_struct/PS_PreCICEConfig.py
+++ /dev/null
@@ -1,632 +0,0 @@
-from precicecasegenerate.controller_utils.myutils.UT_PCErrorLogging import (
- UT_PCErrorLogging,
-)
-from precicecasegenerate.controller_utils.ui_struct.UI_UserInput import UI_UserInput
-from precicecasegenerate.controller_utils.ui_struct.UI_Coupling import *
-from precicecasegenerate.controller_utils.precice_struct.PS_Mesh import *
-from precicecasegenerate.controller_utils.precice_struct.PS_ParticipantSolver import (
- PS_ParticipantSolver,
-)
-from precicecasegenerate.controller_utils.precice_struct.PS_CouplingScheme import *
-import xml.etree.ElementTree as etree
-import xml.dom.minidom as my_minidom
-
-
-class PS_PreCICEConfig(object):
- """Top main class for the preCICE config"""
-
- def __init__(self):
- """Ctor"""
- # the overall coupling scheme
- # this contains all the coupling information between the solvers
- self.couplingScheme = None
- # here we enlist all the solvers including their meshes
- self.solvers = {} # empty dictionary with the solvers
- self.meshes = {} # dictionary with the meshes of the coupling scenario
- self.coupling_quantities = {} # dictionary with the coupling quantities
- self.exchanges = [] # list to store full exchange details
- self.mappings_read = []
- self.mappings_write = []
- self.couplingScheme_participants = None
- self.couplingScheme = None
- self.exchange_mesh_names = []
- pass
-
- def get_coupling_quantity(
- self, quantity_name: str, source_mesh_name: str, bc: str, solver, read: bool
- ):
- """returns the coupling quantity specified by name,
- the name is a combination of mesh_name + quantity name"""
- # there could be more than one pressure or temperature therefore we
- # add always as a prefix the name of the mesh such that it will become unique
- # IMPORTANT: we need to specify the source mesh of the quantity not other mesh
-
- concat_quantity_name = quantity_name
- # print(" Q=", quantity_name, " T=", concat_quantity_name)
- if concat_quantity_name in self.coupling_quantities:
- ret = self.coupling_quantities[concat_quantity_name]
- ret.list_of_solvers[solver.name] = solver
- # print(" 1 source mesh = ", source_mesh_name, " read= ", read)
- # if this is the solver how reads it then we store it in a special way
- if read == True:
- # see which solver is set as source of this quantity
- # print(" Set Solver name ", solver.name, " for i=", concat_quantity_name)
- ret.source_solver = solver
- ret.source_mesh_name = source_mesh_name
- return ret
- ret = get_quantity_object(quantity_name, bc, concat_quantity_name)
- self.coupling_quantities[concat_quantity_name] = ret
- ret.list_of_solvers[solver.name] = solver
- # print(" 2 source mesh = ", source_mesh_name, " read= " , read)
- # if this is the solver how reads it then we store it in a special way
- if read == True:
- # see which solver is set as source of this quantity
- # print(" Set Solver name ", solver.name, " for i=", concat_quantity_name)
- ret.source_solver = solver
- ret.source_mesh_name = source_mesh_name
- return ret
-
- def get_mesh_by_name(self, mesh_name: str):
- """returns the mesh specified by name"""
- # VERY IMPORTANT: the naming convention of the mesh !!!
- # Therefore, the mesh name should be constructed only by the methods from this class
- if mesh_name in self.meshes:
- return self.meshes[mesh_name]
- # create a new mesh and add it to the dictionary
- new_mesh = PS_Mesh()
- new_mesh.name = mesh_name
- self.meshes[mesh_name] = new_mesh
- return self.meshes[mesh_name]
-
- def get_mesh_name_by_participants(self, source_participant: str, participant2: str):
- """constructs the mash name out of the two participant names."""
- # IMPORTANT: "ParticipantSource_Participant2_Mesh" -> naming convention is that the
- # first participant is the source (provider) of the mesh
- # list = [ participant1, participant2]
- # list.sort()
- mesh_name = source_participant + "-Mesh"
- return mesh_name
-
- def get_mesh_by_participant_names(self, source_participant: str, participant2: str):
- """returns the mesh specified by the two participant names"""
- mesh_name = self.get_mesh_name_by_participants(source_participant, participant2)
- mesh = self.get_mesh_by_name(mesh_name)
- return mesh
-
- def add_quantity_to_mesh(self, mesh_name: str, quantity: QuantityCouple):
- """Adds the quantity to a given mesh"""
- if mesh_name in self.meshes:
- mesh = self.meshes[mesh_name]
- mesh.add_quantity(quantity)
- pass
-
- def get_solver(self, solver_name: str):
- """returns the solver if exists"""
- if solver_name in self.solvers:
- return self.solvers[solver_name]
- # TODO: create solver ... ?
- return None
-
- def create_config(self, user_input: UI_UserInput):
- """Creates the main preCICE config from the UI structure."""
-
- self.exchanges = user_input.exchanges.copy()
- self.acceleration = user_input.acceleration
- # participants
- for participant_name in user_input.participants:
- participant_obj = user_input.participants[participant_name]
- list = participant_obj.list_of_couplings
- self.solvers[participant_name] = PS_ParticipantSolver(participant_obj)
-
- # should we do something for the couplings?
- # the couplings are added to the participants already
- max_coupling_value = 100
- for coupling in user_input.couplings:
- # for all couplings, configure the solvers properly
- participant1_name = coupling.participant1.name
- participant2_name = coupling.participant2.name
- participant1_solver = self.solvers[participant1_name]
- participant2_solver = self.solvers[participant2_name]
- max_coupling_value = min(max_coupling_value, coupling.coupling_type.value)
-
- temp_d = {}
- data_forward = ""
- data_backward = ""
-
- for d in self.exchanges:
- if d["from"] == participant1_name and d["to"] == participant2_name:
- temp_d = d
- data_forward = d["data"]
- if d["to"] == participant1_name and d["from"] == participant2_name:
- temp_d = d
- data_backward = d["data"]
-
- # ========== FSI =========
- if coupling.coupling_type == UI_CouplingType.fsi:
- # VERY IMPORTANT: we rely here on the fact that the participants are sorted alphabetically
- participant1_solver.make_participant_fsi_fluid(
- self,
- coupling.boundaryC1,
- coupling.boundaryC2,
- participant2_solver.name,
- )
- participant2_solver.make_participant_fsi_structure(
- self,
- coupling.boundaryC1,
- coupling.boundaryC2,
- participant1_solver.name,
- )
- pass
- # ========== F2S =========
- if coupling.coupling_type == UI_CouplingType.f2s:
- # VERY IMPORTANT: we rely here on the fact that the participants are sorted alphabetically
- participant1_solver.make_participant_f2s_fluid(
- self,
- coupling.boundaryC1,
- coupling.boundaryC2,
- participant2_solver.name,
- )
- participant2_solver.make_participant_f2s_structure(
- self,
- coupling.boundaryC1,
- coupling.boundaryC2,
- participant1_solver.name,
- )
- pass
- # ========== CHT =========
- if coupling.coupling_type == UI_CouplingType.cht:
- # VERY IMPORTANT: we rely here on the fact that the participants are sorted alphabetically
- participant1_solver.make_participant_cht_fluid(
- self,
- coupling.boundaryC1,
- coupling.boundaryC2,
- participant2_solver.name,
- data_forward,
- data_backward,
- )
- participant2_solver.make_participant_cht_structure(
- self,
- coupling.boundaryC1,
- coupling.boundaryC2,
- participant1_solver.name,
- data_forward,
- data_backward,
- )
- pass
- pass
-
- # Determine coupling scheme based on new coupling type logic or existing max_coupling_value
- if (
- hasattr(user_input, "coupling_type")
- and user_input.coupling_type is not None
- ):
- if user_input.coupling_type == "strong":
- self.couplingScheme = PS_ImplicitCoupling()
- elif user_input.coupling_type == "weak":
- self.couplingScheme = PS_ExplicitCoupling()
- else:
- # Fallback to existing logic if invalid type
- self.couplingScheme = (
- PS_ImplicitCoupling()
- if max_coupling_value < 2
- else PS_ExplicitCoupling()
- )
- else:
- # Use existing logic if no coupling type specified
- self.couplingScheme = (
- PS_ImplicitCoupling()
- if max_coupling_value < 2
- else PS_ExplicitCoupling()
- )
- # throw an error if no coupling type is specified and the coupling scheme is not compatible with the coupling type
- # raise ValueError("No coupling type specified and coupling scheme is not compatible with the coupling type " + ("explicit" if self.couplingScheme is PS_ExplicitCoupling() else "implicit"))
-
- # Initialize coupling scheme with user input
- self.couplingScheme.initFromUI(user_input, self)
-
- pass
-
- def write_precice_xml_config(
- self, filename: str, log: UT_PCErrorLogging, sync_mode: str, mode: str
- ):
- """This is the main entry point to write preCICE config into an XML file"""
-
- self.sync_mode = sync_mode # Store sync_mode
- self.mode = mode # Store mode
-
- nsmap = {
- "data": "data",
- "mapping": "mapping",
- "coupling-scheme": "coupling-scheme",
- "post-processing": "post-processing",
- "m2n": "m2n",
- "master": "master",
- }
-
- precice_configuration_tag = etree.Element("precice-configuration", nsmap=nsmap)
-
- # write out:
- # first get the dimensionality of the coupling
- dimensionality = 0
- for solver_name in self.solvers:
- solver = self.solvers[solver_name]
- dimensionality = max(dimensionality, solver.dimensionality)
-
- # 1 quantities
- data_from_exchanges = []
-
- for exchange in self.exchanges:
- data_key = exchange.get("data")
- data_type = exchange.get("data-type")
-
- # Safely get coupling_quantity
- coupling_quantity = self.coupling_quantities.get(data_key)
- dim = getattr(coupling_quantity, "dim", 1)
-
- data_from_exchanges.append((data_key, dim, data_type))
-
- # Track created data entries to prevent duplicates
- created_data = set()
- for data, dim, data_type in data_from_exchanges:
- mystr = "scalar"
- if data_type is not None:
- mystr = data_type
- if dim > 1:
- if data_type == "scalar":
- log.rep_info(
- f"Data {data} is a vector, but data-type is set to scalar."
- )
- mystr = "vector"
-
- if data not in created_data:
- data_tag = etree.SubElement(
- precice_configuration_tag, etree.QName("data:" + mystr), name=data
- )
- created_data.add(data)
-
- # 2 meshes
- for mesh_name in self.meshes:
- mesh = self.meshes[mesh_name]
- mesh_tag = etree.SubElement(
- precice_configuration_tag,
- "mesh",
- name=mesh.name,
- dimensions=str(dimensionality),
- )
- for quantities_name in mesh.quantities:
- quant = mesh.quantities[quantities_name]
- quant_tag = etree.SubElement(
- mesh_tag, "use-data", name=quant.instance_name
- )
-
- # Initialize dictionaries to store provide and receive meshes
- self.solver_provide_meshes = {}
- self.solver_receive_meshes = {}
- # 3 participants
- m2n_pairs_added = set()
- self.solver_tags = {}
- for solver_name in self.solvers:
- solver = self.solvers[solver_name]
- solver_tag = etree.SubElement(
- precice_configuration_tag, "participant", name=solver.name
- )
- self.solver_tags[solver_name] = solver_tag
-
- # Initialize lists for this solver's provide and receive meshes
- self.solver_provide_meshes[solver_name] = []
- self.solver_receive_meshes[solver_name] = []
-
-            # there can be more than one mesh per participant
- for solvers_mesh_name in solver.meshes:
- # print("Mesh=", solvers_mesh_name)
- solver_mesh_tag = etree.SubElement(
- solver_tag, "provide-mesh", name=solvers_mesh_name
- )
- # Save provided meshes
- self.solver_provide_meshes[solver_name].append(solvers_mesh_name)
-
- list_of_solvers_with_higher_complexity = {}
- type_of_the_mapping = {} # for each solver for the mapping
- # we also save the type of mapping (conservative / consistent)
- list_of_solvers_with_higher_complexity_read = {}
- type_of_the_mapping_read = {}
- list_of_solvers_with_higher_complexity_write = {}
- type_of_the_mapping_write = {}
- # write out the quantities that are either read or written
- # -------------------------------------------------
- # | Collect all the solvers and mappings from the coupling
- # -------------------------------------------------
- used_meshes = {}
- for q_name in solver.quantities_read:
- q = solver.quantities_read[q_name]
- read_tag = etree.SubElement(
- solver_tag,
- "read-data",
- name=q.instance_name,
- mesh=solvers_mesh_name,
- )
- for other_solvers_name in q.list_of_solvers:
- other_solver = q.list_of_solvers[other_solvers_name]
- # consistent only read
- if other_solvers_name != solver_name and q.is_consistent:
- # print(" other solver:", other_solvers_name, " solver", solver_name)
- list_of_solvers_with_higher_complexity[
- other_solvers_name
- ] = other_solver
- type_of_the_mapping[other_solvers_name] = q.mapping_string
- list_of_solvers_with_higher_complexity_read[
- other_solvers_name
- ] = other_solver
- type_of_the_mapping_read[
- other_solvers_name
- ] = q.mapping_string
- # within one participant put the "use-mesh" only once there
- if (
- solvers_mesh_name != q.source_mesh_name
- and q.source_mesh_name not in used_meshes
- ):
- solver_mesh_tag = etree.SubElement(
- solver_tag,
- "receive-mesh",
- name=q.source_mesh_name,
- from___=q.source_solver.name,
- )
- # Save received meshes
- if solver_name not in self.solver_receive_meshes:
- self.solver_receive_meshes[solver_name] = []
- if (
- q.source_mesh_name
- not in self.solver_receive_meshes[solver_name]
- ):
- self.solver_receive_meshes[solver_name].append(
- q.source_mesh_name
- )
- used_meshes[q.source_mesh_name] = 1
- pass
- pass
- for q_name in solver.quantities_write:
- q = solver.quantities_write[q_name]
- write_tag = etree.SubElement(
- solver_tag,
- "write-data",
- name=q.instance_name,
- mesh=solvers_mesh_name,
- )
- for other_solvers_name in q.list_of_solvers:
- other_solver = q.list_of_solvers[other_solvers_name]
- # conservative only write
- if other_solvers_name != solver_name and not q.is_consistent:
- # print(" other solver:", other_solvers_name, " solver", solver_name)
- list_of_solvers_with_higher_complexity[
- other_solvers_name
- ] = other_solver
- type_of_the_mapping[other_solvers_name] = q.mapping_string
- list_of_solvers_with_higher_complexity_write[
- other_solvers_name
- ] = other_solver
- type_of_the_mapping_write[
- other_solvers_name
- ] = q.mapping_string
- pass
-
-            # do the mesh mapping on the more "complex" side of the computation to avoid data-intensive traffic
- # for each mesh we look if the belonging solver has higher complexity
-
- # READS
- for other_solver_name in list_of_solvers_with_higher_complexity_read:
- other_solver = list_of_solvers_with_higher_complexity_read[
- other_solver_name
- ]
- mapping_string = type_of_the_mapping_read[other_solver_name]
- other_solver_mesh_name = self.get_mesh_name_by_participants(
- other_solver_name, solver_name
- )
- mapped_tag = etree.SubElement(
- solver_tag,
- "mapping:nearest-neighbor",
- direction="read",
- from___=other_solver_mesh_name,
- to=solvers_mesh_name,
- constraint=mapping_string,
- )
- self.mappings_read.append(
- {
- "other_solver_name": other_solver_name,
- "from": other_solver_mesh_name,
- "to": solvers_mesh_name,
- "constraint": mapping_string,
- }
- )
-
- # WRITES
- for other_solver_name in list_of_solvers_with_higher_complexity_write:
- other_solver = list_of_solvers_with_higher_complexity_write[
- other_solver_name
- ]
- mapping_string = type_of_the_mapping_write[other_solver_name]
- other_solver_mesh_name = self.get_mesh_name_by_participants(
- other_solver_name, solver_name
- )
-
- # Always add receive mesh for the participant specifying a mapping if it does not already exist
- if (
- other_solver_mesh_name
- not in self.solver_receive_meshes[solver_name]
- ):
- solver_mesh_tag = etree.SubElement(
- solver_tag,
- "receive-mesh",
- name=other_solver_mesh_name,
- from___=other_solver_name,
- )
- self.solver_receive_meshes[solver_name].append(
- other_solver_mesh_name
- )
-
- # Add write mapping
- mapped_tag = etree.SubElement(
- solver_tag,
- "mapping:nearest-neighbor",
- direction="write",
- from___=solvers_mesh_name,
- to=other_solver_mesh_name,
- constraint=mapping_string,
- )
- self.mappings_write.append(
- {
- "other_solver_name": other_solver_name,
- "from": solvers_mesh_name,
- "to": other_solver_mesh_name,
- "constraint": mapping_string,
- }
- )
- # treat M2N communications with other solver
- for other_solver_name in list_of_solvers_with_higher_complexity:
- if solver_name == other_solver_name:
- continue
- # we also add the M2N construct that is mandatory for the configuration
- # Check if this pair or its reverse has already been added
- m2n_pair = tuple(sorted([solver_name, other_solver_name]))
- if m2n_pair not in m2n_pairs_added:
- m2n_tag = etree.SubElement(
- precice_configuration_tag,
- "m2n:sockets",
- acceptor=solver_name,
- connector=other_solver_name,
- exchange___directory="..",
- )
- m2n_pairs_added.add(m2n_pair)
- pass
-
- # 4 coupling scheme
- # TODO: later this might be more complex !!!
- self.couplingScheme.write_precice_xml_config(precice_configuration_tag, self)
-
- # Validate mesh exchanges for convergence measures
- self.validate_convergence_measure_mesh_exchange(self, self.exchange_mesh_names)
- # =========== generate XML ===========================
-
- xml_string = etree.tostring(
- precice_configuration_tag, # pretty_print=True, xml_declaration=True,
- encoding="UTF-8",
- )
- # Remove xmlns:* attributes which are not recognized by preCICE
- # print( " STR: ", xml_string)
-        from_index = xml_string.decode("ascii").find("<precice-configuration")
-        to_index = xml_string.decode("ascii").find(">", from_index)
-        xml_string = (
-            xml_string.decode("ascii")[0:from_index]
-            + "<precice-configuration>"
-            + xml_string.decode("ascii")[to_index + 1 :]
- )
- # just a workaround of how to avoid problems with the parser
- # TODO: later we should find a more elegant solution
- replace_only_list = [
- ("from___", "from"),
- ("exchange___directory", "exchange-directory"),
- ]
- for a, b in replace_only_list:
- xml_string = xml_string.replace(a, b)
- replace_list = [
- ("data:", "data___"),
- ("mapping:nearest", "mapping___nearest"),
- ("m2n:", "m2n___"),
- ("coupling-scheme:", "coupling-scheme___"),
- ("acceleration:", "acceleration___"),
- ]
- for a, b in replace_list:
- xml_string = xml_string.replace(a, b)
-
- # reformat the XML and add indents
- replaced_str = my_minidom.parseString(xml_string)
- xml_string = replaced_str.toprettyxml(indent=" ")
-
- for a, b in replace_list:
- xml_string = xml_string.replace(b, a)
-
-        with open(filename, "w") as output_xml_file:
-            output_xml_file.write(xml_string)
-
- log.rep_info("Output XML file: " + filename)
-
- pass
-
- def validate_convergence_measure_mesh_exchange(self, config, exchange_mesh_names):
- """
- Validate that meshes used in convergence measures are properly exchanged in multi-coupling schemes.
-
- Args:
- config (PS_PreCICEConfig): The configuration to validate
- exchange_mesh_names (list): List of mesh names exchanged during configuration
-
- Raises:
- ValueError: If a mesh used in convergence measure is not exchanged to the control participant
- """
- # Only validate for multi-coupling schemes with more than 2 solvers
- if len(config.solvers) <= 2:
- return
-
- # Find the control participant (the one with the most meshes)
- control_participant = max(
- config.solvers, key=lambda p: len(config.solvers[p].meshes)
- )
-
- # Combine provided and received meshes for the control participant
- control_participant_meshes = set(config.solvers[control_participant].meshes)
- control_participant_meshes.update(
- self.solver_receive_meshes.get(control_participant, [])
- )
-
- exchanged_data_on_control = []
- for exchange in config.exchanges:
- if exchange.get("to").lower() == control_participant.lower():
- exchanged_data_on_control.append(exchange.get("data"))
-
- # Check if each exchanged mesh is present in the control participant's meshes
- for mesh in exchange_mesh_names:
- # Find which participant provides this mesh
- providing_participants = [
- p_name for p_name, p in config.solvers.items() if mesh in p.meshes
- ]
-
- # If no participant provides the mesh, raise an error
- if not providing_participants:
- raise ValueError(
- f"Mesh '{mesh}' used in configuration is not available to any participant"
- )
-
- # get data via topology
- for exchange in config.exchanges:
- if providing_participants[0].lower() == exchange.get("from").lower():
- data = exchange.get("data")
- if (data not in exchanged_data_on_control) and (
- exchange.get("from").lower() != control_participant.lower()
- ):
- exchanged_data_on_control.append(data)
- e = etree.SubElement(
- self.coupling_scheme,
- "exchange",
- data=data,
- mesh=mesh,
- from___=providing_participants[0],
- to=control_participant,
- )
- config.exchanges.append(
- {
- "data": data,
- "mesh": mesh,
- "from": providing_participants[0],
- "to": control_participant,
- }
- )
-
- if mesh not in control_participant_meshes:
- # Add the mesh to the control participant as receive and add an exchange for it
- solver_tag = self.solver_tags[control_participant]
- solver_mesh_tag = etree.SubElement(
- solver_tag,
- "receive-mesh",
- name=mesh,
- from___=providing_participants[0],
- )
diff --git a/precicecasegenerate/controller_utils/precice_struct/PS_QuantityCoupled.py b/precicecasegenerate/controller_utils/precice_struct/PS_QuantityCoupled.py
deleted file mode 100644
index 3d9cae2..0000000
--- a/precicecasegenerate/controller_utils/precice_struct/PS_QuantityCoupled.py
+++ /dev/null
@@ -1,125 +0,0 @@
-from precicecasegenerate.controller_utils.myutils.UT_PCErrorLogging import (
- UT_PCErrorLogging,
-)
-from enum import Enum
-
-
-class QuantityCouple(object):
- """the quantity that is coupled"""
-
- def __init__(self):
- self.name = "None" # the name of the quantity as it is called physically
- self.instance_name = "None" # this will be the solver name "-" quantity name, example: "InnerSolver-Pressure"
- self.unit = "None" # unit of the quantity
- self.BC = -1 # boundary code for the coupling
- self.relative_tolerance = 1e-4 # the relative convergence for coupling
- self.list_of_solvers = (
- {}
- ) # list of solvers that use this quantity (either read or write)
-        self.source_solver = (
-            None  # the origin of this quantity, i.e. the solver that creates it
-        )
- self.source_mesh_name = "None" # the source mesh name
- self.mapping_string = "ERROR" # conservative or consistent
- self.dim = 3 # the dimension of the quantity
- self.is_consistent = (
- True # True if this quantity is consistent False if it is conservative
- )
- pass
-
-
-def get_quantity_object(name: str, bc: str, instance_name: str):
- """Function to create coupling quantity"""
- ret = None
- if name.startswith("Force"):
- ret = Force()
- if name.startswith("Displacement"):
- ret = Displacement()
- if name.startswith("Velocity"):
- ret = Velocity()
- if name.startswith("Pressure"):
- ret = Pressure()
- if name.startswith("Temperature"):
- ret = Temperature()
- if name.startswith("HeatTransfer"):
- ret = HeatTransfer()
- if ret is None:
- # TODO: report error
- return QuantityCouple()
- else:
- # set the boundary code at the source solver
- ret.BC = bc
- # the instance name is like "InnerSolver-Pressure" (a combination of solver name and quantity name)
- # print(" Instance Name = ",instance_name)
- ret.instance_name = instance_name
- return ret
-
-
-class Force(QuantityCouple):
- """Forces"""
-
- def __init__(self):
- super().__init__()
- self.name = "Force"
- self.unit = "N"
- self.mapping_string = "conservative"
- self.is_consistent = False
- pass
-
-
-class Displacement(QuantityCouple):
- """Displacements"""
-
- def __init__(self):
- super().__init__()
- self.name = "Displacement"
- self.unit = "m"
- self.mapping_string = "consistent"
- pass
-
-
-class Velocity(QuantityCouple):
- """Velocities"""
-
- def __init__(self):
- super().__init__()
- self.name = "Velocity"
- self.unit = "m/s"
- self.mapping_string = "consistent"
- pass
-
-
-class Pressure(QuantityCouple):
- """Pressures"""
-
- def __init__(self):
- super().__init__()
- self.name = "Pressure"
- self.unit = "N/m^2"
- self.mapping_string = "consistent"
- self.dim = 1
- pass
-
-
-class Temperature(QuantityCouple):
- """temperature"""
-
- def __init__(self):
- super().__init__()
- self.name = "Temperature"
- self.unit = "C"
- self.mapping_string = "consistent"
- self.dim = 1
- pass
-
-
-class HeatTransfer(QuantityCouple):
- """heat transfer"""
-
- def __init__(self):
- super().__init__()
- self.name = "HeatTransfer"
- self.unit = "?"
- self.mapping_string = "consistent"
- self.dim = 1
- pass
diff --git a/precicecasegenerate/controller_utils/precice_struct/__init__.py b/precicecasegenerate/controller_utils/precice_struct/__init__.py
deleted file mode 100644
index 8f28a53..0000000
--- a/precicecasegenerate/controller_utils/precice_struct/__init__.py
+++ /dev/null
@@ -1,8 +0,0 @@
-from .PS_ParticipantSolver import PS_ParticipantSolver
-from .PS_ParticipantSolver import SolverDomain
-from .PS_ParticipantSolver import SolverDimension
-from .PS_ParticipantSolver import SolverNature
-from .PS_QuantityCoupled import QuantityCouple
-from .PS_PreCICEConfig import PS_PreCICEConfig
-from .PS_CouplingScheme import PS_ImplicitCoupling
-from .PS_CouplingScheme import PS_ExplicitCoupling
diff --git a/precicecasegenerate/controller_utils/ui_struct/UI_Coupling.py b/precicecasegenerate/controller_utils/ui_struct/UI_Coupling.py
deleted file mode 100644
index 243d5b8..0000000
--- a/precicecasegenerate/controller_utils/ui_struct/UI_Coupling.py
+++ /dev/null
@@ -1,89 +0,0 @@
-from precicecasegenerate.controller_utils.myutils.UT_PCErrorLogging import (
- UT_PCErrorLogging,
-)
-from enum import Enum
-
-
-class UI_CouplingType(Enum):
- """enum type to represent the different coupling types"""
-
- fsi = 0
- cht = 1
- f2s = 2
-
-
-class UI_Coupling(object):
- """
- This class contains information on the user input level
- regarding the coupling of two participants
- """
-
- def __init__(self):
- """The constructor."""
- self.boundaryC1 = -1
- self.boundaryC2 = -1
- self.participant1 = None
- self.participant2 = None
- self.coupling_type = None
- pass
-
- def init_from_yaml(
- self, name_coupling: str, etree, participants: dict, mylog: UT_PCErrorLogging
- ):
- """Method to initialize fields from a parsed YAML file node"""
-
- # new coupling info
-        # the name of the coupling is its type: "fsi", "cht" or "f2s"
-
- if name_coupling == "fsi":
- # fsi coupling, meaning we have "fluid" and "structure", implicit coupling
- self.coupling_type = UI_CouplingType.fsi
- pass
- elif name_coupling == "f2s":
-            # f2s coupling, meaning we have "fluid" and "structure", explicit coupling
- self.coupling_type = UI_CouplingType.f2s
- pass
- elif name_coupling == "cht":
- # conjugate heat transfer -> there we also have fluid and structure
- self.coupling_type = UI_CouplingType.cht
- pass
- else:
- # Throw an error
- mylog.rep_error("Unknown coupling type:" + name_coupling)
-
- # print("parse participants")
- # parse all the participants within a coupling
- try:
-            # TODO: we assume that we will have only fluids and structures?
-            # TODO: we should add all of this to a single list
- participants_loop = {"fluid": etree["fluid"]}
- participants_loop.update({"structure": etree["structure"]})
-
- # VERY IMPORTANT: we sort here the keys alphabetically!!!
- # this is an important assumption also in other parts of the code, that the participant1
- # and participant2 are in alphabetical order. example 1) fluid 2) structure at fsi
- for participant_name in sorted(participants_loop):
-
- participant_el = participants_loop[participant_name]
- participant_real_name = participant_el["name"]
- participant_interface = participant_el["interface"]
-
- participant = participants[participant_real_name]
-            participant.solver_domain = participant_name  # this might be fluid or structure or something else
- # add only to the first participant the coupling
- participant.list_of_couplings.append(self)
- # now link this to one of the participants
- if self.participant1 is None:
- self.participant1 = participant
- self.boundaryC1 = participant_interface
- else:
- self.participant2 = participant
- self.boundaryC2 = participant_interface
-
-        except Exception:
- mylog.rep_error(
- "Error in YAML initialization of the Coupling name="
- + name_coupling
- + " data:"
- )
- pass
diff --git a/precicecasegenerate/controller_utils/ui_struct/UI_Participant.py b/precicecasegenerate/controller_utils/ui_struct/UI_Participant.py
deleted file mode 100644
index d351f91..0000000
--- a/precicecasegenerate/controller_utils/ui_struct/UI_Participant.py
+++ /dev/null
@@ -1,46 +0,0 @@
-from precicecasegenerate.controller_utils.myutils.UT_PCErrorLogging import (
- UT_PCErrorLogging,
-)
-from precicecasegenerate.controller_utils.ui_struct.UI_Coupling import UI_Coupling
-
-
-class UI_Participant(object):
- """
- This class represents one participant as it is declared on the user input level
- """
-
- def __init__(
- self,
- name: str = "",
- solver_name: str = "",
- list_of_couplings=None,
- solver_domain: str = "",
- data_type: str = "scalar",
- dimensionality: int = None,
- ):
- if list_of_couplings is None:
- list_of_couplings = []
-
- self.name = name
- self.solver_name = solver_name
- self.list_of_couplings = list_of_couplings # list of empty couplings
- self.solver_domain = solver_domain # this shows if this participant is a fluid or structure or else solver
- self.data_type = data_type
- self.dimensionality = dimensionality
-
- pass
-
- @classmethod
- def from_yaml(cls, etree, participant_name: str, mylog: UT_PCErrorLogging):
- """Method to initialize fields from a parsed YAML file node"""
- self = cls()
-
- try:
- self.name = participant_name
- self.solver_name = etree["solver"]
- self.data_type = etree["data-type"]
-        except Exception:
- mylog.rep_error("Error in YAML initialization of the Participant.")
- pass
-
- return self
diff --git a/precicecasegenerate/controller_utils/ui_struct/UI_SimulationInfo.py b/precicecasegenerate/controller_utils/ui_struct/UI_SimulationInfo.py
deleted file mode 100644
index f0e8013..0000000
--- a/precicecasegenerate/controller_utils/ui_struct/UI_SimulationInfo.py
+++ /dev/null
@@ -1,40 +0,0 @@
-from precicecasegenerate.controller_utils.myutils.UT_PCErrorLogging import (
- UT_PCErrorLogging,
-)
-
-
-class UI_SimulationInfo(object):
- """
- This class contains information on the user input level regarding the
- general simulation information
- """
-
- def __init__(self):
- """The constructor."""
- self.steady = False
- self.NrTimeStep = -1
- self.Dt = 1e-3
- self.max_iterations = 50
- self.accuracy = "medium"
- self.mode = "on"
- self.sync_mode = "fundamental"
- self.display_standard_values = "false"
- self.coupling = "parallel"
- pass
-
- def init_from_yaml(self, etree, mylog: UT_PCErrorLogging):
- """Method to initialize fields from a parsed YAML file node"""
- # catch exceptions if these items are not in the list
- try:
- self.steady = etree.get("steady-state")
- self.NrTimeStep = etree.get("timesteps")
- self.Dt = etree.get("time-window-size")
- self.display_standard_values = etree.get("display_standard_values", "false")
- self.max_iterations = etree.get("max-iterations")
- self.accuracy = etree.get("accuracy")
- self.sync_mode = etree.get("synchronize")
- self.mode = etree.get("mode")
- self.coupling = etree.get("coupling")
-        except Exception:
- mylog.rep_error("Error in YAML initialization of the Simulator info.")
- pass
diff --git a/precicecasegenerate/controller_utils/ui_struct/UI_UserInput.py b/precicecasegenerate/controller_utils/ui_struct/UI_UserInput.py
deleted file mode 100644
index b4acbc5..0000000
--- a/precicecasegenerate/controller_utils/ui_struct/UI_UserInput.py
+++ /dev/null
@@ -1,272 +0,0 @@
-from precicecasegenerate.controller_utils.ui_struct.UI_SimulationInfo import (
- UI_SimulationInfo,
-)
-from precicecasegenerate.controller_utils.ui_struct.UI_Participant import UI_Participant
-from precicecasegenerate.controller_utils.ui_struct.UI_Coupling import UI_Coupling
-from precicecasegenerate.controller_utils.myutils.UT_PCErrorLogging import (
- UT_PCErrorLogging,
-)
-from precicecasegenerate.controller_utils.ui_struct.UI_Coupling import UI_CouplingType
-
-
-class UI_UserInput(object):
- """
- This class represents the main object that contains either one YAML file
- or a user input through a GUI
-
- The main components are:
- - the list of participants
- - general simulation information
- """
-
- def __init__(self):
- """The constructor, dummy initialization of the fields"""
- self.sim_info = UI_SimulationInfo()
- self.participants = {} # empty participants stored as a dictionary
- self.couplings = [] # empty coupling list
- self.exchanges = [] # empty exchanges list
- pass
-
- def init_from_yaml(self, etree, mylog: UT_PCErrorLogging):
- # Check if using new topology structure
- if (
- "coupling-scheme" in etree
- and "participants" in etree
- and "exchanges" in etree
- ):
- # --- Parse simulation info from 'coupling-scheme' ---
- simulation_info = etree["coupling-scheme"]
- self.sim_info.NrTimeStep = simulation_info.get("max-time")
- self.sim_info.Dt = simulation_info.get("time-window-size")
- self.sim_info.max_iterations = simulation_info.get("max-iterations")
- self.sim_info.display_standard_values = simulation_info.get(
- "display_standard_values", "false"
- )
- self.sim_info.coupling = simulation_info.get("coupling", "parallel")
-
-            # Initialize coupling type and acceleration to None
- self.coupling_type = None
- self.acceleration = None
-
- # Extract coupling type from exchanges
- if "exchanges" in etree:
- exchanges = etree["exchanges"]
- exchange_types = [
- exchange.get("type") for exchange in exchanges if "type" in exchange
- ]
-
- # Validate exchange types
- if exchange_types:
- # If all types are the same, set that as the coupling type
- if len(set(exchange_types)) == 1:
- if exchange_types[0] == "strong" or exchange_types[0] == "weak":
- self.coupling_type = exchange_types[0]
- else:
- mylog.rep_error(
- f"Invalid exchange type: {exchange_types[0]}. Must be 'strong' or 'weak'."
- )
- self.coupling_type = None
- else:
- # Mixed types, default to weak
- # mylog.rep_error("Mixed exchange types detected. Defaulting to 'weak'.")
- self.coupling_type = "weak"
-
- # --- Parse Acceleration ---
- if "acceleration" in etree:
- acceleration = etree["acceleration"]
- display_standard_values = acceleration.get(
- "display_standard_values", "false"
- )
- if display_standard_values.lower() not in ["true", "false"]:
- mylog.rep_error(
- f"Invalid display_standard_values value: {display_standard_values}. Must be 'true' or 'false'."
- )
- if display_standard_values.lower() == "true":
- self.acceleration = {
- "name": acceleration.get("name", "IQN-ILS"),
- "initial-relaxation": {
- "value": acceleration.get("initial-relaxation", {}).get(
- "value", 0.1
- ),
- "enforce": acceleration.get("initial-relaxation", {}).get(
- "enforce", "false"
- ),
- },
- "preconditioner": {
- "freeze-after": acceleration.get("preconditioner", {}).get(
- "freeze-after", -1
- ),
- "type": acceleration.get("preconditioner", {}).get(
- "type", None
- ),
- },
- "filter": {
- "limit": acceleration.get("filter", {}).get("limit", 1e-16),
- "type": acceleration.get("filter", {}).get("type", None),
- },
- "max-used-iterations": acceleration.get(
- "max-used-iterations", None
- ),
- "time-windows-reused": acceleration.get(
- "time-windows-reused", None
- ),
- "imvj-restart-mode": {
- "truncation-threshold": acceleration.get(
- "imvj-restart-mode", {}
- ).get("truncation-threshold", None),
- "chunk-size": acceleration.get("imvj-restart-mode", {}).get(
- "chunk-size", None
- ),
- "reused-time-windows-at-restart": acceleration.get(
- "imvj-restart-mode", {}
- ).get("reused-time-windows-at-restart", None),
- "type": acceleration.get("imvj-restart-mode", {}).get(
- "type", None
- ),
- }
- if any(acceleration.get("imvj-restart-mode", {}).values())
- else None,
- "display_standard_values": acceleration.get(
- "display_standard_values", "false"
- ),
- }
- # If display_standard_values is false, set default values to none so they are not displayed
- else:
- self.acceleration = {
- "name": acceleration.get("name", "IQN-ILS"),
- "initial-relaxation": acceleration.get(
- "initial-relaxation", None
- ),
- "preconditioner": {
- "freeze-after": acceleration.get("preconditioner", {}).get(
- "freeze-after", None
- ),
- "type": acceleration.get("preconditioner", {}).get(
- "type", None
- ),
- }
- if any(acceleration.get("preconditioner", {}).values())
- else None,
- "initial-relaxation": {
- "value": acceleration.get("initial-relaxation", {}).get(
- "value", None
- ),
- "enforce": acceleration.get("initial-relaxation", {}).get(
- "enforce", None
- ),
- }
- if any(acceleration.get("initial-relaxation", {}).values())
- else None,
- "filter": {
- "limit": acceleration.get("filter", {}).get("limit", None),
- "type": acceleration.get("filter", {}).get("type", None),
- }
- if any(acceleration.get("filter", {}).values())
- else None,
- "max-used-iterations": acceleration.get(
- "max-used-iterations", None
- ),
- "time-windows-reused": acceleration.get(
- "time-windows-reused", None
- ),
- "imvj-restart-mode": {
- "truncation-threshold": acceleration.get(
- "imvj-restart-mode", {}
- ).get("truncation-threshold", None),
- "chunk-size": acceleration.get("imvj-restart-mode", {}).get(
- "chunk-size", None
- ),
- "reused-time-windows-at-restart": acceleration.get(
- "imvj-restart-mode", {}
- ).get("reused-time-windows-at-restart", None),
- "type": acceleration.get("imvj-restart-mode", {}).get(
- "type", None
- ),
- }
- if any(acceleration.get("imvj-restart-mode", {}).values())
- else None,
- "display_standard_values": acceleration.get(
- "display_standard_values", "false"
- ),
- }
-
- # --- Parse participants ---
- self.participants = {}
- participants_data = etree["participants"]
- for participant in participants_data:
- # Handle new list of dictionaries format
- if isinstance(participant, dict):
- name = participant.get("name")
- solver_info = participant
-
- if name is None:
- mylog.rep_error(
- f"Participant missing 'name' key: {participant}"
- )
- continue
-
- solver_name = solver_info.get("solver", name)
- dimensionality = solver_info.get("dimensionality", 3)
-
- new_participant = UI_Participant(
- name, solver_name, dimensionality=dimensionality
- )
- self.participants[new_participant.name] = new_participant
- else:
- # Unsupported format
- mylog.rep_error(
- f"Unsupported participant configuration: {participant}"
- )
- continue
-
- # --- Parse couplings from exchanges ---
- exchanges_list = etree["exchanges"]
- # Save full exchange details
- self.exchanges = exchanges_list.copy()
-
- # Group exchanges by unique participant pairs
- groups = {}
- for exchange in exchanges_list:
- exchanges = (
- exchange.get("data-type").lower()
- if exchange.get("data-type") is not None
- else "scalar"
- )
- pair = tuple(sorted([exchange["from"], exchange["to"]]))
- groups.setdefault(pair, []).append(exchange)
-
- self.couplings = []
- for pair, ex_list in groups.items():
- coupling = UI_Coupling()
- p1_name, p2_name = pair
- coupling.participant1 = self.participants[p1_name]
- coupling.participant2 = self.participants[p2_name]
-
- # Determine coupling type based on exchanged data
- data_names = {ex["data"] for ex in ex_list}
- if any(name.startswith("Force") for name in data_names) and any(
- name.startswith("Displacement") for name in data_names
- ):
- coupling.coupling_type = UI_CouplingType.fsi
- elif any(name.startswith("Force") for name in data_names):
- coupling.coupling_type = UI_CouplingType.f2s
- # elif any("temperature" in name.lower() or "heat" in name.lower() for name in data_names):
- elif any(
- name.startswith("Temperature") or name.startswith("HeatTransfer")
- for name in data_names
- ):
- coupling.coupling_type = UI_CouplingType.cht
- else:
- # TODO: Handle Velocity, Pressure
- raise NameError(
- "Found Velocity, Pressure or an invalid coupling type that is unsupported."
- )
-
- # Use the first exchange's patches as boundary interfaces (simple heuristic)
- first_ex = ex_list[0]
- coupling.boundaryC1 = first_ex.get("from-patch", "")
- coupling.boundaryC2 = first_ex.get("to-patch", "")
-
- self.couplings.append(coupling)
- coupling.participant1.list_of_couplings.append(coupling)
- coupling.participant2.list_of_couplings.append(coupling)
diff --git a/precicecasegenerate/controller_utils/ui_struct/__init__.py b/precicecasegenerate/controller_utils/ui_struct/__init__.py
deleted file mode 100644
index e6335d9..0000000
--- a/precicecasegenerate/controller_utils/ui_struct/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from .UI_Coupling import *
-from .UI_UserInput import UI_UserInput
diff --git a/precicecasegenerate/file_creators/adapter_config_creator.py b/precicecasegenerate/file_creators/adapter_config_creator.py
new file mode 100644
index 0000000..93b8ccc
--- /dev/null
+++ b/precicecasegenerate/file_creators/adapter_config_creator.py
@@ -0,0 +1,116 @@
+import json
+import logging
+from pathlib import Path
+import preciceadapterschema
+
+from precice_config_graph import nodes as n
+
+import precicecasegenerate.helper as helper
+
+logger = logging.getLogger(__name__)
+
+
+class AdapterConfigCreator:
+ """
+ A class that handles creating adapter configuration files for participants in the correct directory.
+ """
+
+ def __init__(self, participant_solver_map: dict[n.ParticipantNode, str],
+ mesh_patch_map: dict[n.MeshNode, set[str]], precice_config_filename: str = "precice-config.xml"):
+ """
+ Initialize an AdapterConfigCreator object, which creates adapter configuration files for each participant.
+ :param participant_solver_map: A dict mapping participant nodes to their solver names.
+ :param mesh_patch_map: A map mapping meshes to sets of patches.
+ :param precice_config_filename: The name of the precice-config.xml file.
+ """
+ self.participant_solver_map = participant_solver_map
+ self.patch_map = mesh_patch_map
+ self.precice_config_filename = precice_config_filename
+
+ def _create_adapter_config_dict(self, participant: n.ParticipantNode,
+ mesh_patch_map: dict[n.MeshNode, set[str]]) -> dict[str, str | list[str]]:
+ """
+ Create a dictionary representing the adapter configuration file for the given participant.
+ :param participant: The participant for which the adapter configuration is created.
+ :param mesh_patch_map: A map mapping meshes to sets of patches that they use.
+ :return: A dict representing the adapter configuration file for the given participant.
+ """
+ interfaces: list[dict[str, str | list[str]]] = []
+
+ # Create an entry for each mesh
+ for mesh in participant.provide_meshes:
+ # Get the patches used by the current mesh
+            patches: list[str] = sorted(mesh_patch_map.get(mesh, []))
+
+ # Get the read-data of the participant and the current mesh
+ read_data: list[str] = [
+ rd.data.name for rd in participant.read_data
+ if rd.mesh == mesh
+ ]
+
+ # Get the write-data of the participant and the current mesh
+ write_data: list[str] = [
+ wd.data.name for wd in participant.write_data
+ if wd.mesh == mesh
+ ]
+
+ # The mesh entry is a dictionary containing the mesh name and the patches used by it
+ mesh_entry: dict[str, str | list[str]] = {
+ "mesh_name": mesh.name,
+ "patches": patches
+ }
+
+ # Only add read-/write-data keys if the mesh reads/writes data
+ if read_data:
+ mesh_entry["read_data_names"] = sorted(read_data)
+ if write_data:
+ mesh_entry["write_data_names"] = sorted(write_data)
+
+ # Add the mesh entry to the list of interfaces
+ interfaces.append(mesh_entry)
+ logger.debug(f"Created adapter configuration entry for mesh {mesh.name} "
+ f"in participant {participant.name}'s adapter-config.")
+
+ # The adapter-config file is a dictionary containing the participant name,
+ # the path to the precice-config.xml file and the list of interfaces
+ return {
+ "participant_name": participant.name,
+ "precice_config_file_path": f"../{self.precice_config_filename}",
+ "interfaces": interfaces
+ }
+
+ def _create_adapter_config_file(self, adapter_config_dict: dict[str, str | list[str]],
+                                    directory: Path | str = "./", filename: str = "adapter-config.json"):
+ """
+ Write an adapter-config.json file for the given participant to the given directory.
+ :param adapter_config_dict: The dict representing the adapter configuration file.
+ :param directory: The directory to save the file in.
+ :param filename: The name of the file.
+ """
+ # Convert to Path object just in case
+ directory = Path(directory)
+ file_path: Path = directory / filename
+ with open(file_path, "w") as f:
+ json.dump(adapter_config_dict, f, indent=4)
+ logger.info(f"Adapter configuration file written to {file_path}")
+
+    def create_adapter_configs(self, parent_directory: Path | str = "./"):
+ """
+        Create adapter-config.json files for all participants and validate each one directly afterward.
+        The files are saved under the given parent directory, in subdirectories of the form "participant-solver/".
+        :param parent_directory: The directory under which the participant subdirectories are created.
+        """
+ # Convert to Path object just in case
+ parent_directory = Path(parent_directory)
+ for participant in self.participant_solver_map:
+ logger.debug(f"Creating adapter configuration file for participant {participant.name}.")
+ directory: Path = helper.get_participant_solver_directory(parent_directory, participant.name,
+ self.participant_solver_map[participant])
+ adapter_config_dict: dict[str, str | list[str]] = self._create_adapter_config_dict(participant,
+ self.patch_map)
+            try:
+                preciceadapterschema.validate(adapter_config_dict)
+                logger.debug(f"Adapter config for participant {participant.name} adheres to the schema.")
+            except Exception as e:
+                logger.error(f"Adapter config for participant {participant.name} does not adhere to the schema "
+                             f"as specified by the precice-adapter-schema: {e}. This is likely an error within the program.")
+ self._create_adapter_config_file(adapter_config_dict, directory=directory)
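For orientation, the dict assembled by `_create_adapter_config_dict` serializes to JSON of roughly the following shape; the participant, mesh, patch, and data names in this sketch are invented for illustration:

```python
import json

# Illustrative shape of the dict built by _create_adapter_config_dict;
# "Fluid", "Fluid-Mesh", "interface" etc. are made-up example names.
adapter_config = {
    "participant_name": "Fluid",
    "precice_config_file_path": "../precice-config.xml",
    "interfaces": [
        {
            "mesh_name": "Fluid-Mesh",
            "patches": ["interface"],
            "read_data_names": ["Displacement"],
            "write_data_names": ["Force"],
        }
    ],
}

# _create_adapter_config_file writes this out with json.dump(..., indent=4)
print(json.dumps(adapter_config, indent=4))
```

Note that `read_data_names`/`write_data_names` are only present when the mesh actually reads/writes data, matching the conditional keys above.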
diff --git a/precicecasegenerate/file_creators/config_creator.py b/precicecasegenerate/file_creators/config_creator.py
new file mode 100644
index 0000000..be2ba94
--- /dev/null
+++ b/precicecasegenerate/file_creators/config_creator.py
@@ -0,0 +1,67 @@
+import logging
+import subprocess
+from pathlib import Path
+from precice_config_graph import nodes as n
+import precice_config_graph.graph.operations as operations
+
+import precicecasegenerate.helper as helper
+
+logger = logging.getLogger(__name__)
+
+
+class ConfigCreator:
+ """
+ A class that handles creating preCICE configuration files.
+ """
+
+ def __init__(self, config_topology: dict[str, list[n.ParticipantNode] | list[n.DataNode] | list[n.MeshNode]
+ | list[n.CouplingSchemeNode] | list[n.MultiCouplingSchemeNode]
+ | list[n.M2NNode]]):
+ """
+ Initialize a ConfigCreator object with a dict that specifies how the preCICE configuration should be created.
+ :param config_topology: A dict that contains participants, data nodes, meshes, coupling-schemes and M2N nodes.
+ """
+ self.config_topology = config_topology
+
+    def validate_config_file(self, filepath: Path | str = "./precice-config.xml") -> None:
+ """
+ Validate the preCICE configuration file at the given filepath using precice-config-check.
+        The precice-config-check subprocess only reports logical errors and fails outright on syntactic errors.
+        A configuration file that passes this check is therefore both syntactically and logically correct.
+ :param filepath: The path to the preCICE configuration file.
+ """
+ result = subprocess.run(
+ ["precice-config-check", filepath],
+ capture_output=True, # capture stdout/stderr
+ text=True # return strings instead of bytes
+ )
+        # Exit code 0 means the file is valid
+ if result.returncode == 0:
+ logger.debug("preCICE configuration file has been validated with precice-config-check.")
+        # Exit code 1 means the file could not be parsed (syntactic errors)
+ elif result.returncode == 1:
+ logger.error(
+ f"The generated preCICE configuration file failed to validate with precice-config-check due to syntactic errors:\n"
+ f"{''.join('> ' + line for line in result.stderr.splitlines(keepends=True))}\n"
+ f"This is likely an error within this program. Please visit {helper.case_generate_repository_url} for more help.")
+        # Exit code 2 means the file was parsed correctly but contains logical errors
+ elif result.returncode == 2:
+ logger.error(
+ f"The generated preCICE configuration file failed to validate with precice-config-check due to logical errors:\n"
+ f"{''.join('> ' + line for line in result.stdout.splitlines(keepends=True))}\n"
+ f"This is likely an error within this program. "
+ f"You can either try to fix the configuration file yourself or visit "
+ f"{helper.case_generate_repository_url} for more help.")
+
+    def create_config_file(self, directory: Path | str = ".", filename: str = "precice-config.xml") -> None:
+ """
+ Create a configuration file.
+ The file is saved in the given directory with the given filename.
+ :param directory: The directory to save the file in.
+ :param filename: The filename of the file.
+ """
+ # Convert to Path object just in case
+ directory = Path(directory)
+ file_path: Path = directory / filename
+ operations.create_config_file_from_dict(self.config_topology, path=directory, filename=filename)
+ logger.info(f"preCICE configuration file written to {file_path}")
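The exit-code dispatch in `validate_config_file` can be sketched in isolation; the stubbed `CompletedProcess` below stands in for a real `precice-config-check` run, so no external tool is needed:

```python
import subprocess

def interpret_check_result(result: subprocess.CompletedProcess) -> str:
    """Mirror of the exit-code handling in validate_config_file."""
    if result.returncode == 0:
        return "valid"
    if result.returncode == 1:  # syntactic errors are reported on stderr
        return "syntactic errors:\n" + "".join(
            "> " + line for line in result.stderr.splitlines(keepends=True))
    # exit code 2: parsed correctly, logical errors are reported on stdout
    return "logical errors:\n" + "".join(
        "> " + line for line in result.stdout.splitlines(keepends=True))

# Stubbed result standing in for an actual precice-config-check invocation;
# the message text is invented for this sketch.
fake = subprocess.CompletedProcess(
    args=["precice-config-check", "precice-config.xml"],
    returncode=2, stdout="Mesh 'Fluid-Mesh' is never used\n", stderr="")
print(interpret_check_result(fake))
```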
diff --git a/precicecasegenerate/file_creators/utility_file_creator.py b/precicecasegenerate/file_creators/utility_file_creator.py
new file mode 100644
index 0000000..93daa5f
--- /dev/null
+++ b/precicecasegenerate/file_creators/utility_file_creator.py
@@ -0,0 +1,222 @@
+import logging
+import shutil
+from pathlib import Path
+from importlib.resources import files, as_file
+
+from precice_config_graph import nodes as n
+
+import precicecasegenerate.helper as helper
+
+logger = logging.getLogger(__name__)
+
+
+class UtilityFileCreator:
+ """
+ A class to create utility files for the generated project.
+ """
+
+ def __init__(self, participant_solver_map: dict[n.ParticipantNode, str]):
+ """
+ Initialize a UtilityFileCreator object, which creates a global clean.sh file, a global README.md file
+ and a run.sh file for each participant-solver pair.
+ :param participant_solver_map: A dict mapping participants to their solver names.
+ """
+ self.participant_solver_map = participant_solver_map
+
+    def create_utility_files(self, parent_directory: Path | str = "./") -> None:
+ """
+        Create all utility files for the generated project:
+        a global clean.sh and README.md, plus a run.sh for each participant-solver pair.
+        :param parent_directory: The directory under which the files are saved.
+ :return: None
+ """
+ # Convert to Path object just in case
+ parent_directory = Path(parent_directory)
+ self._create_clean_file(parent_directory)
+ # Create a run file for each participant
+ for participant in self.participant_solver_map:
+ participant_directory = helper.get_participant_solver_directory(parent_directory, participant.name,
+ self.participant_solver_map[participant])
+ self._create_run_file(participant_directory)
+ self._create_readme_file(parent_directory)
+
+    def _create_clean_file(self, directory: Path | str = "./") -> None:
+ """
+ Create a clean-file for the simulation in the given directory.
+ This copies the template file `precicecasegenerate/templates/clean.sh` to the specified directory.
+ :param directory: The directory to save the file in.
+ :return: None
+ """
+ # Convert to Path object just in case
+ directory = Path(directory)
+ # Get the file to copy
+ src = files("precicecasegenerate.templates") / "clean.sh"
+ # Create directory if it does not exist
+ directory.mkdir(parents=True, exist_ok=True)
+
+ file_path: Path = directory / src.name
+
+ # Use as_file to get a real path even if the file is zipped
+ with as_file(src) as real_src_path:
+ shutil.copy2(real_src_path, file_path)
+ logger.debug(f"File clean.sh written to {file_path.resolve()}")
+
+    def _create_run_file(self, directory: Path | str = "./") -> None:
+ """
+ Create a run file for a participant in the given directory.
+ This copies the template file `precicecasegenerate/templates/run.sh` to the specified directory.
+ :param directory: The directory to save the file in.
+ :return: None
+ """
+ # Convert to Path object just in case
+ directory = Path(directory)
+ # Get the file to copy
+ src = files("precicecasegenerate.templates") / "run.sh"
+ # Create directory if it does not exist
+ directory.mkdir(parents=True, exist_ok=True)
+
+ file_path: Path = directory / src.name
+
+ # Use as_file to get a real path
+ with as_file(src) as real_src_path:
+ shutil.copy2(real_src_path, file_path)
+ logger.debug(f"File run.sh written to {file_path.resolve()}")
+
+    def _create_readme_file(self, directory: Path | str = "./", filename: str = "README.md") -> None:
+ """
+ Create a README file in the given directory.
+        :param directory: The directory to save the file in.
+        :param filename: The name of the file.
+ :return: None
+ """
+ # Convert to Path object just in case
+ directory = Path(directory)
+ # Create directory if it does not exist
+ directory.mkdir(parents=True, exist_ok=True)
+
+ file_path: Path = directory / filename
+ with open(file_path, "w") as f:
+ f.write(self._create_readme_str())
+ logger.info(f"README file written to {file_path.resolve()}")
+
+ def _create_readme_str(self) -> str:
+ """
+ Create a string representing the README file.
+        :return: A string representing the README file.
+ """
+        # Creates a horizontal rule between topics
+ topic_separator: str = "\n---\n\n"
+
+ readme_str: str = (
+ "# Multiphysics Simulation Project\n"
+ "\n"
+            "> [!NOTE]\n"
+            "> This `README.md` file was auto-generated by preCICE case-generate.\n"
+ "\n"
+ )
+ readme_str += topic_separator
+
+ readme_str += (
+ "## Project Overview\n"
+ "\n"
+ "This project uses **preCICE** for a multiphysics simulation involving:\n"
+ "\n"
+ )
+ for participant in self.participant_solver_map:
+ solver: str = self.participant_solver_map[participant]
+ readme_str += (
+ f"- Solver `{solver}` with participant `{participant.name}`\n"
+ )
+ readme_str += "\n"
+ readme_str += (
+ "### Project Structure\n"
+ "\n"
+ "Global files that are generated are: `precice-config.xml`, `README.md` and `clean.sh`. "
+ "Additionally, for each participant, a folder with an `adapter-config.json` and a `run.sh` file are created.\n"
+ "The folder structure is as follows:\n"
+ "\n"
+ "```\n"
+ "_generated/\n"
+ " ├── README.md\t\t\t# This file\n"
+ " ├── clean.sh\t\t\t# Clean up script\n"
+ )
+ for participant in self.participant_solver_map:
+ readme_str += (
+                f"    ├── {participant.name.lower()}-{self.participant_solver_map[participant].lower()}/\n"
+ " │ ├── adapter-config.json\n"
+ " │ └── run.sh\t\t\t# Not yet implemented\n"
+ )
+ readme_str += (
+ " └── precice-config.xml\t\t# Global precice-config.xml file\n"
+ "```\n"
+ "\n"
+ "\n"
+ )
+ # Explanation of precice-config.xml
+ readme_str += (
+ "- `precice-config.xml` is the global preCICE configuration file which defines the parameters "
+ "and communication of participants\n"
+ )
+ # Explanation of adapter-config.json
+ readme_str += (
+ "- `adapter-config.json` is a configuration file to couple the solvers with preCICE.\n"
+ )
+ # Explanation of run.sh
+ readme_str += (
+ "- `run.sh` is a script that is meant to execute a participant. Note, however, that since "
+ "different solvers are executed differently, this file is not implemented yet.\n"
+ )
+ # Explanation of clean.sh
+ readme_str += (
+            "- `clean.sh` cleans the root directory by moving any files that were not created by preCICE case-generate "
+            "into a backup folder.\n"
+ "Execution:\n"
+ "\n"
+ "```bash\n"
+ "./clean.sh [--force] [--dry-run]\n"
+ "```\n"
+ "\n"
+ "- `--force` Deletes the files and any backup folders\n"
+ "- `--dry-run` Does not delete any files, but prints files that would be deleted\n")
+
+ readme_str += topic_separator
+
+ readme_str += (
+ "## Prerequisites\n"
+ "\n"
+ "Before running the simulation, ensure you have the following installed:\n"
+ "\n"
+ "- The preCICE coupling library\n"
+ )
+ for participant in self.participant_solver_map:
+ solver: str = self.participant_solver_map[participant]
+ readme_str += (
+ f"- Solver `{solver}` and its dependencies\n"
+ )
+
+ readme_str += topic_separator
+
+ readme_str += (
+ "## Running the Simulation\n"
+ "\n"
+ "### Quick Start\n"
+ "\n"
+ "```bash\n"
+ "# Navigate to the `_generated` folder\n"
+ "cd _generated/\n"
+ "\n"
+ "# Implement the run script\n"
+ "\n"
+            "# Make the run script executable\n"
+ "chmod +x run.sh\n"
+ "\n"
+ "# Execute the simulation\n"
+ "./run.sh\n"
+ "```\n"
+ )
+
+ readme_str += topic_separator
+
+ readme_str += (
+ "For more information, see the [preCICE documentation](https://precice.org/docs.html) and "
+ "[precice-case-generate](https://github.com/precice/case-generate)."
+ )
+ return readme_str
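The folder-structure block that `_create_readme_str` emits can be previewed with a small stand-in for `participant_solver_map`; the participant and solver names here are invented:

```python
# Invented participant/solver names standing in for participant_solver_map
participant_solver_map = {"Fluid": "OpenFOAM", "Solid": "CalculiX"}

lines = [
    "_generated/",
    "    ├── README.md\t\t\t# This file",
    "    ├── clean.sh\t\t\t# Clean up script",
]
# One subdirectory per participant-solver pair, as in the README generator
for name, solver in participant_solver_map.items():
    lines.append(f"    ├── {name.lower()}-{solver.lower()}/")
    lines.append("    │    ├── adapter-config.json")
    lines.append("    │    └── run.sh\t\t\t# Not yet implemented")
lines.append("    └── precice-config.xml\t\t# Global precice-config.xml file")
print("\n".join(lines))
```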
diff --git a/precicecasegenerate/generation_utils/__init__.py b/precicecasegenerate/generation_utils/__init__.py
deleted file mode 100644
index eedca39..0000000
--- a/precicecasegenerate/generation_utils/__init__.py
+++ /dev/null
@@ -1,8 +0,0 @@
-from .structure_handler import StructureHandler
-from .logger import Logger
-from .adapter_config_generator import AdapterConfigGenerator
-from .format_precice_config import PrettyPrinter
-from .other_files_generator import OtherFilesGenerator
-from .config_generator import ConfigGenerator
-from .readme_generator import ReadmeGenerator
-from .file_generator import FileGenerator
diff --git a/precicecasegenerate/generation_utils/adapter_config_generator.py b/precicecasegenerate/generation_utils/adapter_config_generator.py
deleted file mode 100644
index e99fe94..0000000
--- a/precicecasegenerate/generation_utils/adapter_config_generator.py
+++ /dev/null
@@ -1,194 +0,0 @@
-from pathlib import Path
-from . import Logger
-from lxml import etree
-import json
-from ruamel.yaml import YAML
-from importlib.resources import files
-
-
-class AdapterConfigGenerator:
- def __init__(
- self,
- adapter_config_path: Path,
- precice_config_path: Path,
- topology_path: Path,
- target_participant: str,
- ) -> None:
- """
- Initializes the AdapterConfigGenerator with paths to the adapter config, precice config, and topology file.
-
- Args:
- adapter_config_path (Path): Path to the output adapter-config.json file.
- precice_config_path (Path): Path to the input precice-config.xml file.
- topology_path (Path): Path to the topology YAML file.
- target_participant (str): Name of the target participant.
- """
- self.adapter_config_path = adapter_config_path
- self.adapter_config_schema = json.loads(
- files("precicecasegenerate.templates")
- .joinpath("adapter-config-template.json")
- .read_text("utf-8")
- )
- self.logger = Logger()
- self.precice_config_path = precice_config_path
- self.topology_path = topology_path
- self.target_participant = target_participant
-
- def _get_generated_precice_config(self):
- """
- Parses the precice-config.xml file, removes namespaces, and stores the root element.
- """
- try:
- with open(
- self.precice_config_path, "r", encoding="utf-8"
- ) as precice_config_file:
- precice_config = precice_config_file.read()
- except FileNotFoundError:
- self.logger.error(
- f"PreCICE config file not found at {self.precice_config_path}"
- )
- raise
-
- # Parse with lxml and clean namespaces
- parser = etree.XMLParser(ns_clean=True, recover=True)
- try:
- doc = etree.fromstring(precice_config.encode("utf-8"), parser=parser)
- except etree.XMLSyntaxError as e:
- self.logger.error(f"Error parsing XML: {e}")
- raise
-
- # Strip namespace prefixes from tags
- for elem in doc.iter():
- if isinstance(elem.tag, str) and "}" in elem.tag:
- elem.tag = elem.tag.split("}", 1)[1]
-
- self.root = doc
- self.logger.info("Parsed precice-config.xml successfully.")
-
- def _load_topology(self):
- """
- Loads the topology YAML file and extracts patch information for the target participant.
-
- Returns:
- dict: Patch information for the target participant.
- """
- try:
- with open(self.topology_path, "r", encoding="utf-8") as topology_file:
- topology = YAML().load(topology_file)
-
- # Find the exchange for the target participant
- for exchange in topology.get("exchanges", []):
- if exchange.get("to") == self.target_participant:
- return {
- "from_participant": exchange.get("from"),
- "from_patch": exchange.get("from-patch"),
- "to_patch": exchange.get("to-patch"),
- }
-
- self.logger.warning(
- f"No exchange found for participant {self.target_participant}"
- )
- return None
-
- except FileNotFoundError:
- self.logger.error(f"Topology file not found at {self.topology_path}")
- return None
- except yaml.YAMLError as e:
- self.logger.error(f"Error parsing topology YAML: {e}")
- return None
-
- def _fill_out_adapter_schema(self):
- """
- Fills out the adapter configuration schema based on the precice-config.xml and topology data.
- """
- self._get_generated_precice_config()
-
- # Load topology information
- topology_info = self._load_topology()
-
- participant_elem = None
- for participant in self.root.findall(".//participant"):
- if participant.get("name") == self.target_participant:
- participant_elem = participant
- break
-
- if participant_elem is None:
- self.logger.error(
- f"Participant '{self.target_participant}' not found in precice-config.xml."
- )
- return
-
- # Attempt to find read-data and write-data elements
- read_data_elem = participant_elem.find("read-data")
- write_data_elem = participant_elem.find("write-data")
-
- # Log warnings if certain elements are missing
- if read_data_elem is None:
- self.logger.warning(
- f"Participant '{self.target_participant}' is missing a 'read-data' element."
- )
- if write_data_elem is None:
- self.logger.warning(
- f"Participant '{self.target_participant}' is missing a 'write-data' element."
- )
-
- # Update the adapter_config_schema dictionary according to the new template
- self.adapter_config_schema["participant_name"] = self.target_participant
-
- # Access the first interface in the interfaces list
- interface_dict = self.adapter_config_schema["interfaces"][0]
-
- # Initialize write_data_names and read_data_names lists
- interface_dict["write_data_names"] = []
- interface_dict["read_data_names"] = []
-
- # If read_data_elem exists, set mesh_name and read_data_names
- if read_data_elem is not None:
- interface_dict["mesh_name"] = read_data_elem.get("mesh")
- read_data_name = read_data_elem.get("name")
- if read_data_name:
- interface_dict["read_data_names"].append(read_data_name)
-
- # If write_data_elem exists, set write_data_names
- if write_data_elem is not None:
- write_data_name = write_data_elem.get("name")
- if write_data_name:
- interface_dict["write_data_names"].append(write_data_name)
-
- # Add patch information from topology if available
- if topology_info:
- # Use the to-patch value from topology
- interface_dict["patches"] = [topology_info.get("to_patch")]
-
- # Remove previously added keys
- interface_dict.pop("from_participant", None)
- interface_dict.pop("from_patch", None)
- interface_dict.pop(
- "patch_name", None
- ) # Remove patch_name if it was added previously
-
- # Remove keys if their lists are empty
- if not interface_dict["write_data_names"]:
- interface_dict.pop("write_data_names")
- if not interface_dict["read_data_names"]:
- interface_dict.pop("read_data_names")
-
- self.logger.info("Adapter configuration schema filled out successfully.")
-
- def write_to_file(self) -> None:
- """
- Writes the filled adapter configuration schema to the specified JSON file.
- """
- self._fill_out_adapter_schema()
-
- try:
- with open(
- self.adapter_config_path, "w", encoding="utf-8"
- ) as adapter_config_file:
- json.dump(self.adapter_config_schema, adapter_config_file, indent=4)
- self.logger.success(
- f"Adapter configuration written to {self.adapter_config_path}"
- )
- except IOError as e:
- self.logger.error(f"Failed to write adapter configuration to file: {e}")
- raise
diff --git a/precicecasegenerate/generation_utils/config_generator.py b/precicecasegenerate/generation_utils/config_generator.py
deleted file mode 100644
index 760550d..0000000
--- a/precicecasegenerate/generation_utils/config_generator.py
+++ /dev/null
@@ -1,71 +0,0 @@
-from ruamel.yaml import YAML
-
-
-class ConfigGenerator:
- @staticmethod
- def is_utf8_encoded(file_path):
- """Check if a file is UTF-8 encoded without BOM"""
- try:
- with open(file_path, "rb") as f:
- # Read the first few bytes to check for BOM
- bom = f.read(3)
- if bom == b"\xef\xbb\xbf":
- return False # File has a BOM, so it's not considered pure UTF-8
-
- # Try to decode the rest of the file
- f.seek(0) # Go back to the start of the file
- f.read().decode("utf-8")
- return True
- except UnicodeDecodeError:
- return False
-
- def generate_precice_config(self, file_generator):
- """Generates the precice-config.xml file based on the topology.yaml file."""
- # Check if the topology YAML file is UTF-8 encoded
- topology_file_path = file_generator.input_file
- logger = file_generator.logger
-
- if not self.is_utf8_encoded(topology_file_path):
- logger.error(f"Input YAML file {topology_file_path} is not UTF-8 encoded.")
- return None
-
- # Try to open the yaml file and get the configuration
- try:
- with open(topology_file_path, "r") as config_file:
- config = YAML().load(config_file)
- logger.info(f"Input YAML file: {topology_file_path}")
- except FileNotFoundError:
- logger.error(f"Input YAML file {topology_file_path} not found.")
- return None
- except Exception as e:
- logger.error(f"Error reading input YAML file: {str(e)}")
- return None
-
- # Build the ui
- logger.info("Building the user input info...")
- user_ui = file_generator.user_ui
- user_ui.init_from_yaml(config, file_generator.mylog)
-
- # Generate the precice-config.xml file
- logger.info("Generating preCICE config...")
- precice_config = file_generator.precice_config
- precice_config.create_config(user_ui)
-
- # Set the target of the file and write out to it
- structure = file_generator.structure
- target = str(structure.precice_config)
-
- try:
- logger.info(f"Writing preCICE config to {target}...")
- precice_config.write_precice_xml_config(
- target,
- file_generator.mylog,
- sync_mode=user_ui.sim_info.sync_mode,
- mode=user_ui.sim_info.mode,
- )
- except Exception as e:
- logger.error(f"Failed to write preCICE XML config: {str(e)}")
- return None
-
- logger.success(f"XML generation completed successfully: {target}")
- return target
diff --git a/precicecasegenerate/generation_utils/file_generator.py b/precicecasegenerate/generation_utils/file_generator.py
deleted file mode 100644
index 7716437..0000000
--- a/precicecasegenerate/generation_utils/file_generator.py
+++ /dev/null
@@ -1,133 +0,0 @@
-from pathlib import Path
-
-import json
-import jsonschema
-from ruamel.yaml import YAML
-
-from precicecasegenerate.controller_utils.myutils.UT_PCErrorLogging import (
- UT_PCErrorLogging,
-)
-from precicecasegenerate.controller_utils.precice_struct import PS_PreCICEConfig
-from precicecasegenerate.controller_utils.ui_struct.UI_UserInput import UI_UserInput
-from .config_generator import ConfigGenerator
-from .format_precice_config import PrettyPrinter
-from .logger import Logger
-from .other_files_generator import OtherFilesGenerator
-from .readme_generator import ReadmeGenerator
-from .structure_handler import StructureHandler
-
-from importlib.resources import files
-
-
-class FileGenerator:
- def __init__(self, input_file: Path, output_path: Path) -> None:
- """Class which takes care of generating the content of the necessary files
- :param input_file: Input yaml file that is needed for generation of the precice-config.xml file
- :param output_path: Path to the folder where the case will be generated"""
- self.input_file = input_file
- self.precice_config = PS_PreCICEConfig()
- self.mylog = UT_PCErrorLogging()
- self.user_ui = UI_UserInput()
- self.logger = Logger()
- self.structure = StructureHandler(output_path)
- self.config_generator = ConfigGenerator()
- self.readme_generator = ReadmeGenerator()
- self.other_files_generator = OtherFilesGenerator()
-
- if not self.input_file.exists():
- import errno
- import os
-
- raise FileNotFoundError(
- errno.ENOENT, os.strerror(errno.ENOENT), str(self.input_file)
- )
-
- def generate_level_0(self) -> None:
- """Fills out the files of level 0 (everything in the root folder)."""
- self.other_files_generator.generate_clean(clean_sh=self.structure.clean)
- self.config_generator.generate_precice_config(self)
- self.readme_generator.generate_readme(self)
-
- def _extract_participants(self) -> list[str]:
- """Extracts the participants from the topology.yaml file."""
- try:
- with open(self.input_file, "r") as config_file:
- config = YAML().load(config_file)
- self.logger.info(f"Input YAML file: {self.input_file}")
- except FileNotFoundError:
- self.logger.error(f"Input YAML file {self.input_file} not found.")
- return None
- except Exception as e:
- self.logger.error(f"Error reading input YAML file: {str(e)}")
- return None
-
- # Extract participant names from the new list format
- return [participant["name"] for participant in config.get("participants", [])]
-
- def generate_level_1(self) -> None:
- """Generates the files of level 1 (everything in the generated sub-folders)."""
-
- participants = self._extract_participants()
- for participant in participants:
- target_participant = self.structure.create_level_1_structure(
- participant, self.user_ui
- )
- adapter_config = target_participant[1]
- run_sh = target_participant[2]
- self.other_files_generator.generate_adapter_config(
- target_participant=participant,
- adapter_config=adapter_config,
- precice_config=self.structure.precice_config,
- topology_path=self.input_file,
- )
- self.other_files_generator.generate_run(run_sh)
-
- def format_precice_config(self) -> None:
- """Formats the generated preCICE configuration file."""
-
- precice_config_path = self.structure.precice_config
- # Create an instance of PrettyPrinter.
- printer = PrettyPrinter(indent=" ", max_width=120)
- # Specify the path to the XML file you want to prettify.
- try:
- printer.prettify_file(precice_config_path)
- self.logger.success(f"Successfully prettified preCICE configuration XML")
- except Exception as prettify_exception:
- self.logger.error(
- "An error occurred during XML prettification: "
- + str(prettify_exception)
- )
-
- def handle_output(self, args):
- """
- Handle output based on verbose mode and log state
- """
- if not args.verbose:
- if not self.logger.has_errors():
- self.logger.clear_messages()
- # No errors, show success message
- self.logger.success(
- "Everything worked. You can find the generated files at: "
- + str(self.structure.generated_root)
- )
- # Always show warnings if any exist
- if self.logger.has_warnings():
- for warning in self.logger.get_warnings():
- self.logger.warning(warning)
- self.logger.print_all()
-
- @staticmethod
- def validate_topology(args):
- """Validate the topology.yaml file against the JSON schema."""
- if args.validate_topology:
- schema = json.loads(
- files("precicecasegenerate.schemas")
- .joinpath("topology-schema.json")
- .read_text()
- )
- with open(args.input_file) as input_file:
- data = YAML().load(input_file)
- try:
- jsonschema.validate(instance=data, schema=schema)
- except jsonschema.exceptions.ValidationError as e:
- print(f"Validation of {args.input_file} failed: {e}")
diff --git a/precicecasegenerate/generation_utils/format_precice_config.py b/precicecasegenerate/generation_utils/format_precice_config.py
deleted file mode 100644
index 8e108c6..0000000
--- a/precicecasegenerate/generation_utils/format_precice_config.py
+++ /dev/null
@@ -1,467 +0,0 @@
-#!/usr/bin/env python3
-import io
-import shutil
-import sys
-
-from lxml import etree
-
-
-def is_empty_tag(element):
- """
- Check if an XML element is empty (has no children).
- """
- return not element.getchildren()
-
-
-def is_comment(element):
- """
- Check if the given element is an XML comment.
- """
- return isinstance(element, etree._Comment)
-
-
-def attrib_length(element):
- """
- Calculate the total length of the attributes in an element.
- For each attribute, count the key, quotes, equals sign, and value.
- """
- total = 0
- for k, v in element.items():
- # Format: key="value"
- total += len(k) + 2 + len(v) + 1
- # Add spaces between attributes (if more than one attribute exists)
- total += len(element.attrib) - 1
- return total
-
-
-def element_len(element):
- """
- Estimate the length of an element's start tag (including its attributes).
- This is used to decide whether to print attributes inline or vertically.
- """
- total = 2 # For the angle brackets "<" and ">"
- total += len(element.tag)
- if element.attrib:
- total += 1 + attrib_length(element)
- if is_empty_tag(element):
- total += 2  # For the space and slash in a self-closing tag " />"
- return total
-
-
-class PrettyPrinter:
- """
- Class to handle the prettification of XML content.
- This class not only provides methods for printing XML elements
- in a prettified format, but also methods to parse and reformat
- an XML file directly.
- """
-
- def __init__(
- self, stream=sys.stdout, indent=" ", max_width=100, max_group_level=1
- ):
- self.stream = stream # Output stream (can be a file, StringIO, etc.)
- self.indent = indent # String used for indentation (2 spaces)
- self.max_width = max_width # Maximum width for a single line
- self.max_group_level = (
- max_group_level # Maximum depth to group elements on one line
- )
- self.global_newline_between_groups = (
- True # Add newline between top-level groups
- )
-
- # Specific ordering for top-level elements
- self.top_level_order = [
- "data:vector",
- "mesh",
- "participant",
- "m2n:sockets",
- "coupling-scheme:",
- ]
-
- def print(self, text="", end="\n"):
- """
- Write text to the output stream with optional end character.
- """
- self.stream.write(text + end)
-
- def fmt_attr_h(self, element):
- """
- Format element attributes for inline (horizontal) display.
- """
- return " ".join(['{}="{}"'.format(k, v) for k, v in element.items()])
-
- def fmt_attr_v(self, element, level):
- """
- Format element attributes for vertical display, with indentation.
- """
- prefix = self.indent * (level + 1)
- return "\n".join(['{}{}="{}"'.format(prefix, k, v) for k, v in element.items()])
-
- def print_xml_declaration(self, root):
- """
- Print the XML declaration at the beginning of the file.
- """
- self.print(
- '<?xml version="{}" encoding="{}" ?>'.format(
- root.docinfo.xml_version, root.docinfo.encoding
- )
- )
-
- def print_root(self, root):
- """
- Print the entire XML document starting from the root element.
- """
- self.print_xml_declaration(root)
- self.print() # Add an extra newline after XML declaration
- self.print_element(root.getroot(), level=0)
-
- def print_tag_start(self, element, level):
- """
- Print the start tag of an element with precise formatting.
- """
- assert isinstance(element, etree._Element)
- # Always use self-closing tags for empty elements
- if not element.getchildren() and element.attrib:
- self.print(
- "{}<{} {}/>".format(
- self.indent * level, element.tag, self.fmt_attr_h(element)
- )
- )
- elif not element.getchildren():
- self.print("{}<{} />".format(self.indent * level, element.tag))
- else:
- # For non-empty elements, use traditional open/close tags
- if element.attrib:
- self.print(
- "{}<{} {}>".format(
- self.indent * level, element.tag, self.fmt_attr_h(element)
- )
- )
- else:
- self.print("{}<{}>".format(self.indent * level, element.tag))
-
- def print_tag_end(self, element, level):
- """
- Print the end tag of an element.
- """
- assert isinstance(element, etree._Element)
- # Only print end tag for non-empty elements
- if element.getchildren():
- self.print("{}</{}>".format(self.indent * level, element.tag))
-
- def print_tag_empty(self, element, level):
- """
- Print an empty element with precise self-closing tag formatting.
- """
- assert isinstance(element, etree._Element)
- if element.attrib:
- self.print(
- "{}<{} {}/>".format(
- self.indent * level, element.tag, self.fmt_attr_h(element)
- )
- )
- else:
- self.print("{}<{} />".format(self.indent * level, element.tag))
-
- def print_comment(self, element, level):
- """
- Print an XML comment.
- """
- assert isinstance(element, etree._Comment)
- self.print(self.indent * level + str(element))
-
- def print_element(self, element, level):
- """
- Recursively print an XML element and its children in prettified format.
- """
- # If the element is a comment, print it and return.
- if isinstance(element, etree._Comment):
- self.print_comment(element, level=level)
- return
-
- if is_empty_tag(element):
- self.print_tag_empty(element, level=level)
- else:
- self.print_tag_start(element, level=level)
- self.print_children(element, level=level + 1)
- self.print_tag_end(element, level=level)
-
- def print_children(self, element, level):
- if level > self.max_group_level:
- for child in element.getchildren():
- self.print_element(child, level=level)
- return
-
- # Custom sorting for top-level elements
- def custom_sort_key(elem):
- tag = str(elem.tag)
- # Predefined order for top-level elements with prefix matching
- order = {
- "data:": 1, # Matches data:vector, data:scalar, etc.
- "mesh": 2,
- "participant": 3,
- "m2n:": 4,
- "coupling-scheme:": 5,
- }
- # Find the first matching key
- for prefix, rank in order.items():
- if tag.startswith(prefix):
- return rank
- return 6 # Unknown elements appear last
-
- # Sort children based on the predefined order
- sorted_children = sorted(element.getchildren(), key=custom_sort_key)
-
- last = len(sorted_children)
- for i, group in enumerate(sorted_children, start=1):
- # Special handling for participants to reorder child elements
- if "participant" in str(group.tag):
- # Define order for participant child elements with more generalized matching
- participant_order = {
- "provide-mesh": 1,
- "receive-mesh": 2,
- "write-data": 3,
- "read-data": 4,
- "mapping:": 5, # Matches mapping:nearest-neighbor, mapping:rbf, etc.
- }
-
- # Sort participant's children based on the defined order
- sorted_participant_children = sorted(
- group.getchildren(),
- key=lambda child: next(
- (
- rank
- for prefix, rank in participant_order.items()
- if str(child.tag).startswith(prefix)
- ),
- 6, # Unknown elements appear last
- ),
- )
-
- # Separate different types of elements
- mesh_elements = []
- data_elements = []
- mapping_elements = []
-
- for child in sorted_participant_children:
- if str(child.tag) in ["provide-mesh", "receive-mesh"]:
- mesh_elements.append(child)
- elif str(child.tag) in ["write-data", "read-data"]:
- data_elements.append(child)
- elif str(child.tag).startswith("mapping:"):
- mapping_elements.append(child)
-
- # Construct participant tag with attributes
- participant_tag = "<{}".format(group.tag)
- for attr, value in group.items():
- participant_tag += ' {}="{}"'.format(attr, value)
- participant_tag += ">"
-
- # Print participant opening tag
- self.print(self.indent * level + participant_tag)
-
- # Print mesh elements
- for child in mesh_elements:
- self.print_element(child, level + 1)
-
- # Add newline between mesh and data
- if mesh_elements and data_elements:
- self.print()
-
- # Print data elements
- for child in data_elements:
- self.print_element(child, level + 1)
-
- # Add newline before mapping
- if data_elements and mapping_elements:
- self.print()
-
- # Print mapping elements with multi-line formatting
- for mapping_elem in mapping_elements:
- # Check if the mapping element has multiple attributes
- if len(mapping_elem.items()) > 2:
- self.print(
- "{}<{}".format(self.indent * (level + 1), mapping_elem.tag)
- )
- for k, v in mapping_elem.items():
- self.print(
- '{}{}="{}"'.format(self.indent * (level + 2), k, v)
- )
- self.print("{} />".format(self.indent * (level + 1)))
- else:
- # Single-line formatting for simple mappings
- self.print_element(mapping_elem, level + 1)
-
- # Close participant tag
- self.print("{}</{}>".format(self.indent * level, group.tag))
-
- # Add newline after participant if not the last element
- if i < last:
- self.print()
-
- continue
-
- # Special handling for coupling-scheme elements
- elif "coupling-scheme" in str(group.tag):
- # Sort children of coupling-scheme
- sorted_scheme_children = sorted(
- group.getchildren(),
- key=lambda child: 0
- if str(child.tag) == "relative-convergence-measure"
- else 1
- if str(child.tag) == "exchange"
- else 2,
- )
-
- # Separate different types of elements
- other_elements = []
- exchange_elements = []
- convergence_elements = []
- acceleration_elements = []
-
- for child in sorted_scheme_children:
- tag = str(child.tag)
- if tag == "exchange":
- exchange_elements.append(child)
- elif tag == "relative-convergence-measure":
- convergence_elements.append(child)
- elif tag.startswith("acceleration"):
- acceleration_elements.append(child)
- else:
- other_elements.append(child)
-
- # Print coupling-scheme opening tag
- self.print(self.indent * level + "<{}>".format(group.tag))
-
- # Print initial elements
- initial_elements = [
- elem
- for elem in other_elements
- if str(elem.tag)
- in ["participants", "participant", "max-time", "time-window-size"]
- ]
- for child in initial_elements:
- self.print_element(child, level + 1)
-
- # Print convergence measures first
- if convergence_elements:
- if initial_elements:
- self.print()
- for conv in convergence_elements:
- self.print_element(conv, level + 1)
-
- # Print exchanges
- if exchange_elements:
- if initial_elements or convergence_elements:
- self.print()
- for exchange in exchange_elements:
- self.print_element(exchange, level + 1)
-
- # Print max-iterations if present
- max_iterations = [
- elem for elem in other_elements if str(elem.tag) == "max-iterations"
- ]
- if max_iterations:
- if exchange_elements or convergence_elements or initial_elements:
- self.print()
- for child in max_iterations:
- self.print_element(child, level + 1)
-
- # Print acceleration elements
- if acceleration_elements:
- if (
- exchange_elements
- or convergence_elements
- or max_iterations
- or initial_elements
- ):
- self.print()
- for child in acceleration_elements:
- self.print_element(child, level + 1)
-
- # Close coupling-scheme tag
- self.print("{}</{}>".format(self.indent * level, group.tag))
-
- # Add newline after coupling-scheme if not the last element
- if i < last:
- self.print()
-
- continue
-
- # Print the element normally
- self.print_element(group, level=level)
-
- # Add an extra newline between top-level groups
- if i < last:
- self.print()
-
- @staticmethod
- def parse_xml(content):
- """
- Parse XML content into a lxml ElementTree, with recovery and whitespace cleanup.
-
- Parameters:
- content (bytes): The XML content in bytes.
-
- Returns:
- An lxml ElementTree object.
- """
- parser = etree.XMLParser(
- recover=True, remove_comments=False, remove_blank_text=True
- )
- return etree.fromstring(content, parser).getroottree()
-
- def prettify_file(self, file_path):
- """
- Prettify the XML file at the given path and overwrite the file with the prettified content.
-
- Parameters:
- file_path (str): Path to the XML file.
-
- Returns:
- bool: True if the file was processed (even if no changes were made), False if an error occurred.
- """
- try:
- # Open and read the file as bytes.
- with open(file_path, "rb") as xml_file:
- content = xml_file.read()
- except Exception as e:
- print(f'Unable to open file: "{file_path}"')
- print(e)
- return False
-
- try:
- # Parse the XML content using the static method.
- xml_tree = PrettyPrinter.parse_xml(content)
- except Exception as e:
- print(f'Error occurred while parsing file: "{file_path}"')
- print(e)
- return False
-
- # Create an in-memory text stream to hold the prettified XML.
- buffer = io.StringIO()
- # Use a temporary PrettyPrinter instance with the buffer as output.
- temp_printer = PrettyPrinter(
- stream=buffer,
- indent=self.indent,
- max_width=self.max_width,
- max_group_level=self.max_group_level,
- )
- temp_printer.print_root(xml_tree)
-
- # Get the prettified content from the buffer.
- new_content = buffer.getvalue()
- # Compare with the original content (decoded from bytes).
- if new_content != content.decode("utf-8"):
- try:
- # Overwrite the original file with the prettified content.
- with open(file_path, "w") as xml_file:
- buffer.seek(0)
- shutil.copyfileobj(buffer, xml_file)
- except Exception as e:
- print(f'Failed to write prettified content to file: "{file_path}"')
- print(e)
- return False
- else:
- print(f'No changes required for file: "{file_path}"')
- return True
diff --git a/precicecasegenerate/generation_utils/logger.py b/precicecasegenerate/generation_utils/logger.py
deleted file mode 100644
index 6079dd2..0000000
--- a/precicecasegenerate/generation_utils/logger.py
+++ /dev/null
@@ -1,71 +0,0 @@
-from pathlib import Path
-from termcolor import colored
-from datetime import datetime
-
-
-class Logger:
- def __init__(self) -> None:
- """Custom logger"""
- self.root_generated = Path(__file__).parent
- self._errors = []
- self._warnings = []
- self._messages = []
-
- def _log(self, msg: str, level: str, color: str, symbol: str) -> None:
- """
- Internal method to log a message with a specified level, color, and symbol.
- :param msg: The log message.
- :param level: The log level (e.g., INFO, SUCCESS, ERROR).
- :param color: The color for terminal output.
- :param symbol: Symbol to display alongside the log.
- """
- timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
- formatted_msg = f"{timestamp} {symbol} [{level}] {msg}"
- self._messages.append((formatted_msg, color))
-
- def print_all(self) -> None:
- """Prints all logged messages and clears the log state."""
- for message, color in self._messages:
- print(colored(message, color))
- self._messages.clear()
-
- def clear_messages(self) -> None:
- """Clears all logged messages."""
- self._messages.clear()
-
- def success(self, msg: str) -> None:
- """Logs a success message."""
- self._log(msg, "SUCCESS", "green", "✅")
-
- def info(self, msg: str) -> None:
- """Logs an informational message."""
- self._log(msg, "INFO", "blue", "ℹ️")
-
- def warning(self, msg: str) -> None:
- """Logs a warning message."""
- if msg not in self._warnings:
- self._warnings.append(msg)
- self._log(msg, "WARNING", "yellow", "⚠️")
-
- def error(self, msg: str) -> None:
- """Logs an error message."""
- if msg not in self._errors:
- self._errors.append(msg)
- self._log(msg, "ERROR", "red", "❌")
-
- def has_errors(self) -> bool:
- """Check if any errors have been logged."""
- return len(self._errors) > 0
-
- def has_warnings(self) -> bool:
- """Check if any warnings have been logged."""
- return len(self._warnings) > 0
-
- def get_warnings(self) -> list:
- """Retrieve logged warnings."""
- return self._warnings
-
- def clear_log_state(self) -> None:
- """Clear all logged errors and warnings."""
- self._errors.clear()
- self._warnings.clear()
diff --git a/precicecasegenerate/generation_utils/other_files_generator.py b/precicecasegenerate/generation_utils/other_files_generator.py
deleted file mode 100644
index 46b266d..0000000
--- a/precicecasegenerate/generation_utils/other_files_generator.py
+++ /dev/null
@@ -1,80 +0,0 @@
-from pathlib import Path
-from . import Logger
-from . import AdapterConfigGenerator
-from importlib.resources import files
-
-
-class OtherFilesGenerator:
- def __init__(self) -> None:
- """
- Initialize OtherFilesGenerator.
-
- Creates its own Logger instance, since no external logger is passed in.
- """
- self.logger = Logger()
-
- def _generate_static_files(self, target: Path, name: str) -> None:
- """Generate static files from templates
- :param target: target file path
- :param name: name of the function"""
- try:
- template = files("precicecasegenerate.templates").joinpath(
- f"template_{name}"
- )
- self.logger.info(f"Reading in the template file for {name}")
-
- # Check if the template file exists
- if not template.exists():
- raise FileNotFoundError(f"Template file not found: {template}")
-
- # Read the template content
- template_content = template.read_text(encoding="utf-8")
-
- self.logger.info(f"Writing the template to the target: {str(target)}")
-
- # Write content to the target file
- with open(target, "w", encoding="utf-8") as template:
- template.write(template_content)
-
- self.logger.success(
- f"Successfully written {name} content to: {str(target)}"
- )
-
- except FileNotFoundError as fileNotFoundException:
- self.logger.error(f"File not found: {fileNotFoundException}")
- except PermissionError as permissionErrorException:
- self.logger.error(f"Permission error: {permissionErrorException}")
- except Exception as generalException:
- self.logger.error(f"An unexpected error occurred: {generalException}")
-
- def generate_run(self, run_sh: Path) -> None:
- """Generates the run.sh file
- :param run_sh: Path to the run.sh file"""
- self._generate_static_files(target=run_sh, name="run.sh")
-
- def generate_clean(self, clean_sh: Path) -> None:
- """Generates the clean.sh file.
- :param clean_sh: Path to the clean.sh file"""
- self._generate_static_files(target=clean_sh, name="clean.sh")
-
- def generate_adapter_config(
- self,
- adapter_config: Path,
- precice_config: Path,
- topology_path: Path,
- target_participant: str,
- ) -> None:
- """Generates the adapter-config.json file.
-
- :param adapter_config: Path to the output adapter-config.json file
- :param precice_config: Path to the precice-config.xml file
- :param topology_path: Path to the topology YAML file
- :param target_participant: Name of the target participant
- """
- adapter_config_generator = AdapterConfigGenerator(
- adapter_config_path=adapter_config,
- precice_config_path=precice_config,
- topology_path=topology_path,
- target_participant=target_participant,
- )
- adapter_config_generator.write_to_file()
diff --git a/precicecasegenerate/generation_utils/readme_generator.py b/precicecasegenerate/generation_utils/readme_generator.py
deleted file mode 100644
index 9d98094..0000000
--- a/precicecasegenerate/generation_utils/readme_generator.py
+++ /dev/null
@@ -1,124 +0,0 @@
-from pathlib import Path
-
-from importlib.resources import files
-
-
-class ReadmeGenerator:
- SOLVER_DOCS = {
- # CFD Solvers
- "openfoam": "https://www.openfoam.com/documentation",
- "su2": "https://su2code.github.io/docs/home/",
- "foam-extend": "https://sourceforge.net/p/foam-extend/",
- # Structural Solvers
- "calculix": "https://www.calculix.de/",
- "elmer": "https://www.elmersolver.com/documentation/",
- "code_aster": "https://www.code-aster.org/V2/doc/default/en/index.php",
- # Other Solvers
- "fenics": "https://fenicsproject.org/docs/",
- "dealii": "https://dealii.org/current/doxygen/deal.II/index.html",
- # Fallback
- "default": "https://precice.org/adapter-list.html",
- }
-
- def generate_readme(self, file_generator):
- """Generates the README.md file with dynamic content based on simulation configuration"""
- logger = file_generator.logger
- user_ui = file_generator.user_ui
-
- # Read the template README with explicit UTF-8 encoding
- readme_content = (
- files("precicecasegenerate.templates")
- .joinpath("template_README.md")
- .read_text("utf-8")
- )
-
- # Extract participants and their solvers
- participants_list = []
- solvers_list = []
- solver_links = {}
- original_solver_names = {}
-
- # Ensure participants exist before processing
- if not hasattr(user_ui, "participants") or not user_ui.participants:
- logger.warning("No participants found. Using default placeholders.")
- participants_list = ["DefaultParticipant"]
- solvers_list = ["DefaultSolver"]
- original_solver_names = {"defaultparticipant": "DefaultSolver"}
- else:
- for participant_name, participant_info in user_ui.participants.items():
- # Preserve original solver name
- original_solver_name = getattr(
- participant_info, "solverName", "UnknownSolver"
- )
- solver_name = original_solver_name.lower()
-
- participants_list.append(participant_name)
- solvers_list.append(original_solver_name)
- original_solver_names[participant_name.lower()] = original_solver_name
-
- # Get solver documentation link, use default if not found
- solver_links[solver_name] = self.SOLVER_DOCS.get(
- solver_name, self.SOLVER_DOCS["default"]
- )
-
- # Determine coupling strategy
- coupling_strategy = (
- "Partitioned" if len(participants_list) > 1 else "Single Solver"
- )
-
- # Replace placeholders
- readme_content = readme_content.replace(
- "{PARTICIPANTS_LIST}", "\n ".join(f"- {p}" for p in participants_list)
- )
- readme_content = readme_content.replace(
- "{SOLVERS_LIST}", "\n ".join(f"- {s}" for s in solvers_list)
- )
- readme_content = readme_content.replace(
- "{COUPLING_STRATEGY}", coupling_strategy
- )
-
- # Explicitly replace solver-specific placeholders
- readme_content = readme_content.replace(
- "{SOLVER1_NAME}", solvers_list[0] if solvers_list else "Solver1"
- )
- readme_content = readme_content.replace(
- "{SOLVER2_NAME}", solvers_list[1] if len(solvers_list) > 1 else "Solver2"
- )
-
- # Generate adapter configuration paths for all participants
- adapter_config_paths = []
-
- for participant in participants_list:
- # Find the corresponding solver name for this participant
- solver_name = original_solver_names.get(participant.lower(), "solver")
- adapter_config_paths.append(
- f"- **{participant}**: `{participant}-{solver_name}/adapter-config.json`"
- )
-
- # Replace adapter configuration section
- readme_content = readme_content.replace(
- "- **Adapter Configuration**: `{PARTICIPANT_NAME}/adapter-config.json`",
- "**Adapter Configurations**:\n" + "\n".join(adapter_config_paths),
- )
-
- # Explicitly replace solver links
- readme_content = readme_content.replace(
- "[Link1]",
- f"[{solvers_list[0] if solvers_list else 'Solver1'}]({solver_links.get(solvers_list[0].lower(), '#') if solvers_list else '#'})",
- )
- readme_content = readme_content.replace(
- "[Link2]",
- f"[{solvers_list[1] if len(solvers_list) > 1 else 'Solver2'}]({solver_links.get(solvers_list[1].lower(), '#') if len(solvers_list) > 1 else '#'})",
- )
-
- # Write the README
- structure = file_generator.structure
-
- try:
- with open(structure.README, "w", encoding="utf-8") as readme_file:
- readme_file.write(readme_content)
- logger.success(f"README.md generated successfully at {structure.README}")
- return structure.README
- except Exception as e:
- logger.error(f"Failed to write README.md: {str(e)}")
- return None
diff --git a/precicecasegenerate/generation_utils/structure_handler.py b/precicecasegenerate/generation_utils/structure_handler.py
deleted file mode 100644
index b2c8923..0000000
--- a/precicecasegenerate/generation_utils/structure_handler.py
+++ /dev/null
@@ -1,110 +0,0 @@
-from pathlib import Path
-from precicecasegenerate.generation_utils.logger import Logger
-import shutil
-
-
-class StructureHandler:
- def __init__(self, output_path: Path, clean_generated: bool = True) -> None:
- """Creates the files and folders in a structure.
- :param clean_generated: If set to True, clean the _generated dir before the files are created.
- Can be useful if you added or adjusted files yourself, and you are not sure what you changed."""
- # Objects
- self.run = None
- self.generated_root = output_path
- self.logger = Logger()
-
- # Create level 0 structure (everything in the root folder)
- if clean_generated:
- self._cleaner()
- self._create_folder_structure()
- self._create_level_0_structure()
-
- def _create_folder_structure(self) -> None:
- """Creates the structure needed for generated files"""
- try:
- self.generated_root.mkdir(parents=True, exist_ok=True)
- self.logger.success(f"Created folder: {self.generated_root}")
- except Exception as create_folder_structure_exception:
- self.logger.error(
- f"Failed to create folder structure. Error: {create_folder_structure_exception}"
- )
-
- def _create_level_0_structure(self) -> None:
- """Creates the necessary files of level 0 (everything in the root folder)."""
-
- # Files that need to be created
- files = [
- self.generated_root / "clean.sh",
- self.generated_root / "README.md",
- self.generated_root / "precice-config.xml",
- ]
-
- self.clean, self.README, self.precice_config = files
-
- for file in files:
- try:
- file.touch(exist_ok=True)
- self.logger.success(f"Created file: {file}")
- except Exception as create_files_exception:
- self.logger.error(
- f"Failed to create file {file}. Error: {create_files_exception}"
- )
-
- def create_level_1_structure(self, participant: str, user_ui=None) -> list[Path]:
- """Creates the necessary files of level 1 (everything in the generated sub-folders).
- :param participant: The participant for which the files should be created.
- :param user_ui: Optional UI_UserInput instance to retrieve participant information
- :return: participant_folder, adapter_config, run"""
- try:
- # Validate that user_ui is provided
- if user_ui is None:
- raise ValueError("user_ui must be provided to create level 1 structure")
-
- # Get the solver name from the participants
- solver_name = user_ui.participants[participant].solver_name.lower()
- # folder name starts with lowercase
- participant = participant.lower()
-
- # Create the participant folder with name-solver format
- participant_folder = self.generated_root / f"{participant}-{solver_name}"
- participant_folder.mkdir(parents=True, exist_ok=True)
- self.logger.success(f"Created folder: {participant_folder}")
-
- # Create the adapter-config.json file
- adapter_config = participant_folder / "adapter-config.json"
- adapter_config.touch(exist_ok=True)
- self.logger.success(f"Created file: {adapter_config}")
-
- # Create the run.sh file
- self.run = participant_folder / "run.sh"
- self.run.touch(exist_ok=True)
- self.logger.success(f"Created file: {self.run}")
-
- return [participant_folder, adapter_config, self.run]
- except Exception as create_participant_folder_exception:
- # Define participant_folder before logging error
- participant_folder = self.generated_root / participant
- self.logger.error(
- f"Failed to create folder/file for participant: {participant_folder}. Error: {create_participant_folder_exception}"
- )
-
- def _cleaner(self) -> None:
- """
- Removes the entire `self.generated_root` directory and its contents.
- If `self.generated_root` exists, it deletes everything inside it.
- """
- if self.generated_root.exists():
- try:
- # Remove the directory and all its contents
- shutil.rmtree(self.generated_root)
- self.logger.success(
- f"Successfully removed directory and all contents: {self.generated_root}"
- )
- except Exception as cleaner_exception:
- self.logger.error(
- f"Failed to remove directory: {self.generated_root}. Error: {cleaner_exception}"
- )
- else:
- self.logger.info(
- f"Directory {self.generated_root} does not exist. Nothing to clean."
- )
diff --git a/precicecasegenerate/helper.py b/precicecasegenerate/helper.py
new file mode 100644
index 0000000..e184b40
--- /dev/null
+++ b/precicecasegenerate/helper.py
@@ -0,0 +1,159 @@
+import random
+from enum import Enum
+from pathlib import Path
+from precice_config_graph import nodes as n
+from precice_config_graph import enums as e
+
+"""
+Helper items and classes for the NodeCreator.
+"""
+# Indent for config
+INDENT: str = " " * 4
+
+# Link to the precice/case-generate repository
+case_generate_repository_url: str = "https://github.com/precice/case-generate"
+
+# Set defaults here to be able to change them easily
+DEFAULT_DATA_TYPE: e.DataType = e.DataType.VECTOR
+DEFAULT_PARTICIPANT_DIMENSIONALITY: int = 3
+DEFAULT_MAPPING_METHOD: e.MappingMethod = e.MappingMethod.NEAREST_NEIGHBOR
+DEFAULT_ACCELERATION_TYPE: e.AccelerationType = e.AccelerationType.IQN_ILS
+DEFAULT_M2N_TYPE: e.M2NType = e.M2NType.SOCKETS
+DEFAULT_EXPLICIT_COUPLING_TYPE: e.CouplingSchemeType = e.CouplingSchemeType.PARALLEL_EXPLICIT
+DEFAULT_IMPLICIT_COUPLING_TYPE: e.CouplingSchemeType = e.CouplingSchemeType.PARALLEL_IMPLICIT
+DEFAULT_CONVERGENCE_MEASURE_TYPE: e.ConvergenceMeasureType = e.ConvergenceMeasureType.RELATIVE
+DEFAULT_DATA_KIND: str = "intensive"
+DEFAULT_MAPPING_KIND: str = "read"
+
+EXTENSIVE_DATA: list[str] = [
+ "force",
+ "heat-transfer",
+ "heattransfer",
+]
+
+INTENSIVE_DATA: list[str] = [
+ "displacement",
+ "temperature",
+ "pressure",
+ "velocity",
+ "heat-flux",
+ "heatflux"
+]
+
+class DataKind(Enum):
+ EXTENSIVE = "extensive"
+ INTENSIVE = "intensive"
+    # DEFAULT_DATA_KIND is "intensive", so DEFAULT is an alias for INTENSIVE
+    DEFAULT = DEFAULT_DATA_KIND
+
+def get_data_label(data_name: str) -> DataKind:
+ """
+    Return the kind / label of data based on its name:
+    either "extensive" or "intensive", defaulting to "intensive".
+ :param data_name: The name of the data.
+ :return: Enum type of the data kind.
+ """
+ if _is_extensive(data_name):
+ return DataKind.EXTENSIVE
+ elif _is_intensive(data_name):
+ return DataKind.INTENSIVE
+ else:
+ return DataKind.DEFAULT
+
+def _is_extensive(data_name: str) -> bool:
+    """
+    Checks if the given data name is associated with extensive data.
+    :param data_name: The name of the data to check.
+    :return: True if the data is extensive, False otherwise.
+    """
+    return any(extensive_data in data_name.lower() for extensive_data in EXTENSIVE_DATA)
+
+def _is_intensive(data_name: str) -> bool:
+    """
+    Checks if the given data name is associated with intensive data.
+    :param data_name: The name of the data to check.
+    :return: True if the data is intensive, False otherwise.
+    """
+    return any(intensive_data in data_name.lower() for intensive_data in INTENSIVE_DATA)
+
+# To make duplicate data names unique
+DATA_UNIQUIFIERS: list[str] = [
+ "adventurous",
+ "alien",
+ "grand",
+ "humungous",
+ "informative",
+ "magnificent",
+ "mischievous",
+ "pretty",
+ "scary",
+ "suspicious",
+ "wonderful",
+]
+
+# A default data type if none is given
+DEFAULT_DATA_TYPES: dict[str, e.DataType] = {
+ "force": e.DataType.VECTOR,
+ "displacement": e.DataType.VECTOR,
+ "temperature": e.DataType.SCALAR,
+ "pressure": e.DataType.SCALAR,
+ "velocity": e.DataType.VECTOR,
+ "heat-flux": e.DataType.VECTOR,
+ "heatflux": e.DataType.VECTOR,
+ "heat-transfer": e.DataType.SCALAR,
+ "heattransfer": e.DataType.SCALAR,
+}
+
+
+def capitalize_name(name: str) -> str:
+ """
+    Capitalize the first letter of each hyphen-separated part of a string.
+ :param name: The string to capitalize.
+ :return: A capitalized string.
+ """
+ return "-".join(part[:1].upper() + part[1:] for part in name.split("-"))
+
+
+def get_uniquifier() -> str:
+ """
+ Return a random string from the DATA_UNIQUIFIERS list and remove it from the list.
+ :return: A string to be used as a unique identifier for data names.
+ """
+    # randrange raises a ValueError once the list is exhausted
+    unique_number = random.randrange(len(DATA_UNIQUIFIERS))
+    return DATA_UNIQUIFIERS.pop(unique_number)
+
+
+def get_participant_solver_directory(parent_directory: Path, participant_name: str, solver_name: str) -> Path:
+ """
+    Return the path of the directory for a participant of the simulation.
+ The adapter-config.json and run.sh files for this participant will be saved in this directory.
+ :param parent_directory: The parent directory of the participant's directory.
+ :param participant_name: The name of the participant.
+ :param solver_name: The name of the solver.
+    :return: A Path to the participant's directory.
+ """
+ participant_directory: Path = parent_directory / (participant_name.lower() + "-" + solver_name.lower())
+ return participant_directory
+
+
+class PatchState(Enum):
+ EXTENSIVE = "extensive"
+ INTENSIVE = "intensive"
+
+
+class PatchNode:
+ """
+ A class to represent a patch from a topology.yaml file.
+ """
+
+ def __init__(self, name: str, participant: n.ParticipantNode, mesh: n.MeshNode, label: PatchState):
+ """
+ Initialize a PatchNode.
+ :param name: The name of the patch.
+ :param participant: The participant that owns the patch.
+ :param mesh: The mesh the patch is on.
+        :param label: The extensive/intensive label of the patch.
+ """
+ self.name = name
+ self.participant = participant
+ self.mesh = mesh
+ self.label = label
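As a sketch of how the data-kind classification above behaves, the following standalone re-implementation mirrors the `EXTENSIVE_DATA` / `INTENSIVE_DATA` substring matching and `capitalize_name` (hypothetical inline copies, not imports from the packaged module):

```python
# Standalone sketch of helper.get_data_label / helper.capitalize_name
# (hypothetical re-implementation; mirrors the substring lists above).
EXTENSIVE = ["force", "heat-transfer", "heattransfer"]
INTENSIVE = ["displacement", "temperature", "pressure", "velocity", "heat-flux", "heatflux"]

def get_data_label(data_name: str) -> str:
    name = data_name.lower()
    if any(key in name for key in EXTENSIVE):
        return "extensive"
    if any(key in name for key in INTENSIVE):
        return "intensive"
    return "intensive"  # DEFAULT_DATA_KIND

def capitalize_name(name: str) -> str:
    # Capitalize each hyphen-separated part: "heat-flux" -> "Heat-Flux"
    return "-".join(part[:1].upper() + part[1:] for part in name.split("-"))

print(get_data_label("Contact-Force"))  # substring "force" -> "extensive"
print(capitalize_name("heat-flux"))     # "Heat-Flux"
```

Note that matching is by substring on the lowercased name, so "Contact-Force" classifies as extensive even though it is not literally "force".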
diff --git a/precicecasegenerate/input_handler/topology_reader.py b/precicecasegenerate/input_handler/topology_reader.py
new file mode 100644
index 0000000..d1ef78b
--- /dev/null
+++ b/precicecasegenerate/input_handler/topology_reader.py
@@ -0,0 +1,117 @@
+from ruamel.yaml import YAML
+import json
+import jsonschema
+import logging
+from pathlib import Path
+from importlib.resources import files
+from precicecasegenerate import helper
+
+logger = logging.getLogger(__name__)
+
+
+class TopologyReader:
+ """
+ Read a given topology.yaml file and save it as a dict.
+ """
+
+ def __init__(self, path_to_topology_file: Path):
+ # Convert to Path object just in case
+ self.topology_file_path = Path(path_to_topology_file)
+ self.topology = self._read_topology()
+
+ def _read_topology(self) -> dict:
+ """
+ Read the topology file and convert it to a dict.
+ :return: The topology dict.
+ """
+ logger.debug(f"Reading topology file at {self.topology_file_path.resolve()}")
+ yaml = YAML(typ="safe")
+ with open(self.topology_file_path, "r") as topology_file:
+ topology = yaml.load(topology_file)
+ return topology
+
+ def validate_topology(self) -> int:
+ """
+ Check if the topology adheres to the defined schema in schemas/topology-schema.json
+ :return: 0 if topology is valid, 1 otherwise
+ """
+ schema_path = files("precicecasegenerate.schemas") / "topology-schema.json"
+
+ schema = json.loads(schema_path.read_text(encoding="utf-8"))
+ try:
+ jsonschema.validate(self.topology, schema)
+ logger.debug("Topology file adheres to the schema.")
+ except jsonschema.ValidationError as e:
+ logger.critical(f"Topology file {self.topology_file_path.resolve()} does not adhere to the schema "
+ f"as specified in {schema_path}: {e.message}. Aborting program.")
+ return 1
+ return 0
+
+ def check_topology(self) -> int:
+ """
+ Check if the topology is valid.
+ This check includes:
+
+ - Checking if participant names are unique.
+ - Checking if exchanges only contain known "to" and "from" participants.
+ - Checking if exchanges are unique, when ignoring "to-patch", "from-patch" and "type" tags.
+ If any of these checks fail, an error message is printed and the program is aborted.
+ Additionally, it is checked if any of the data names contains one of the uniquifiers defined in
+ helper.DATA_UNIQUIFIERS. If so, this uniquifier is removed from the list of uniquifiers.
+ :return: 0 if topology is valid, 1 otherwise
+ """
+ participant_names: set[str] = set()
+ # Check if participant names are unique
+ for participant in self.topology["participants"]:
+ if participant["name"] in participant_names:
+ logger.critical(
+ f"Duplicate participant name {participant['name']} in topology file {self.topology_file_path}.")
+ return 1
+ participant_names.add(participant["name"])
+ logger.debug("Topology does not contain duplicate participant names.")
+
+ # Check if participants actually appear in exchanges
+ participants_in_exchanges: set[str] = set()
+
+ # Check if exchanges only contain known "to" and "from" participants
+ for exchange in self.topology["exchanges"]:
+ to_participant: str = exchange["to"]
+ from_participant: str = exchange["from"]
+
+ participants_in_exchanges.add(to_participant)
+ participants_in_exchanges.add(from_participant)
+
+ if to_participant not in participant_names:
+ logger.critical(f"Unknown participant {to_participant} in topology file "
+ f"{self.topology_file_path}.")
+ return 1
+ if from_participant not in participant_names:
+ logger.critical(f"Unknown participant {from_participant} in topology file "
+ f"{self.topology_file_path}.")
+ return 1
+ data: str = exchange["data"]
+
+ if from_participant == to_participant:
+ logger.error(f"Participant {from_participant} exchanges {data} with itself.")
+ return 1
+
+ # Remove uniquifiers from the list if they are present in a data name
+ for uniquifier in helper.DATA_UNIQUIFIERS.copy():
+ if uniquifier in data:
+ helper.DATA_UNIQUIFIERS.remove(uniquifier)
+ logger.debug(f"Removed uniquifier {uniquifier} from the list of uniquifiers.")
+
+        # Iterate over a copy, since removing items while iterating over the list would skip elements
+        for participant in self.topology["participants"].copy():
+            if participant["name"] not in participants_in_exchanges:
+                logger.warning(f"Removing participant {participant['name']} as it is defined but never used.")
+                self.topology["participants"].remove(participant)
+
+ logger.debug("Topology does not contain any errors.")
+ return 0
+
+ def get_topology(self) -> dict:
+ """
+ Return the topology dict.
+ :return: A dict representing the topology.
+ """
+ return self.topology
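The consistency checks in `check_topology()` can be sketched standalone on a plain dict (hypothetical re-implementation without YAML, logging, or jsonschema; same 0/1 return convention as above):

```python
# Minimal standalone sketch of the check_topology() consistency rules
# (hypothetical; operates on a plain dict instead of a parsed topology.yaml).
def check_topology(topology: dict) -> int:
    names = set()
    for participant in topology["participants"]:
        if participant["name"] in names:
            return 1  # duplicate participant name
        names.add(participant["name"])
    for exchange in topology["exchanges"]:
        if exchange["from"] not in names or exchange["to"] not in names:
            return 1  # exchange references an unknown participant
        if exchange["from"] == exchange["to"]:
            return 1  # participant exchanges data with itself
    return 0

topology = {
    "participants": [{"name": "fluid", "solver": "openfoam"},
                     {"name": "solid", "solver": "calculix"}],
    "exchanges": [{"from": "fluid", "to": "solid", "data": "force"}],
}
print(check_topology(topology))  # 0
```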
diff --git a/precicecasegenerate/logging_setup.py b/precicecasegenerate/logging_setup.py
new file mode 100644
index 0000000..f84dd3b
--- /dev/null
+++ b/precicecasegenerate/logging_setup.py
@@ -0,0 +1,94 @@
+import datetime
+import logging
+from logging import LogRecord, Logger
+from pathlib import Path
+from colored import Style, Fore
+
+from precicecasegenerate import cli_helper
+
+
+class ColorFormatter(logging.Formatter):
+ COLORS = {
+ logging.INFO: Fore.green, # green
+ logging.DEBUG: Fore.blue, # blue
+ logging.WARNING: Fore.yellow, # yellow
+ logging.ERROR: Fore.red, # red
+ logging.CRITICAL: f"{Style.bold}{Fore.red}" # bold red
+ }
+ RESET = Style.reset
+
+ def format(self, record: LogRecord) -> str:
+ """
+ Format a log record, such that the levelname is colored according to the level.
+ :param record: The log record to format.
+ :return: The formatted log record.
+ """
+ # Store original levelname
+ levelname = record.levelname
+ color = self.COLORS.get(record.levelno, "")
+ record.levelname = f"{color}{levelname}{self.RESET}"
+ formatted = super().format(record)
+ # Restore original levelname
+ record.levelname = levelname
+ return formatted
+
+
+def setup_logging(verbose: bool = False) -> Logger:
+ """
+ Create a logger object and set up logging to a file and the console.
+    By default, messages of level INFO and above are logged to the console, whereas everything is logged to the file.
+ :param verbose: Enables debug logging to the console.
+ :return: A logger object.
+ """
+ log_directory: Path = Path(cli_helper.LOG_DIR_NAME)
+ log_directory.mkdir(parents=True, exist_ok=True)
+
+ # Base level is debug (nothing is ignored)
+ logger = logging.getLogger()
+ logger.setLevel(logging.DEBUG)
+
+ # Delete old log files if there are more than 10 to avoid clutter
+ log_files = sorted(log_directory.glob("precice-case-generate-*.log"))
+ if len(log_files) >= 10:
+ for old_file in log_files[:-9]:
+ try:
+ # This deletes the file
+ old_file.unlink()
+ except OSError as e:
+ logger.error(f"Error deleting old log file {old_file}: {e}")
+
+ timestamp: str = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
+ log_file_path: Path = log_directory / f"precice-case-generate-{timestamp}.log"
+
+    # Prevent duplicate handlers in case this method is called multiple times
+ if logger.hasHandlers():
+ logger.handlers.clear()
+
+ # Write everything to a log file
+ file_handler = logging.FileHandler(log_file_path)
+ file_handler.setLevel(logging.DEBUG)
+    # Log INFO and above to the console by default; everything when verbose
+    console_handler = logging.StreamHandler()
+    console_handler.setLevel(logging.DEBUG if verbose else logging.INFO)
+
+ file_formatter = logging.Formatter(
+ "[%(asctime)s] [%(levelname)s] [%(name)s]: %(message)s",
+ datefmt="%Y-%m-%d %H:%M:%S"
+ )
+ file_handler.setFormatter(file_formatter)
+    # Use a colored formatter for the console
+ console_formatter = ColorFormatter(
+ "[%(asctime)s] [%(levelname)s]: %(message)s",
+ datefmt="%H:%M:%S"
+ )
+ console_handler.setFormatter(console_formatter)
+
+ logger.addHandler(file_handler)
+ logger.addHandler(console_handler)
+
+ logger.debug(f"Logs can be found in {log_directory.resolve()}")
+
+ return logger
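The level-coloring trick above (swap `record.levelname` for a colored variant, format, then restore it) can be sketched without the `colored` dependency, using raw ANSI escape codes (hypothetical standalone version, not the packaged `ColorFormatter`):

```python
import logging

# Standalone sketch of the ColorFormatter idea above, using raw ANSI codes
# instead of the `colored` package so it runs without extra dependencies.
class AnsiColorFormatter(logging.Formatter):
    COLORS = {logging.INFO: "\033[32m", logging.WARNING: "\033[33m",
              logging.ERROR: "\033[31m", logging.CRITICAL: "\033[1;31m"}
    RESET = "\033[0m"

    def format(self, record):
        original = record.levelname
        record.levelname = f"{self.COLORS.get(record.levelno, '')}{original}{self.RESET}"
        try:
            return super().format(record)
        finally:
            # Restore so other handlers (e.g. the file handler) see the plain levelname
            record.levelname = original

handler = logging.StreamHandler()
handler.setFormatter(AnsiColorFormatter("[%(levelname)s]: %(message)s"))
logger = logging.getLogger("sketch")
logger.addHandler(handler)
logger.warning("colored output")
```

Restoring the original level name matters because the same `LogRecord` instance is passed to every handler on the logger.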
diff --git a/precicecasegenerate/node_creator.py b/precicecasegenerate/node_creator.py
new file mode 100644
index 0000000..4be4fe0
--- /dev/null
+++ b/precicecasegenerate/node_creator.py
@@ -0,0 +1,1193 @@
+import logging
+
+from precice_config_graph import nodes as n
+from precice_config_graph import enums as e
+import precicecasegenerate.helper as helper
+
+logger = logging.getLogger(__name__)
+
+
+class NodeCreator:
+
+ def __init__(self, topology: dict):
+ self.topology = topology
+ self.participants: list[n.ParticipantNode] = []
+ self.data: list[n.DataNode] = []
+ self.meshes: list[n.MeshNode] = []
+ self.coupling_schemes: list[n.CouplingSchemeNode | n.MultiCouplingSchemeNode] = []
+ self.m2ns: list[n.M2NNode] = []
+ # Patches are important for adapter configs
+ self.patches: list[helper.PatchNode] = []
+
+ # Containers for temporary values
+ # Dimensionality is needed for meshes
+ self.participant_dimensionality: dict[n.ParticipantNode, int] = {}
+ self.exchange_types: dict[n.ExchangeNode, str] = {}
+
+ self._create_nodes()
+
+ def get_mesh_patch_map(self) -> dict[n.MeshNode, set[str]]:
+ """
+ Create a dict mapping mesh nodes to sets of patches that they use / contain.
+ :return: A dict mapping mesh nodes to sets of patches.
+ """
+ mesh_patch_map: dict[n.MeshNode, set[str]] = {}
+ for patch in self.patches:
+ mesh: n.MeshNode = patch.mesh
+            # Create a new entry if necessary
+ if mesh not in mesh_patch_map:
+ mesh_patch_map[mesh] = set()
+ mesh_patch_map[mesh].add(patch.name)
+
+ return mesh_patch_map
+
+ def get_participant_solver_map(self) -> dict[n.ParticipantNode, str]:
+ """
+ Create a dict mapping participant nodes to their solver.
+ :return: A dict mapping participant nodes to their solver.
+ """
+ participant_solver_map: dict[n.ParticipantNode, str] = {}
+ for participant_dict in self.topology["participants"]:
+ # Get the participant node corresponding to the participant mentioned in the topology
+ participant: n.ParticipantNode = next(p for p in self.participants if p.name == participant_dict["name"])
+ # Assign the solver to the participant node
+ participant_solver_map[participant] = participant_dict["solver"]
+ return participant_solver_map
+
+ def get_nodes(self) -> dict:
+ """
+ Return all nodes created from the topology.
+ The returned dictionary has entries for the five major preCICE configuration elements, namely:
+ Participants, Data, Meshes, CouplingSchemes and M2Ns.
+ :return: A dictionary mapping node-names to lists of node-objects.
+ """
+ return {"participants": self.participants, "data": self.data, "meshes": self.meshes,
+ "coupling-schemes": self.coupling_schemes, "m2n": self.m2ns}
+
+ def _create_nodes(self) -> None:
+ """
+ Create ConfigGraph nodes based on the topology.
+ """
+ # Topology will only have tags "participants" and "exchanges"
+
+ # Initialize participants from participants tag
+ participant_map: dict[str, n.ParticipantNode] = self._initialize_participants()
+ logger.debug(f"Created {len(set(participant_map.values()))} participant nodes.")
+
+ # Update patches
+ # IMPORTANT: This updates the topology dict.
+ # Anything using a "frozenset" of topology items needs to be done after this method!
+ # (such as "initialize_data")
+ participant_patch_label_map: dict[tuple[n.ParticipantNode, n.ParticipantNode], dict[str, set[str]]] = (
+ self._patch_preprocessing(participant_map))
+
+ # Update non-unique data names depending on from-/to-patches of the involved participants
+ # IMPORTANT: This updates the topology dict (see warning above)
+ self._data_preprocessing(participant_map)
+
+ # Initialize data from exchanges tag (defined implicitly)
+ # IMPORTANT: This uses the topology dict as keys, so it needs to be done after the patch preprocessing.
+ data_map: dict[frozenset, n.DataNode] = self._initialize_data(participant_map)
+ logger.debug(f"Created {len(set(data_map.values()))} data nodes.")
+
+ # Initialize meshes from the exchanges tag (defined implicitly)
+ mesh_map: dict[
+ tuple[n.ParticipantNode, n.ParticipantNode, str], n.MeshNode] = self._initialize_meshes_and_patches(
+ participant_patch_label_map)
+ logger.debug(f"Created {len(set(mesh_map.values()))} mesh nodes.")
+
+ # Initialize mappings from the exchanges tag (defined implicitly)
+ mapping_map: dict[tuple[n.MeshNode, n.MeshNode], n.MappingNode] = self._initialize_mappings(participant_map,
+ mesh_map, data_map)
+ logger.debug(f"Created {len(set(mapping_map.values()))} mapping nodes.")
+
+ # Initialize exchanges from the exchanges tag
+ potential_couplings: list[dict] = self._initialize_exchanges(participant_map, mesh_map, data_map, mapping_map)
+ logger.debug(f"Created {len(potential_couplings)} exchange nodes.")
+
+ # All potentially strong coupling-schemes
+ strong_couplings: list[dict] = [coupling for coupling in potential_couplings if coupling["type"] == "strong"]
+ logger.debug(f"Found {len(strong_couplings)} strong exchanges.")
+ # All potentially weak coupling-schemes
+ weak_couplings: list[dict] = [coupling for coupling in potential_couplings if coupling["type"] == "weak"]
+ logger.debug(f"Found {len(weak_couplings)} weak exchanges.")
+
+ # Maps participants to coupling schemes.
+ coupling_map: dict[frozenset[n.ParticipantNode], n.CouplingSchemeNode | n.MultiCouplingSchemeNode] = {}
+ # Handle strong couplings
+ if len(strong_couplings) > 0:
+ # This might manipulate weak_couplings, so this method needs to be called before _create_weak ...
+ coupling_map = self._create_strong_coupling_schemes(strong_couplings, weak_couplings)
+
+ # Handle weak couplings
+ if len(weak_couplings) > 0:
+ coupling_map = self._create_weak_coupling_schemes(weak_couplings, coupling_map)
+
+ # Create M2Ns
+ self._create_M2N()
+ logger.debug(f"Created {len(self.m2ns)} M2N nodes.")
+
+ def _create_M2N(self) -> None:
+ """
+ Create M2N nodes based on the coupling-schemes. Each pair of participants only needs one M2N.
+ Inside a multi-coupling-scheme, every participant needs an M2N to the control participant;
+ as well as to any participant they are exchanging data with.
+ """
+ # Map pairs of participants to M2N nodes to avoid duplicates
+ m2n_map: dict[frozenset[n.ParticipantNode], n.M2NNode] = {}
+ for coupling_scheme in self.coupling_schemes:
+ # Treat multi-coupling-schemes separately: More than one M2N is needed here
+ if isinstance(coupling_scheme, n.MultiCouplingSchemeNode):
+ # Create an M2N for every exchange (once per pair of participants)
+ for exchange in coupling_scheme.exchanges:
+ if frozenset((exchange.from_participant, exchange.to_participant)) not in m2n_map:
+ m2n: n.M2NNode = n.M2NNode(type=helper.DEFAULT_M2N_TYPE, acceptor=exchange.from_participant,
+ connector=exchange.to_participant)
+ m2n_map[frozenset((exchange.from_participant, exchange.to_participant))] = m2n
+ self.m2ns.append(m2n)
+ logger.debug(f"Created M2N from {exchange.from_participant.name} to "
+ f"{exchange.to_participant.name}.")
+
+ control_participant: n.ParticipantNode = coupling_scheme.control_participant
+ # Create an M2N from the control participant to every other participant
+ for participant in coupling_scheme.participants:
+ if participant != control_participant:
+ if frozenset((control_participant, participant)) not in m2n_map:
+ m2n: n.M2NNode = n.M2NNode(type=helper.DEFAULT_M2N_TYPE, acceptor=control_participant,
+ connector=participant)
+ m2n_map[frozenset((control_participant, participant))] = m2n
+ self.m2ns.append(m2n)
+ logger.debug(f"Created M2N from control-participant {control_participant.name} "
+ f"to {participant.name}.")
+
+ # Only one M2N is needed for a regular coupling-scheme (since there is only one pair of participants involved)
+ elif isinstance(coupling_scheme, n.CouplingSchemeNode):
+ first_participant: n.ParticipantNode = coupling_scheme.first_participant
+ second_participant: n.ParticipantNode = coupling_scheme.second_participant
+ if frozenset((first_participant, second_participant)) not in m2n_map:
+ m2n: n.M2NNode = n.M2NNode(
+ type=helper.DEFAULT_M2N_TYPE, acceptor=first_participant, connector=second_participant)
+ m2n_map[frozenset((first_participant, second_participant))] = m2n
+ self.m2ns.append(m2n)
+ logger.debug(f"Created M2N from {first_participant.name} to "
+ f"{second_participant.name}.")
+
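The deduplication pattern used throughout `_create_M2N()` is worth isolating: a `frozenset` of the two endpoints is direction-independent, so `(a, b)` and `(b, a)` map to the same key and only one connection is created per pair. A hypothetical standalone sketch with plain strings in place of participant nodes:

```python
# Sketch of the pair-deduplication pattern used for M2N creation above:
# frozenset keys make (acceptor, connector) order irrelevant.
pairs = [("fluid", "solid"), ("solid", "fluid"), ("fluid", "heat")]
m2n_map = {}
for acceptor, connector in pairs:
    key = frozenset((acceptor, connector))
    if key not in m2n_map:
        m2n_map[key] = (acceptor, connector)
print(len(m2n_map))  # 2 -> one M2N per participant pair
```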
+ def _create_strong_coupling_schemes(self, strong_couplings: list[dict], weak_couplings: list[dict]) -> (
+ dict[frozenset[n.ParticipantNode], n.CouplingSchemeNode]):
+ """
+ Create coupling-schemes for strong interactions.
+ First, bidirectional strong couplings are determined. If there is more than one bidirectional strong coupling,
+ a multi-coupling-scheme is created. Otherwise, if there is exactly one bidirectional strong coupling, an implicit
+ coupling-scheme is created.
+ Otherwise, no implicit coupling-scheme is created and all "strong" couplings are added to the "weak" couplings
+ list to be handled by the ``_create_weak_coupling_schemes()`` method.
+ Next, all potential couplings (both strong and weak) are inspected for couplings with participants involved in
+ bidirectional strong couplings. Such participants are added to the implicit coupling-scheme.
+ Finally, acceleration and convergence measures are added to every exchange of the implicit coupling-scheme.
+ A dict mapping tuples of participants to coupling-schemes is returned.
+        :param strong_couplings: A list of dicts with potential strong coupling schemes (exchanges of type "strong").
+        :param weak_couplings: A list of dicts with potential weak coupling schemes (exchanges of type "weak").
+ :return: A dict[frozenset[ParticipantNode], CouplingSchemeNode]
+ """
+ coupling_map: dict[frozenset[n.ParticipantNode], n.CouplingSchemeNode | n.MultiCouplingSchemeNode] = {}
+
+ # All pairs of participants involved in bidirectional strong couplings
+ bidirectional_strong_coupling_participant_pairs: set[frozenset[n.ParticipantNode]] = set()
+ bidirectional_strong_couplings: list[dict] = []
+
+        implicit_coupling_scheme: n.MultiCouplingSchemeNode | n.CouplingSchemeNode | None = None
+
+ # Iterate over all potential strong couplings
+ for coupling in strong_couplings:
+ # Check all other potential strong couplings to determine whether there are bidirectional strong couplings
+ for other_coupling in strong_couplings:
+ if coupling != other_coupling:
+ if coupling["from"] == other_coupling["to"] and coupling["to"] == other_coupling["from"]:
+ bidirectional_strong_coupling_participant_pairs.add(
+ frozenset((coupling["from"], coupling["to"])))
+ logger.debug(f"Found bidirectional strong coupling between {coupling['from'].name} "
+ f"and {coupling['to'].name}.")
+ logger.debug(f"There are {len(bidirectional_strong_coupling_participant_pairs)} participant pairs involved in "
+ f"bidirectional strong couplings.")
+
+ # Check each potential strong coupling whether its participants are involved in bidirectional strong couplings
+ # If so, add it to bidirectional_strong_couplings
+ for coupling in strong_couplings:
+ if frozenset((coupling["from"], coupling["to"])) in bidirectional_strong_coupling_participant_pairs:
+ bidirectional_strong_couplings.append(coupling)
+
+ unidirectional_strong_couplings: list[dict] = [coupling for coupling in strong_couplings if
+ coupling not in bidirectional_strong_couplings]
+ logger.debug(f"There are {len(unidirectional_strong_couplings)} unidirectional strong exchanges and "
+ f"{len(bidirectional_strong_couplings)} bidirectional strong exchanges.")
+
+ # We need a multi-coupling scheme if there is more than one bidirectional strong coupling
+ if len(bidirectional_strong_coupling_participant_pairs) > 1:
+ # Get all participants involved in bidirectional couplings (set to avoid duplicates)
+ participants: set[n.ParticipantNode] = {participant for pair in
+ bidirectional_strong_coupling_participant_pairs
+ for participant in pair}
+ participants: list[n.ParticipantNode] = list(participants)
+
+ control_participant: n.ParticipantNode = self._determine_control_participant(participants,
+ bidirectional_strong_couplings)
+
+ implicit_coupling_scheme: n.MultiCouplingSchemeNode = n.MultiCouplingSchemeNode(
+ control_participant=control_participant,
+ participants=participants)
+ logger.debug(f"Created multi-coupling-scheme with control participant {control_participant.name} "
+ f"and participants: {', '.join(p.name for p in participants)}.")
+
+ # Add all combinations of participants to the coupling-map
+ for participant in participants:
+ for other_participant in participants:
+ if participant != other_participant:
+ coupling_map[frozenset((participant, other_participant))] = implicit_coupling_scheme
+
+ # Only one bidirectional strong coupling means implicit coupling-scheme
+ elif len(bidirectional_strong_coupling_participant_pairs) == 1:
+            # Get both participants (the single pair is a frozenset of exactly two nodes)
+            first, second = tuple(next(iter(bidirectional_strong_coupling_participant_pairs)))
+ implicit_coupling_scheme: n.CouplingSchemeNode = n.CouplingSchemeNode(first_participant=first,
+ second_participant=second,
+ type=helper.DEFAULT_IMPLICIT_COUPLING_TYPE)
+ coupling_map[frozenset((first, second))] = implicit_coupling_scheme
+ participants: list[n.ParticipantNode] = [first, second]
+ logger.debug(f"Created implicit coupling-scheme between {first.name} and {second.name}.")
+ # No bidirectional strong coupling
+ else:
+ # No implicit coupling-scheme is required.
+ # Add all strong ones to the weak couplings list and let the other method handle them
+ logger.debug("No bidirectional strong couplings found. Adding all strong couplings to weak couplings list.")
+ weak_couplings += strong_couplings
+
+ # If an implicit coupling-scheme was created, exchanges, acceleration and convergence measures are added to it
+ if implicit_coupling_scheme is not None:
+ self.coupling_schemes.append(implicit_coupling_scheme)
+
+ # Add bidirectional strong exchanges to the multi-coupling scheme
+ for coupling in bidirectional_strong_couplings:
+ coupling["exchange"].coupling_scheme = implicit_coupling_scheme
+ implicit_coupling_scheme.exchanges.append(coupling["exchange"])
+
+ # Check all other unidirectional strong couplings
+ # Iterate over a copy to be able to remove elements from the list
+ for coupling in unidirectional_strong_couplings.copy():
+ # Both participants are already involved in the multi-coupling scheme
+ # This means we add their exchange to the multi-coupling scheme
+ if coupling["from"] in participants and coupling["to"] in participants:
+ coupling["exchange"].coupling_scheme = implicit_coupling_scheme
+ implicit_coupling_scheme.exchanges.append(coupling["exchange"])
+ # Remove this coupling from the list of unidirectional couplings, as it has already been added to the multi-coupling scheme
+ unidirectional_strong_couplings.remove(coupling)
+ logger.debug(f"Found unidirectional strong exchange between {coupling['from'].name} "
+ f"and {coupling['to'].name} and added it to the implicit coupling-scheme.")
+
+ # Check all weak couplings
+ for coupling in weak_couplings.copy():
+ # Both participants are already involved in the multi-coupling scheme
+ # This means we add their exchange to the multi-coupling scheme
+ if coupling["from"] in participants and coupling["to"] in participants:
+ coupling["exchange"].coupling_scheme = implicit_coupling_scheme
+ implicit_coupling_scheme.exchanges.append(coupling["exchange"])
+ # Remove this coupling from the list of couplings, as it has already been added to the multi-coupling scheme
+ weak_couplings.remove(coupling)
+ logger.debug(f"Found weak exchange between {coupling['from'].name} and {coupling['to'].name} "
+ f"and added it to the implicit coupling-scheme.")
+
+ # Add all remaining strong couplings to the weak couplings list
+ weak_couplings += unidirectional_strong_couplings
+
+ # Add acceleration and convergence measure for every exchange of the coupling-scheme
+ acceleration: n.AccelerationNode = n.AccelerationNode(coupling_scheme=implicit_coupling_scheme,
+ type=helper.DEFAULT_ACCELERATION_TYPE)
+ implicit_coupling_scheme.acceleration = acceleration
+ # Every exchanged data needs to be accelerated
+ for exchange in implicit_coupling_scheme.exchanges:
+ acceleration_data: n.AccelerationDataNode = n.AccelerationDataNode(acceleration=acceleration,
+ data=exchange.data,
+ mesh=exchange.mesh)
+ acceleration.data.append(acceleration_data)
+ convergence_measure: n.ConvergenceMeasureNode = n.ConvergenceMeasureNode(
+ coupling_scheme=implicit_coupling_scheme,
+ type=helper.DEFAULT_CONVERGENCE_MEASURE_TYPE,
+ data=exchange.data,
+ mesh=exchange.mesh)
+ implicit_coupling_scheme.convergence_measures.append(convergence_measure)
+ logger.debug(f"Added acceleration and convergence-measure for data {exchange.data.name} "
+ f"on mesh {exchange.mesh.name}.")
+
+ return coupling_map
+
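The branching logic above hinges on counting bidirectional strong pairs: a directed "strong" exchange is bidirectional when its reverse also exists. A hypothetical standalone sketch with `(from, to)` string tuples in place of participant nodes:

```python
# Sketch of the bidirectional-pair detection driving the coupling-scheme
# decision above (>1 pair -> multi-coupling, ==1 -> implicit, 0 -> explicit).
def bidirectional_pairs(strong):
    directed = {(c["from"], c["to"]) for c in strong}
    return {frozenset(pair) for pair in directed if (pair[1], pair[0]) in directed}

strong = [{"from": "A", "to": "B"}, {"from": "B", "to": "A"}, {"from": "A", "to": "C"}]
pairs = bidirectional_pairs(strong)
print(len(pairs))  # 1 -> exactly one implicit coupling-scheme
```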
+ def _create_weak_coupling_schemes(self, weak_couplings: list[dict],
+ coupling_map: dict[frozenset[n.ParticipantNode], n.CouplingSchemeNode]) -> (
+ dict[frozenset[n.ParticipantNode], n.CouplingSchemeNode]):
+ """
+ Create coupling-schemes for weak interactions.
+ :param weak_couplings: A list of dicts with potential weak coupling schemes.
+ :param coupling_map: A map of already existing coupling-schemes.
+ :return: A dict of all coupling-schemes, including the ones created in this method.
+ """
+ # All strong coupling-schemes have already been handled completely
+ for weak_coupling in weak_couplings:
+ from_participant: n.ParticipantNode = weak_coupling["from"]
+ to_participant: n.ParticipantNode = weak_coupling["to"]
+ # Check if these participants already have a coupling-scheme
+ if frozenset((from_participant, to_participant)) not in coupling_map:
+ # If so, create a new one
+ coupling_scheme: n.CouplingSchemeNode = n.CouplingSchemeNode(type=helper.DEFAULT_EXPLICIT_COUPLING_TYPE,
+ first_participant=from_participant,
+ second_participant=to_participant)
+ coupling_map[frozenset((from_participant, to_participant))] = coupling_scheme
+ self.coupling_schemes.append(coupling_scheme)
+ logger.debug(f"Created coupling-scheme between {from_participant.name} and {to_participant.name}.")
+
+ else:
+ # Otherwise, use the existing one
+ coupling_scheme = coupling_map[frozenset((from_participant, to_participant))]
+ logger.debug(
+ f"Found existing coupling-scheme between {from_participant.name} and {to_participant.name}.")
+ # Add the exchange to the coupling-scheme
+ coupling_scheme.exchanges.append(weak_coupling["exchange"])
+ weak_coupling["exchange"].coupling_scheme = coupling_scheme
+ logger.debug(f"Added exchange of data {weak_coupling['exchange'].data.name} on mesh "
+ f"{weak_coupling['exchange'].mesh.name} to the coupling-scheme.")
+
+ return coupling_map
+
+ def _initialize_exchanges(self, participant_map: dict[str, n.ParticipantNode],
+ mesh_map: dict[tuple[n.ParticipantNode, n.ParticipantNode, str], n.MeshNode],
+ data_map: dict[frozenset, n.DataNode],
+ mapping_map: dict[tuple[n.MeshNode, n.MeshNode], n.MappingNode]) -> list[dict]:
+ """
+ Initialize exchanges based on the exchanges-tag of the topology.
+ Each exchange corresponds to one exchange node.
+ This also creates "potential-couplings", as not every exchange needs a separate coupling-scheme.
+ A potential coupling stores information about the exchange, from- and to-participant, and exchange-type.
+ :param participant_map: A dict mapping participant names to participant nodes.
+ :param mesh_map: A dict mapping participant pairs and "extensive/intensive" to mesh nodes.
+        :param data_map: A dict mapping data names to data nodes.
+        :param mapping_map: A dict mapping (from-mesh, to-mesh) pairs to mapping nodes.
+        :return: A list of potential couplings.
+ """
+ potential_couplings: list[dict] = []
+        # An exchange is a dict "from, to, data, type, optional[data-type], from-patch, to-patch"
+ for exchange in self.topology["exchanges"]:
+ from_participant: n.ParticipantNode = participant_map[exchange["from"]]
+ to_participant: n.ParticipantNode = participant_map[exchange["to"]]
+ data: n.DataNode = data_map[frozenset(exchange.items())]
+
+ data_label: str = helper.get_data_label(data.name.lower()).value
+
+ from_mesh: n.MeshNode = mesh_map[(from_participant, to_participant, data_label)]
+ to_mesh: n.MeshNode = mesh_map[(to_participant, from_participant, data_label)]
+
+ # This will prevent exchanges between meshes of different dimensions from being created.
+ # If this ever becomes allowed, this block can be removed.
+ if from_mesh.dimensions != to_mesh.dimensions:
+ max_dim: int = max(from_mesh.dimensions, to_mesh.dimensions)
+ min_dim_mesh: n.MeshNode = from_mesh if from_mesh.dimensions != max_dim else to_mesh
+ min_dim_mesh.dimensions = max_dim
+ logger.warning(f"Mesh {from_mesh.name} and {to_mesh.name} have different dimensions. "
+ f"Mapping to meshes of different dimensions is not allowed. "
+ f"Setting {min_dim_mesh.name}'s dimensions to {max_dim} dimensions.")
+
+ # Check type of mapping to determine which mesh is exchanged
+ if mapping_map[(from_mesh, to_mesh)].direction == e.Direction.WRITE:
+ # In a write-mapping, the exchanged mesh is the "to"-mesh
+ exchange_node: n.ExchangeNode = n.ExchangeNode(coupling_scheme=None, data=data, mesh=to_mesh,
+ from_participant=from_participant,
+ to_participant=to_participant)
+ logger.debug(f"Created exchange from {from_participant.name} to {to_participant.name} "
+ f"for data {data.name} on mesh {to_mesh.name}.")
+ else:
+ # In a read-mapping, the exchanged mesh is the "from"-mesh
+ exchange_node: n.ExchangeNode = n.ExchangeNode(coupling_scheme=None, data=data, mesh=from_mesh,
+ from_participant=from_participant,
+ to_participant=to_participant)
+ logger.debug(f"Created exchange from {from_participant.name} to {to_participant.name} "
+ f"for data {data.name} on mesh {from_mesh.name}.")
+ # Either strong or weak
+ exchange_type: str = exchange["type"]
+ potential_couplings.append(
+ {"exchange": exchange_node, "from": from_participant, "to": to_participant, "type": exchange_type})
+
+ return potential_couplings
+
+ def _initialize_mappings(self, participant_map: dict[str, n.ParticipantNode],
+ mesh_map: dict[tuple[n.ParticipantNode, n.ParticipantNode, str], n.MeshNode],
+ data_map: dict[frozenset, n.DataNode]) -> dict[
+ tuple[n.MeshNode, n.MeshNode], n.MappingNode]:
+ """
+ Initialize all mappings. A mapping is needed between two participants if they exchange data over their meshes.
+ For each pair of sender-mesh, receiver-mesh, a separate mapping is needed.
+        The kind of data exchanged (extensive or intensive) determines the type of mapping (write-conservative or read-consistent).
+ :param participant_map: A dict mapping participant names to participant nodes.
+ :param mesh_map: A dict mapping pairs of participants and "extensive/intensive" to mesh nodes.
+ :param data_map: A dict mapping data names to data nodes.
+ :return: A dict mapping (from-mesh, to-mesh) to mapping nodes.
+ """
+ mapping_map: dict[tuple[n.MeshNode, n.MeshNode], n.MappingNode] = {}
+ # Check for each exchange whether it already has a mapping and if not, create one
+ for exchange in self.topology["exchanges"]:
+ from_participant: n.ParticipantNode = participant_map[exchange["from"]]
+ to_participant: n.ParticipantNode = participant_map[exchange["to"]]
+ data: n.DataNode = data_map[frozenset(exchange.items())]
+
+ data_label: helper.DataKind = helper.get_data_label(data.name.lower())
+ if data_label == helper.DataKind.DEFAULT:
+ logger.info(f"Data \"{data.name}\" is neither extensive nor intensive. Choosing default "
+ f"{helper.DEFAULT_DATA_KIND} with corresponding {helper.DEFAULT_MAPPING_KIND}-mapping.")
+
+ from_mesh: n.MeshNode = mesh_map[(from_participant, to_participant, data_label.value)]
+ to_mesh: n.MeshNode = mesh_map[(to_participant, from_participant, data_label.value)]
+
+ if data not in from_mesh.use_data:
+ logger.debug(f"Adding use-data {data.name} to mesh {from_mesh.name}.")
+ from_mesh.use_data.append(data)
+ if data not in to_mesh.use_data:
+ logger.debug(f"Adding use-data {data.name} to mesh {to_mesh.name}.")
+ to_mesh.use_data.append(data)
+
+ # Extensive data needs a conservative mapping,
+ # so create a write-conservative mapping to allow for parallel participants
+ if data_label == helper.DataKind.EXTENSIVE:
+ logger.debug(f"Data {data.name} is extensive. Creating write-conservative mapping.")
+ # If no mapping between from-mesh and to-mesh exists, create one
+ if (from_mesh, to_mesh) not in mapping_map:
+ logger.debug(
+ f"No mapping between {from_mesh.name} and {to_mesh.name} exists. Creating new mapping.")
+ self._create_write_mapping(from_participant, to_participant, from_mesh, to_mesh, mapping_map)
+
+ # Intensive data needs a consistent mapping,
+ # so create a read-consistent mapping to allow for parallel participants
+ elif data_label == helper.DataKind.INTENSIVE:
+ logger.debug(f"Data {data.name} is intensive. Creating read-consistent mapping.")
+ if (from_mesh, to_mesh) not in mapping_map:
+ logger.debug(
+ f"No mapping between {from_mesh.name} and {to_mesh.name} exists. Creating new mapping.")
+ self._create_read_mapping(from_participant, to_participant, from_mesh, to_mesh, mapping_map)
+ else:
+ logger.debug(f"Data {data.name} is {helper.DEFAULT_DATA_KIND}. Creating read-consistent mapping.")
+ if (from_mesh, to_mesh) not in mapping_map:
+ logger.debug(
+ f"No mapping between {from_mesh.name} and {to_mesh.name} exists. Creating new mapping.")
+ self._create_read_mapping(from_participant, to_participant, from_mesh, to_mesh, mapping_map)
+
+ # If a mapping already exists, then the participants already receive the corresponding meshes.
+ # Regardless of whether a mapping already exists, write- and read-data tags need to be added.
+ write_data: n.WriteDataNode = n.WriteDataNode(participant=from_participant, data=data, mesh=from_mesh)
+ if not self._contains_write_data(write_data, from_participant.write_data):
+ logger.debug(f"Adding write-data {data.name} to participant {from_participant.name}.")
+ from_participant.write_data.append(write_data)
+
+ read_data: n.ReadDataNode = n.ReadDataNode(participant=to_participant, data=data, mesh=to_mesh)
+ if not self._contains_read_data(read_data, to_participant.read_data):
+ logger.debug(f"Adding read-data {data.name} to participant {to_participant.name}.")
+ to_participant.read_data.append(read_data)
+
+ return mapping_map
+
+ def _create_write_mapping(self, from_participant: n.ParticipantNode, to_participant: n.ParticipantNode,
+ from_mesh: n.MeshNode, to_mesh: n.MeshNode,
+ mapping_map: dict[tuple[n.MeshNode, n.MeshNode], n.MappingNode]) -> None:
+ """
+ Create a write-mapping between the given meshes.
+ A write-mapping is located at the from-participant, who writes to their own mesh,
+ receiving the to-mesh from the to-participant.
+ :param from_participant: The participant who writes to the from-mesh and specifies the mapping.
+ :param to_participant: The participant who sends their mesh to the from-participant.
+ :param from_mesh: The mesh the from-participant writes to.
+ :param to_mesh: The mesh the from-participant receives from the to-participant and is mapped to.
+ :param mapping_map: A dict mapping (a-mesh, b-mesh) to mapping nodes.
+ """
+        # A write-mapping is on the from-participant, writing to their own mesh, receiving the to-mesh from the to-participant
+ mapping: n.MappingNode = n.MappingNode(parent_participant=from_participant,
+ direction=e.Direction.WRITE,
+ from_mesh=from_mesh,
+ to_mesh=to_mesh,
+ just_in_time=False,
+ constraint=e.MappingConstraint.CONSERVATIVE,
+ method=helper.DEFAULT_MAPPING_METHOD)
+ mapping_map[(from_mesh, to_mesh)] = mapping
+ from_participant.mappings.append(mapping)
+ # In a write-mapping, the writer has to receive the to-mesh to be able to map to it
+ receive_mesh: n.ReceiveMeshNode = n.ReceiveMeshNode(participant=from_participant,
+ mesh=to_mesh,
+ from_participant=to_participant,
+ api_access=False)
+ from_participant.receive_meshes.append(receive_mesh)
+ logger.debug(f"Added receive-mesh {receive_mesh.mesh.name} to participant {from_participant.name}.")
+ logger.debug(f"Created write-mapping between {from_mesh.name} and {to_mesh.name} "
+ f"for participant {from_participant.name}.")
+
+ def _create_read_mapping(self, from_participant: n.ParticipantNode, to_participant: n.ParticipantNode,
+ from_mesh: n.MeshNode, to_mesh: n.MeshNode,
+ mapping_map: dict[tuple[n.MeshNode, n.MeshNode], n.MappingNode]) -> None:
+ """
+        Create a read-mapping between the given meshes.
+ A read-mapping is located at the to-participant, who reads from their own mesh,
+ receiving the from-mesh from the from-participant.
+ :param from_participant: The participant who sends their mesh to the to-participant.
+ :param to_participant: The participant who specifies the mapping and reads from their own mesh.
+ :param from_mesh: The mesh that is mapped from the from-participant to the to-participant.
+ :param to_mesh: The mesh the to-participant reads from.
+ :param mapping_map: A dict mapping (a-mesh, b-mesh) to mapping nodes.
+ """
+        # A read-mapping is on the to-participant, reading from their own mesh, receiving the from-mesh from the from-participant
+ mapping: n.MappingNode = n.MappingNode(parent_participant=to_participant,
+ direction=e.Direction.READ,
+ from_mesh=from_mesh,
+ to_mesh=to_mesh,
+ just_in_time=False,
+ constraint=e.MappingConstraint.CONSISTENT,
+ method=helper.DEFAULT_MAPPING_METHOD)
+ mapping_map[(from_mesh, to_mesh)] = mapping
+ to_participant.mappings.append(mapping)
+ # In a read-mapping, the reader has to receive the from-mesh to be able to map from it
+ receive_mesh: n.ReceiveMeshNode = n.ReceiveMeshNode(participant=to_participant,
+ mesh=from_mesh,
+ from_participant=from_participant,
+ api_access=False)
+ to_participant.receive_meshes.append(receive_mesh)
+ logger.debug(f"Added receive-mesh {receive_mesh.mesh.name} to participant {to_participant.name}.")
+ logger.debug(f"Created read-mapping between {from_mesh.name} and {to_mesh.name} "
+ f"for participant {to_participant.name}.")
+
+ def _initialize_meshes_and_patches(self, participant_patch_map: dict[tuple[n.ParticipantNode, n.ParticipantNode],
+ dict[str, set[str]]]) -> dict[tuple[n.ParticipantNode, n.ParticipantNode, str], n.MeshNode]:
+ """
+ Initialize meshes based on the communication of participants and the involved patches.
+ First, communication between participants is counted to be able to determine the number of meshes needed and
+ thus allow for better naming.
+ Next, for each communication pair, mesh(es) are created based on the involved patches.
+ During the mesh creation, while iterating over the patches, corresponding patch nodes are created.
+ :param participant_patch_map: A dict mapping (a-participant, b-participant) to a dict of extensive and intensive patches.
+ :return: A dict mapping (a-participant, b-participant, extensive/intensive) to mesh nodes.
+ """
+ # Count the frequency of a participant appearing in communications to determine the number of meshes to create
+        frequency_map: dict[n.ParticipantNode, int] = {p1: 0 for p1, _ in participant_patch_map}
+ for (from_participant, to_participant) in participant_patch_map:
+ # Since the map is symmetric, we only need to increment "from"
+ frequency_map[from_participant] += 1
+ # Create a dict mapping participant communication and data label to meshes
+ participant_label_mesh_map: dict[tuple[n.ParticipantNode, n.ParticipantNode, str], n.MeshNode] = {}
+ for (from_participant, to_participant) in participant_patch_map:
+ # Check how many patches the from-participant uses in communication with the to-participant
+ # Since the map is symmetric, it suffices to check the patches of "from"
+ extensives: int = len(participant_patch_map[(from_participant, to_participant)]["extensive"])
+ intensives: int = len(participant_patch_map[(from_participant, to_participant)]["intensive"])
+ participant_dim: int = self.participant_dimensionality[from_participant]
+ # Check if the communication uses both intensive and extensive data
+ if extensives > 0 and intensives > 0:
+ # Check if the from-participant communicates with more than one participant
+ if frequency_map[from_participant] > 1:
+ # In this case, we name meshes "FROM-TO-Extensive/Intensive-Mesh"
+ # Capitalize only the first letter. This allows all-caps names
+ mesh_name: str = (f"{from_participant.name[:1].upper() + from_participant.name[1:]}"
+ f"-{to_participant.name[:1].upper() + to_participant.name[1:]}")
+ extensive_mesh: n.MeshNode = n.MeshNode(name=mesh_name + "-Extensive-Mesh", use_data=[],
+ dimensions=participant_dim)
+ intensive_mesh: n.MeshNode = n.MeshNode(name=mesh_name + "-Intensive-Mesh", use_data=[],
+ dimensions=participant_dim)
+
+ else:
+ # The participant communicates only with one other participant.
+ # The mesh is named "FROM-Extensive/Intensive-Mesh"
+ mesh_name: str = f"{from_participant.name[:1].upper() + from_participant.name[1:]}"
+ extensive_mesh: n.MeshNode = n.MeshNode(name=mesh_name + "-Extensive-Mesh", use_data=[],
+ dimensions=participant_dim)
+ intensive_mesh: n.MeshNode = n.MeshNode(name=mesh_name + "-Intensive-Mesh", use_data=[],
+ dimensions=participant_dim)
+ # Add meshes to the respective containers
+ from_participant.provide_meshes.append(extensive_mesh)
+ from_participant.provide_meshes.append(intensive_mesh)
+ self.meshes.append(extensive_mesh)
+ self.meshes.append(intensive_mesh)
+ participant_label_mesh_map[(from_participant, to_participant, "extensive")] = extensive_mesh
+ participant_label_mesh_map[(from_participant, to_participant, "intensive")] = intensive_mesh
+ logger.debug(f"Created extensive and intensive mesh for communication between "
+ f"{from_participant.name} and {to_participant.name}.")
+ # Create new patch nodes for from_participant
+ # Since only the patches of "from" are contained in
+ # participant_patch_map[(from_participant, to_participant)], we must not create any patches for "to"
+ for extensive_patch in participant_patch_map[(from_participant, to_participant)]["extensive"]:
+ patch_node: helper.PatchNode = helper.PatchNode(name=extensive_patch, participant=from_participant,
+ mesh=extensive_mesh,
+ label=helper.PatchState.EXTENSIVE)
+ self.patches.append(patch_node)
+ for intensive_patch in participant_patch_map[(from_participant, to_participant)]["intensive"]:
+ patch_node: helper.PatchNode = helper.PatchNode(name=intensive_patch, participant=from_participant,
+ mesh=intensive_mesh,
+ label=helper.PatchState.INTENSIVE)
+ self.patches.append(patch_node)
+ # The participant pair only communicates one kind of data
+ else:
+ # Check if the from-participant communicates with more than one participant
+ if frequency_map[from_participant] > 1:
+ # In this case, we name meshes "FROM-TO-Mesh"
+ mesh_name: str = (f"{from_participant.name[:1].upper() + from_participant.name[1:]}"
+ f"-{to_participant.name[:1].upper() + to_participant.name[1:]}")
+ mesh: n.MeshNode = n.MeshNode(name=mesh_name + "-Mesh", use_data=[],
+ dimensions=participant_dim)
+ else:
+ # The participant communicates only with one other participant.
+ # The mesh is named "FROM-Mesh"
+ mesh_name: str = f"{from_participant.name[:1].upper() + from_participant.name[1:]}"
+ mesh: n.MeshNode = n.MeshNode(name=mesh_name + "-Mesh", use_data=[], dimensions=participant_dim)
+ # Add the mesh to the respective container
+ from_participant.provide_meshes.append(mesh)
+ self.meshes.append(mesh)
+ # Determine the kind of mesh this is (extensive or intensive)
+ label: str = "extensive" if extensives > 0 else "intensive"
+ participant_label_mesh_map[(from_participant, to_participant, label)] = mesh
+ logger.debug(f"Created mesh for communication between {from_participant.name} "
+ f"and {to_participant.name}.")
+ # Create new patch nodes for only this label
+ for patch in participant_patch_map[(from_participant, to_participant)][label]:
+ patch_node: helper.PatchNode = helper.PatchNode(name=patch, participant=from_participant,
+ mesh=mesh, label=helper.PatchState(label))
+ self.patches.append(patch_node)
+
+ return participant_label_mesh_map
+
+ def _initialize_participants(self) -> dict[str, n.ParticipantNode]:
+ """
+ Initialize participant nodes and their dimensionality from the topology dict.
+ Return a dictionary mapping participant names to participant nodes.
+ :return: A dict[str, ParticipantNode]
+ """
+ participant_map: dict[str, n.ParticipantNode] = {}
+ for participant in self.topology["participants"]:
+ # The value is a dict that contains "name, solver, optional[dimensionality]"
+            participant_node: n.ParticipantNode = n.ParticipantNode(name=participant["name"])
+            participant_map[participant["name"]] = participant_node
+            self.participants.append(participant_node)
+            dim: int = participant.get("dimensionality", helper.DEFAULT_PARTICIPANT_DIMENSIONALITY)
+            if dim < 2 or dim > 3:
+                logger.warning(f"Dimensionality of participant {participant_node.name} is defined as {dim}. "
+                               f"Setting it to {helper.DEFAULT_PARTICIPANT_DIMENSIONALITY}.")
+                dim = helper.DEFAULT_PARTICIPANT_DIMENSIONALITY
+            self.participant_dimensionality[participant_node] = dim
+            logger.debug(f"Initialized participant {participant_node.name} with dimensionality {dim}.")
+ return participant_map
+
+ def _data_preprocessing(self, participant_map: dict[str, n.ParticipantNode]):
+ """
+ Update data names in the topology dict, if they fulfill these conditions:
+ - Data is sent from participant A to participant B with the same name multiple times
+ - The exchanges are of the same type (strong/weak)
+        Such exchanges lead to errors, as they are only "unique" in their patch names,
+        which are not included in the precice-config; i.e., they would result in duplicate exchanges.
+ Such a data name is then "uniquified", directly in the topology dict.
+ :param participant_map: A dict mapping participant names to participant nodes.
+ :return: None
+ """
+ # Map tuples of from-/to-participants, data-name, data-type and exchange-type to the from-/to-patches
+ # that are used in exchanges
+ exchange_patch_map: dict[
+ tuple[n.ParticipantNode, n.ParticipantNode, str, e.DataType, str], dict[str, list[str]]] = {}
+ for exchange in self.topology["exchanges"]:
+ from_participant: n.ParticipantNode = participant_map[exchange["from"]]
+ to_participant: n.ParticipantNode = participant_map[exchange["to"]]
+ data: str = exchange["data"]
+ data_type = self._get_data_type(exchange)
+            exchange_type: str = exchange["type"]
+            from_patch: str = exchange["from-patch"]
+            to_patch: str = exchange["to-patch"]
+            if (from_participant, to_participant, data, data_type, exchange_type) in exchange_patch_map:
+                exchange_patch_map[from_participant, to_participant, data, data_type, exchange_type][
+                    "from-patch"].append(from_patch)
+                exchange_patch_map[from_participant, to_participant, data, data_type, exchange_type][
+                    "to-patch"].append(to_patch)
+            else:
+                exchange_patch_map[from_participant, to_participant, data, data_type, exchange_type] = \
+                    {"from-patch": [from_patch], "to-patch": [to_patch]}
+
+        # Check every collected tuple for violations
+        for key, patches in exchange_patch_map.items():
+            from_participant, to_participant, data, data_type, exchange_type = key
+            from_patches = patches["from-patch"]
+            to_patches = patches["to-patch"]
+            # Check if it is the first occurrence since we want to preserve the original data name
+            initial: bool = True
+            # If a tuple is not unique, its dict will have lists of length greater than 1
+            if len(from_patches) > 1 or len(to_patches) > 1:
+                for from_patch, to_patch in zip(from_patches, to_patches):
+                    # Iterate over all exchanges to check if they correspond to this tuple
+                    for exchange in self.topology["exchanges"]:
+                        # Check that all values match
+                        if (from_participant.name == exchange["from"] and to_participant.name == exchange["to"]
+                                and data == exchange["data"] and exchange_type.lower() == exchange["type"].lower()
+                                and data_type == self._get_data_type(exchange)
+                                and from_patch == exchange["from-patch"] and to_patch == exchange["to-patch"]):
+ # Do not modify the first occurrence in order to not uniquify all data names
+ if initial:
+ initial = False
+ continue
+ # All values match, and it is not the first violation
+ # Thus, uniquify the data name
+ # Choose a new uniquifier for each violation
+ uniquifier: str = helper.get_uniquifier()
+ new_data_name: str = f"{uniquifier.capitalize()}-{helper.capitalize_name(data)}"
+ exchange["data"] = new_data_name
+
+ def _get_data_type(self, exchange: dict) -> e.DataType:
+ """
+ Get the data-type for the data in the given exchange or choose a default if none is given.
+ E.g., temperature defaults to DataType.SCALAR, whereas force defaults to DataType.VECTOR.
+ :param exchange: The exchange for which the data-type is needed.
+ :return: A data-type for the given exchange, which defaults to the helper.DEFAULT_DATA_TYPE.
+ """
+ data: str = exchange["data"]
+ data_type: e.DataType = exchange.get("data-type")
+ if data_type is None:
+ data_type = helper.DEFAULT_DATA_TYPE
+ # Check if the data has a default type. Sort by key length to have a deterministic order
+ for key in sorted(helper.DEFAULT_DATA_TYPES.keys(), key=len, reverse=True):
+ if key.lower() in data.lower():
+ data_type = helper.DEFAULT_DATA_TYPES[key]
+ break
+ return data_type
+
+ def _patch_preprocessing(self, participant_map: dict[str, n.ParticipantNode]):
+ """
+ Preprocess patch labels in the topology.
+ This is done by first assigning a label ("extensive" or "intensive") to each patch,
+ then splitting them up if necessary; i.e., if they have both labels.
+ Finally, a map (participant_1, participant_2) -> {extensive: {i_j}, intensive: {l_k}} is created,
+ where an entry means that p1 uses extensive patches i_j and intensive patches l_k for communication with p2.
+ :param participant_map: A dict mapping participant names to participant nodes.
+ :return: A dict mapping participant pairs to patches used in communication between them.
+ """
+ participant_patch_label_map: dict[tuple[n.ParticipantNode, str], set[str]] = {}
+
+ for exchange in self.topology["exchanges"]:
+ from_participant: n.ParticipantNode = participant_map[exchange["from"]]
+ to_participant: n.ParticipantNode = participant_map[exchange["to"]]
+ from_patch: str = exchange["from-patch"]
+ to_patch: str = exchange["to-patch"]
+ data_name: str = exchange["data"]
+ # Get data label
+ data_label: str = helper.get_data_label(data_name).value
+ # Create new entries if necessary
+ if (from_participant, from_patch) not in participant_patch_label_map:
+ participant_patch_label_map[(from_participant, from_patch)] = set()
+ if (to_participant, to_patch) not in participant_patch_label_map:
+ participant_patch_label_map[(to_participant, to_patch)] = set()
+ # Add the label to the patch
+ participant_patch_label_map[(from_participant, from_patch)].add(data_label)
+ participant_patch_label_map[(to_participant, to_patch)].add(data_label)
+
+ # Map split-up patches to new patches
+ participant_patch_new_patch_map: dict[tuple[n.ParticipantNode, str], dict[str, str]] = {}
+ # Now, check if a patch has more than one label (i.e., both intensive and extensive)
+ for exchange in self.topology["exchanges"]:
+ from_participant: n.ParticipantNode = participant_map[exchange["from"]]
+ to_participant: n.ParticipantNode = participant_map[exchange["to"]]
+ from_patch: str = exchange["from-patch"]
+ to_patch: str = exchange["to-patch"]
+ data_name: str = exchange["data"]
+ # Extensive or intensive
+ data_label: str = helper.get_data_label(data_name).value
+
+ # Check if this patch has been split up before
+ if (from_participant, from_patch) in participant_patch_new_patch_map:
+ # If so, then we just need to assign the new label to the patch
+ exchange["from-patch"] = participant_patch_new_patch_map[(from_participant, from_patch)][data_label]
+ # Else, if this patch has not been split, check if it needs to be split up
+ elif len(participant_patch_label_map[(from_participant, from_patch)]) > 1:
+ extensive_patch: str = f"{from_patch}-extensive"
+ intensive_patch: str = f"{from_patch}-intensive"
+ participant_patch_new_patch_map[(from_participant, from_patch)] = {"extensive": extensive_patch,
+ "intensive": intensive_patch}
+ logger.warning(f"Split patch \"{from_patch}\" of participant {from_participant.name} into "
+ f"extensive patch \"{extensive_patch}\" and intensive patch \"{intensive_patch}\".")
+ # Assign new patch name to the topology
+ exchange["from-patch"] = extensive_patch if data_label == "extensive" else intensive_patch
+ # If the patch has never been split and is not supposed to be split, nothing needs to be done
+
+ # Check if this patch has been split up before
+ if (to_participant, to_patch) in participant_patch_new_patch_map:
+ # If so, then we just need to assign the new label to the patch
+ exchange["to-patch"] = participant_patch_new_patch_map[(to_participant, to_patch)][data_label]
+ # Else, if this patch has not been split, check if it needs to be split up
+ elif len(participant_patch_label_map[(to_participant, to_patch)]) > 1:
+ extensive_patch: str = f"{to_patch}-extensive"
+ intensive_patch: str = f"{to_patch}-intensive"
+ participant_patch_new_patch_map[(to_participant, to_patch)] = {"extensive": extensive_patch,
+ "intensive": intensive_patch}
+ logger.warning(f"Split patch \"{to_patch}\" of participant {to_participant.name} into "
+ f"extensive patch \"{extensive_patch}\" and intensive patch \"{intensive_patch}\".")
+ # Assign new patch name to the topology
+ exchange["to-patch"] = extensive_patch if data_label == "extensive" else intensive_patch
+ # If the patch has never been split and is not supposed to be split, nothing needs to be done
+
+ # Now create a map for which participant pair uses which patch
+ # This means that for (p_1,p_2) -> {i_1,...,i_n}, p_1 uses i_j in communication with p_2; p_2 might use other patches
+ participant_patch_map: dict[tuple[n.ParticipantNode, n.ParticipantNode], dict[str, set[str]]] = {}
+ for exchange in self.topology["exchanges"]:
+ from_participant: n.ParticipantNode = participant_map[exchange["from"]]
+ to_participant: n.ParticipantNode = participant_map[exchange["to"]]
+ from_patch: str = exchange["from-patch"]
+ to_patch: str = exchange["to-patch"]
+ data_name: str = exchange["data"]
+ data_label: str = helper.get_data_label(data_name).value
+ # Initialize entries if necessary
+ if (from_participant, to_participant) not in participant_patch_map:
+ participant_patch_map[(from_participant, to_participant)] = {"extensive": set(), "intensive": set()}
+ # If this direction does not yet exist, the other direction is also not initialized yet
+ participant_patch_map[(to_participant, from_participant)] = {"extensive": set(), "intensive": set()}
+ # From-participant uses from-patch in communication with to-participant
+ participant_patch_map[(from_participant, to_participant)][data_label].add(from_patch)
+ # To-participant uses to-patch in communication with from-participant
+ participant_patch_map[(to_participant, from_participant)][data_label].add(to_patch)
+
+ return participant_patch_map
+
+ def _participant_patch_map(self, participant_map: dict[str, n.ParticipantNode]) -> dict[
+ n.ParticipantNode, set[str]]:
+ """
+        Create a dictionary mapping each participant to the set of patches it uses.
+        This is done by iterating over each exchange and adding the involved patches to the involved participants.
+        This assumes self.participants and self.topology are already initialized.
+        :param participant_map: A dict mapping participant names to participant nodes.
+        :return: A dict mapping each participant node to its set of patch names.
+ """
+ patch_map: dict[n.ParticipantNode, set[str]] = {p: set() for p in self.participants}
+ for exchange in self.topology["exchanges"]:
+ from_participant: n.ParticipantNode = participant_map[exchange["from"]]
+ patch_map[from_participant].add(exchange["from-patch"])
+
+ to_participant: n.ParticipantNode = participant_map[exchange["to"]]
+ patch_map[to_participant].add(exchange["to-patch"])
+ logger.debug(f"Added entries for participant {from_participant.name} and patch {exchange['from-patch']}; "
+ f"as well as for participant {to_participant.name} and patch {exchange['to-patch']}.")
+ return patch_map
+
+ def _determine_control_participant(self, participants: list[n.ParticipantNode],
+ bidirectional_strong_couplings: list[dict]) -> n.ParticipantNode:
+ """
+ Determine the control participant for a multi-coupling scheme
+ based on the frequency of each participant in bidirectional strong couplings.
+ :param participants: The participants in the multi-coupling scheme.
+ :param bidirectional_strong_couplings: The bidirectional strong couplings of the multi-coupling scheme.
+ :return: The control participant as a ParticipantNode.
+ """
+        # Count how often each participant appears in bidirectional couplings to determine the control participant
+ frequency_map: dict[n.ParticipantNode, int] = {participant: 0 for participant in participants}
+ for coupling in bidirectional_strong_couplings:
+ frequency_map[coupling["from"]] += 1
+ frequency_map[coupling["to"]] += 1
+ # On a tie, it will choose the first participant in the list
+ control_participant: n.ParticipantNode = max(frequency_map, key=frequency_map.get)
+ logger.debug(f"Control participant determined to be {control_participant.name} "
+ f"with frequency {frequency_map[control_participant]}.")
+ return control_participant
+
+ def _initialize_data(self, participant_map: dict[str, n.ParticipantNode]) -> dict[frozenset, n.DataNode]:
+ """
+ Initialize data nodes based on the participants and exchanges in the topology.
+ This takes into account the type of the data node, i.e., either scalar or vector.
+ In case a pair of participants A and B exchanges a data "C" in both directions (A->B and B->A),
+ two nodes are created, one with a "uniquified" name.
+ In case participants A and B exchange data "C" with different types (e.g., A->B with scalar data
+ and B->A with vector data), two data nodes "C-Scalar" and "C-Vector" are created.
+ These cases can also be mixed and are thus even more challenging to handle;
+ in particular, with more than four exchanges,
+ there may not be enough information to uniquely identify the data node.
+ Such cases should, however, not occur frequently.
+ :param participant_map: A dict mapping participant names to participant nodes.
+ :return: A dict mapping exchanges to data nodes.
+ """
+ # Map exchanges to data nodes. Use a frozenset since it is hashable and can be used as a key in a dict
+ exchange_data_map: dict[frozenset, n.DataNode] = {}
+ # Map data names to their respective data nodes, differentiating between "vector" and "scalar" data
+ data_name_map: dict[str, dict[e.DataType, list[n.DataNode]]] = {}
+ # Map pairs of participants and data names to data nodes, differentiating between "vector" and "scalar" data
+ participant_data_name_map: dict[
+ tuple[n.ParticipantNode, n.ParticipantNode, str], dict[e.DataType, n.DataNode]] = {}
+ # Keep track of data nodes exchanged by participants
+ participant_data_map: dict[tuple[n.ParticipantNode, str], list[n.DataNode]] = {}
+
+ for exchange in self.topology["exchanges"]:
+ data_name: str = exchange["data"]
+ data_type: e.DataType = exchange.get("data-type")
+ if data_type is None:
+ data_type = helper.DEFAULT_DATA_TYPE
+ # Check if the data has a default type. Sort by key length to have a deterministic order
+ for key in sorted(helper.DEFAULT_DATA_TYPES.keys(), key=len, reverse=True):
+ if key.lower() in data_name.lower():
+ data_type = helper.DEFAULT_DATA_TYPES[key]
+ break
+ logger.warning(f"No data type provided for data \"{data_name}\". "
+ f"Choosing default type \"{data_type.value}\".")
+ else:
+ data_type = e.DataType(data_type)
+
+ from_participant: n.ParticipantNode = participant_map[exchange["from"]]
+ to_participant: n.ParticipantNode = participant_map[exchange["to"]]
+ logger.debug(f"Handling data {data_name} with type {data_type.value} between participants "
+ f"{from_participant.name} and {to_participant.name}")
+
+ # Roughly, there are three possibilities:
+ # 1. The data is not known (great), create a new data node.
+ # 2. The data is known and exchanged from A->B and from B->A. This is not allowed; create a new unique data node.
+ # 3. The data is known and exchanged from A->B. Now there are two more possibilities:
+ # 3.1 Known data has type t; current data has type t (great); we do nothing
+ # 3.2 Known data has type t; current data has type s; we need to create a new unique data node.
+            # Variations of these cases can occur.
+            # The more participants are involved, the less unique these cases become,
+            # and the resulting precice-config.xml might depend on the ordering of exchanges.
+            # However, these are all very special cases that only occur when many exchanges
+            # share the same data name.
+
+ # Check if this data is known
+ if data_name in data_name_map:
+ vector_data_node: list[n.DataNode] = data_name_map[data_name].get(e.DataType.VECTOR)
+ scalar_data_node: list[n.DataNode] = data_name_map[data_name].get(e.DataType.SCALAR)
+ log_msg: str = f"Data {data_name} with type {data_type.value} is already known with type"
+ log_msg += f" vector" if vector_data_node is not None else ""
+ log_msg += " and" if vector_data_node is not None and scalar_data_node is not None else ""
+ log_msg += f" scalar" if scalar_data_node is not None else ""
+ logger.debug(log_msg)
+
+ if vector_data_node and scalar_data_node:
+ # Check if the to_participant already exchanged this data with vector and scalar type.
+ # This means that "data-scalar" and "data-vector" already exist.
+ # In this case, we want to create a new unique data node; so unique-data.
+                # This case is entered again if the from_participant exchanged this data with both vector and scalar type.
+                # Then it will generate "unique-data-vector" and "unique-data-scalar".
+ if ((to_participant, data_name) in participant_data_map
+ and any(vdn in participant_data_map[(to_participant, data_name)]
+ for vdn in vector_data_node)
+ and any(sdn in participant_data_map[(to_participant, data_name)]
+ for sdn in scalar_data_node)):
+                    # Check the case that the from_participant already exchanged this data with either vector or scalar type (not both).
+ if (from_participant, data_name) in participant_data_map:
+                        # This should be == 1, since we only enter this branch if B->A exchanges both vector and scalar,
+                        # and A->B already exchanges one type. Since it is not allowed/possible to exchange
+                        # the same data with the same type multiple times, we should not reach this point more than once.
+ assert len(participant_data_map[(from_participant, data_name)]) == 1, "Duplicate entry."
+ # Then, we want to get this data to have "unique-data-vector" and "unique-data-scalar",
+ # instead of "unique1-data" and "unique2-data".
+ old_data_node: n.DataNode = participant_data_map[(from_participant, data_name)][0]
+ old_data_name: str = old_data_node.name
+ new_data_name: str = helper.capitalize_name(old_data_name + "-" + data_type.value)
+ old_data_node.name = helper.capitalize_name(
+ old_data_node.name + "-" + old_data_node.data_type.value)
+ new_data_node: n.DataNode = n.DataNode(name=new_data_name, data_type=data_type)
+
+ exchange_data_map[frozenset(exchange.items())] = new_data_node
+ self.data.append(new_data_node)
+ data_name_map[data_name][data_type].append(new_data_node)
+ if (from_participant, data_name) in participant_data_map:
+ participant_data_map[(from_participant, data_name)].append(new_data_node)
+ else:
+ participant_data_map[(from_participant, data_name)] = [new_data_node]
+ if (from_participant, to_participant, data_name) not in participant_data_name_map:
+ participant_data_name_map[(from_participant, to_participant, data_name)] = {
+ data_type: new_data_node}
+ else:
+ participant_data_name_map[(from_participant, to_participant, data_name)][
+ data_type] = new_data_node
+ logger.warning(f"Split up data \"{old_data_name}\" into \"{old_data_node.name}\" and "
+ f"\"{new_data_node.name}\", since it occurs with different data types.")
+ # The to_participant does not exchange this data with vector or scalar type yet.
+ # This means we uniquify the data name (instead of "splitting").
+ else:
+ uniquifier: str = helper.get_uniquifier()
+ new_data_name: str = f"{uniquifier.capitalize()}-{helper.capitalize_name(data_name)}"
+ logger.warning(
+ f"Data name \"{data_name}\" is exchanged by participants {from_participant.name} "
+ f"and {to_participant.name} in both directions. Using \"{new_data_name}\" "
+ f"for one direction.")
+ new_data_node = n.DataNode(name=new_data_name, data_type=data_type)
+ if (from_participant, to_participant, data_name) not in participant_data_name_map:
+ participant_data_name_map[(from_participant, to_participant, data_name)] = {
+ data_type: new_data_node}
+ else:
+ participant_data_name_map[(from_participant, to_participant, data_name)][
+ data_type] = new_data_node
+ if data_type in data_name_map[data_name]:
+ data_name_map[data_name][data_type].append(new_data_node)
+ else:
+ data_name_map[data_name][data_type] = [new_data_node]
+ self.data.append(new_data_node)
+ if (from_participant, data_name) in participant_data_map:
+ participant_data_map[(from_participant, data_name)].append(new_data_node)
+ else:
+ participant_data_map[(from_participant, data_name)] = [new_data_node]
+ exchange_data_map[frozenset(exchange.items())] = new_data_node
+
+ # Check if this data is already exchanged in the other direction (but not with both types)
+ elif (to_participant, from_participant, data_name) in participant_data_name_map:
+ # If so, the data name needs to be uniquified (and a new data node has to be created),
+ # to avoid both participants writing and reading this data
+ logger.debug(f"{from_participant.name}->{to_participant.name}: {data_name}({data_type.value})")
+ # Here, we also want to check that A -> B does not yet exchange this data;
+ # in that case we should not uniquify but split
+ if (from_participant, to_participant, data_name) in participant_data_name_map:
+                    logger.debug(participant_data_name_map[(from_participant, to_participant, data_name)])
+                    # The data must be known with the other type, since it cannot yet be known with the current type
+ if data_type == e.DataType.VECTOR:
+ old_data_node: n.DataNode = \
+ participant_data_name_map[(from_participant, to_participant, data_name)][
+ e.DataType.SCALAR]
+ else:
+ old_data_node: n.DataNode = \
+ participant_data_name_map[(from_participant, to_participant, data_name)][
+ e.DataType.VECTOR]
+ new_data_name: str = helper.capitalize_name(old_data_node.name + "-" + data_type.value)
+ old_data_name: str = helper.capitalize_name(
+ old_data_node.name + "-" + old_data_node.data_type.value)
+ old_data_node.name = old_data_name
+ new_data_node: n.DataNode = n.DataNode(name=new_data_name, data_type=data_type)
+ logger.warning(f"Split up data \"{old_data_name}\" into \"{old_data_node.name}\" and "
+ f"\"{new_data_node.name}\", since it occurs with different data types.")
+ else:
+ uniquifier: str = helper.get_uniquifier()
+ new_data_name: str = f"{uniquifier.capitalize()}-{helper.capitalize_name(data_name)}"
+ logger.warning(
+ f"Data name \"{data_name}\" is exchanged by participants {from_participant.name} "
+ f"and {to_participant.name} in both directions. Using \"{new_data_name}\" "
+ f"for one direction.")
+ new_data_node = n.DataNode(name=new_data_name, data_type=data_type)
+ # Add information to the maps
+ if (from_participant, to_participant, data_name) not in participant_data_name_map:
+ participant_data_name_map[(from_participant, to_participant, data_name)] = {
+ data_type: new_data_node}
+ else:
+ participant_data_name_map[(from_participant, to_participant, data_name)][
+ data_type] = new_data_node
+ if data_type in data_name_map[data_name]:
+ data_name_map[data_name][data_type].append(new_data_node)
+ else:
+ data_name_map[data_name][data_type] = [new_data_node]
+ self.data.append(new_data_node)
+ if (from_participant, data_name) in participant_data_map:
+ participant_data_map[(from_participant, data_name)].append(new_data_node)
+ else:
+ participant_data_map[(from_participant, data_name)] = [new_data_node]
+ exchange_data_map[frozenset(exchange.items())] = new_data_node
+ else:
+ # Otherwise, we use a data node already exchanged by the from-participant
+ data_node: n.DataNode = None
+ for key, value in participant_data_name_map.items():
+ # key[0] is the from-participant, key[1] is the to-participant, key[2] is the data name
+ if key[0] == from_participant and key[2] == data_name and data_type in value:
+ data_node = value[data_type]
+ # This should not happen
+ assert data_node is not None, "Data node not found."
+ logger.debug(f"Chose data {data_node.name} with type {data_type.value}.")
+ exchange_data_map[frozenset(exchange.items())] = data_node
+
+ # Either a vector or a scalar variant of the data is already known (not both)
+ else:
+ # This data node is used solely to check the type of the data node,
+ # and, since they all have the same "name", also to determine a new name
+ data_node: n.DataNode = vector_data_node[0] if vector_data_node else scalar_data_node[0]
+
+ # Check if this data is known with another data type
+ # For this, it does not matter which data node out of the list we choose
+ if data_node.data_type != data_type:
+ data_node.name = helper.capitalize_name(data_node.name + "-" + data_node.data_type.value)
+ new_data_node_name: str = helper.capitalize_name(data_name + "-" + data_type.value)
+ new_data_node: n.DataNode = n.DataNode(name=new_data_node_name, data_type=data_type)
+ # Add information to the maps
+ if (from_participant, to_participant, data_name) not in participant_data_name_map:
+ participant_data_name_map[(from_participant, to_participant, data_name)] = {
+ data_type: new_data_node}
+ else:
+ participant_data_name_map[(from_participant, to_participant, data_name)][
+ data_type] = new_data_node
+ if (from_participant, data_name) in participant_data_map:
+ participant_data_map[(from_participant, data_name)].append(new_data_node)
+ else:
+ participant_data_map[(from_participant, data_name)] = [new_data_node]
+ if data_type in data_name_map[data_name]:
+ data_name_map[data_name][data_type].append(new_data_node)
+ else:
+ data_name_map[data_name][data_type] = [new_data_node]
+ self.data.append(new_data_node)
+ logger.warning(f"Split up data \"{data_name}\" into {data_node.name} and {new_data_node.name}, "
+ f"since it occurs with different data types.")
+ exchange_data_map[frozenset(exchange.items())] = new_data_node
+
+ # Check if this data is exchanged in the other direction, which is not allowed
+ elif (to_participant, from_participant, data_name) in participant_data_name_map:
+ uniquifier: str = helper.get_uniquifier()
+ new_data_name: str = f"{uniquifier.capitalize()}-{helper.capitalize_name(data_name)}"
+ logger.warning(
+ f"Data name \"{data_name}\" is exchanged by participants {from_participant.name} "
+ f"and {to_participant.name} in both directions. Using \"{new_data_name}\" "
+ f"for one direction.")
+ new_data_node = n.DataNode(name=new_data_name, data_type=data_type)
+ # Add information to the maps
+ if (from_participant, to_participant, data_name) not in participant_data_name_map:
+ participant_data_name_map[(from_participant, to_participant, data_name)] = {
+ data_type: new_data_node}
+ else:
+ participant_data_name_map[(from_participant, to_participant, data_name)][
+ data_type] = new_data_node
+ if (from_participant, data_name) in participant_data_map:
+ participant_data_map[(from_participant, data_name)].append(new_data_node)
+ else:
+ participant_data_map[(from_participant, data_name)] = [new_data_node]
+ self.data.append(new_data_node)
+ if data_type in data_name_map[data_name]:
+ data_name_map[data_name][data_type].append(new_data_node)
+ else:
+ data_name_map[data_name][data_type] = [new_data_node]
+ exchange_data_map[frozenset(exchange.items())] = new_data_node
+ # Otherwise, record that we observed this data exchange in one direction
+ else:
+ exchange_data_map[frozenset(exchange.items())] = data_node
+ if (from_participant, to_participant, data_name) not in participant_data_name_map:
+ participant_data_name_map[(from_participant, to_participant, data_name)] = {
+ data_type: data_node}
+ else:
+ participant_data_name_map[(from_participant, to_participant, data_name)][
+ data_type] = data_node
+
+ # This data is unknown, so we create a new data node
+ else:
+ data_node: n.DataNode = n.DataNode(name=helper.capitalize_name(data_name), data_type=data_type)
+ data_name_map[data_name] = {data_type: [data_node]}
+ logger.debug(f"Created new data node {data_node.name} for data {data_name} "
+ f"between participants {from_participant.name} and {to_participant.name}")
+ self.data.append(data_node)
+ participant_data_map[(from_participant, data_name)] = [data_node]
+ participant_data_name_map[(from_participant, to_participant, data_name)] = {data_type: data_node}
+ exchange_data_map[frozenset(exchange.items())] = data_node
+
+ return exchange_data_map
+
+ def _contains_write_data(self, write_data: n.WriteDataNode, suspect_list: list[n.WriteDataNode]) -> bool:
+ """
+ Check if the given list of write-data nodes already contains the given write-data node.
+ :param write_data: The write-data node to check for.
+ :param suspect_list: The list of write-data nodes to check in.
+ :return: True, if an equivalent write-data node already exists in the list. False otherwise.
+ """
+ for suspect in suspect_list:
+ if suspect.data == write_data.data and suspect.mesh == write_data.mesh and suspect.participant == write_data.participant:
+ logger.debug(f"Found write-data node with participant {suspect.participant.name}, "
+ f"mesh {suspect.mesh.name} and data {suspect.data.name}.")
+ return True
+ return False
+
+ def _contains_read_data(self, read_data: n.ReadDataNode, suspect_list: list[n.ReadDataNode]) -> bool:
+ """
+        Check if the given list of read-data nodes already contains the given read-data node.
+ :param read_data: The read-data node to check for.
+ :param suspect_list: The list of read-data nodes to check in.
+ :return: True, if an equivalent read-data node already exists in the list. False otherwise.
+ """
+ for suspect in suspect_list:
+ if suspect.data == read_data.data and suspect.mesh == read_data.mesh and suspect.participant == read_data.participant:
+ logger.debug(f"Found read-data node with participant {suspect.participant.name}, "
+ f"mesh {suspect.mesh.name} and data {suspect.data.name}.")
+ return True
+ return False
+
+
diff --git a/precicecasegenerate/schemas/README.md b/precicecasegenerate/schemas/README.md
index 1bc617a..46f95ed 100644
--- a/precicecasegenerate/schemas/README.md
+++ b/precicecasegenerate/schemas/README.md
@@ -1,127 +1,85 @@
-# Multi-Physics Simulation Topology Schema
-
-## Overview
-
-This JSON schema provides a comprehensive configuration mechanism for defining multi-physics simulation topologies, specifically designed for complex coupling scenarios in scientific computing and engineering simulations.
-
-## Schema Structure
-
-The topology schema consists of four main sections:
-
-### 1. Coupling Scheme Configuration
-- Flexible time window and iteration controls
-- Support for parallel and serial coupling modes
-- Optional display of standard values
-- Configurable maximum time and iterations
-
-#### Key Parameters
-- `max-time`: Maximum simulation time (integer or scientific notation)
-- `time-window-size`: Size of time windows (number or scientific notation)
-- `max-iterations`: Maximum coupling iterations
-- `coupling`: Coupling mode (parallel/serial)
-
-### 2. Acceleration Mechanisms
-Advanced coupling acceleration with multiple configuration options:
-
-#### Acceleration Methods
-- Supported methods: `constant`, `aitken`, `IQN-ILS`, `IQN-IMVJ`
-- Initial relaxation factor
-- Preconditioner configuration
-- Filtering options (QR1/QR2)
-
-#### Advanced Features
-- Iteration and time window reuse
-- IMVJ restart mode configuration
-- Singular value truncation
-- Preconditioner freezing
-
-### 3. Participants Configuration
-Define simulation participants with detailed specifications:
-
-- Mandatory fields: `name`, `solver`
-- Optional fields:
- - `dimensionality` (default: 3)
-- Minimum of 2 participant required
-
-### 4. Exchanges Configuration
-Each exchange defines a one-way data transfer between participants:
-
-#### Required Fields
-- `from`: Name of the source participant sending data (must match a name in the participants section)
-- `to`: Name of the target participant receiving data (must match a name in the participants section)
-- `data`: Type of data being exchanged (e.g., Force, Displacement, Velocity)
-- `type`: Coupling type defining the exchange interaction
- - `strong`: Tight coupling with immediate data synchronization
- - `weak`: Loose coupling with less frequent data exchange
-- `from-patch`: Source Interface Surface
- - The physical boundary or interface region on the source participant's mesh where data is extracted.
- - Must correspond to a defined boundary condition in the source solver.
- - For fluids: Typically the fluid-structure interface (e.g., "interface")
- - For solids: Typically the surface contacting the fluid (e.g., "surface")
-- `to-patch`: Target Interface Surface
- - The physical boundary or interface region on the target participant's mesh where data will be applied.
- - Must correspond to a defined boundary condition in the target solver.
-> [!NOTE]
-> `from-patch` and `to-patch` are only relevant for generating `adapter-config.json` files.
-
-
-#### Optional Fields
-- `data-type`: Specifies the data representation
- - `scalar`: Single numeric value (default)
- - `vector`: Multi-dimensional numeric data
-
-#### Data Type Constraints
-- Supported data types: Force, Displacement, Velocity, Pressure, Temperature, HeatTransfer
-- Naming follows the pattern: `[BaseType][OptionalModifier]`
-
-#### Example
+# The Topology File
+
+A `topology.yaml` file is the only file needed to run this program.
+
+> [!NOTE]
+> As a YAML file, the topology is case-, indent- and whitespace-sensitive.
+
+
+The JSON schema of the topology can be found in the `topology-schema.json` file.
+
+It consists of two main elements:
+
+- `participants`: The solvers involved in the simulation.
+- `exchanges`: How the solvers interact with one another.
+
+## Participants
+
+The `participants` element describes the main actors of the simulation through their `name`s and the `solver`s they use.
+It can hold an arbitrary number of elements, which must have pairwise unique names.
+The optional parameter `dimensionality` defines the dimensions of the meshes used by the participant.
+
+There must be at least one participant defined; however, for successful communication to be possible,
+at least two participants must exist.
+A valid entry might look as follows:
+
```yaml
-coupling-scheme:
- display_standard_values: true
participants:
- - name: Fluid
- solver: SU2
- - name: Solid
- solver: Calculix
-exchanges:
- - from: Fluid
- from-patch: interface # Fluid side boundary where forces are measured
- to: Solid
- to-patch: surface # Solid surface receiving fluid forces
- data: Force
- type: strong
- - from: Solid
- from-patch: surface # Solid surface where displacements occur
- to: Fluid
- to-patch: interface # Fluid boundary that adapts to displacements
- data: Displacement
- type: strong
+ - name: Crocodile # An arbitrary string
+ solver: SeeYouLater # An arbitrary string
+ dimensionality: 3 # Either 2, 3 or not given
+ - name: Alligator
+ solver: InAWhile
+ - ...
```
-## Schema Validation Rules
-
-- Requires `coupling-scheme`, `participants`, and `exchanges`
-- Optional `acceleration` configuration
-- Supports scientific notation for numeric values
-- Strict type and enumeration constraints
+## Exchanges
-## Compatibility and Best Practices
+The `exchanges` element describes how the main actors of the simulation communicate and relate to one another.
+This means that a single exchange needs to define a source participant `from`, a destination participant `to`, and
+patches (interfaces) of these participants through `from-patch` and `to-patch`.
+The data that is exchanged is given as `data`, and the type of the exchange (strong (implicit) or weak (explicit))
+is chosen through `type`.
+The optional parameter `data-type` can take either of the two values `scalar` or `vector`.
+If not given, a value might be inferred from the name of the `data`.
-- Designed for preCICE coupling framework
-- Supports complex multi-physics simulations
-- Flexible configuration for various scientific computing scenarios
+At least one exchange must exist for a valid topology. Exchanges must be unique.
+A valid entry might look as follows:
-## Limitations
+```yaml
+exchanges:
+ - from: Crocodile # A string that corresponds to a previously defined participant
+ to: Alligator # A string that corresponds to a previously defined participant
+ from-patch: claw # A patch (interface) of the `from`-participant
+ to-patch: claw # A patch (interface) of the `to`-participant
+ type: strong # The type of the data-exchange; either `strong` (implicit) or `weak` (explicit)
+ data: fish # The data that is being exchanged
+    data-type: vector # The type of the data that is being exchanged; either `scalar`, `vector` or not given
+ - ...
+```
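+
+If `data-type` is omitted, the generator chooses a default type, possibly inferred from keywords
+in the data name, and logs a warning. A minimal sketch (participant and patch names are
+placeholders):
+
+```yaml
+exchanges:
+  - from: Crocodile
+    to: Alligator
+    from-patch: claw
+    to-patch: claw
+    type: weak
+    data: fish # No data-type given; a default is chosen and a warning is logged
+```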
-- Predefined acceleration and exchange methods
-- Strict schema validation
+## Example
-## Future Extensions
+A complete example for a valid `topology.yaml` file is the following:
-- Potential expansion of acceleration methods
-- Enhanced data exchange types
-- More flexible validation rules
+```yaml
+participants:
+ - name: Crocodile # An arbitrary string
+ solver: SeeYouLater # An arbitrary string
+ dimensionality: 3 # Either 2, 3 or not given
+ - name: Alligator
+ solver: InAWhile
+exchanges:
+ - from: Crocodile # A string that corresponds to a previously defined participant
+ to: Alligator # A string that corresponds to a previously defined participant
+ from-patch: claw # A patch (interface) of the `from`-participant
+ to-patch: claw # A patch (interface) of the `to`-participant
+ type: strong # The type of the data-exchange; either `strong` (implicit) or `weak` (explicit)
+ data: fish # The data that is being exchanged
+ data-type: vector
+```
-## Contributing
+## Legacy
-Contributions to expand and improve the schema are welcome. Please follow the project's contribution guidelines.
+In version 1 of preCICE Case Generate, the topology had the additional elements `coupling-scheme` and `acceleration`.
+To simplify using the tool, they were removed; their parameters are now either inferred from the remaining
+two elements or assigned a default value.
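+
+For reference, a version 1 topology could contain a block like the following (values shown are
+the former schema defaults); such blocks are no longer part of the schema:
+
+```yaml
+coupling-scheme:
+  max-time: 1e-3
+  time-window-size: 1e-3
+  max-iterations: 50
+  coupling: parallel # or serial
+```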
\ No newline at end of file
diff --git a/precicecasegenerate/schemas/__init__.py b/precicecasegenerate/schemas/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/precicecasegenerate/schemas/topology-schema.json b/precicecasegenerate/schemas/topology-schema.json
index bbc01d3..14c4524 100644
--- a/precicecasegenerate/schemas/topology-schema.json
+++ b/precicecasegenerate/schemas/topology-schema.json
@@ -1,228 +1,98 @@
{
- "type": "object",
- "properties": {
- "coupling-scheme": {
+ "type": "object",
+ "properties": {
+ "participants": {
+ "type": "array",
+ "description": "List of participants in the simulation",
+ "items": {
"type": "object",
"properties": {
- "max-time": {
- "type": "number",
- "default": 1e-3
+ "name": {
+ "type": "string",
+ "description": "Unique name of the participant"
},
- "time-window-size": {
- "type": "number",
- "default": 1e-3
+ "solver": {
+ "type": "string",
+ "description": "Solver used by the participant"
},
- "max-iterations": {
+ "dimensionality": {
"type": "integer",
- "default": 50
- },
- "display_standard_values": {
- "type": "boolean",
- "default": false
- },
- "coupling": {
- "type": "string",
- "enum": ["parallel", "serial"],
- "default": "parallel"
+ "description": "Dimensionality of the participants meshes.",
+ "default": 3
}
},
- "required": [ ],
- "optional": [ "display_standard_values", "max-time", "time-window-size", "max-iterations", "coupling" ]
+ "required": [
+ "name",
+ "solver"
+ ]
},
- "acceleration": {
+ "minItems": 1,
+ "uniqueItems": true
+ },
+ "exchanges": {
+ "type": "array",
+ "description": "Defines the data exchanges between participants in the simulation.",
+ "items": {
"type": "object",
"properties": {
- "name": {
+ "from": {
"type": "string",
- "description": "Type of acceleration method",
- "enum": ["constant", "aitken", "IQN-ILS", "IQN-IMVJ"],
- "default": "IQN-ILS"
- },
- "initial-relaxation": {
- "type": "object",
- "properties": {
- "value": {
- "type": "number"
- },
- "enforce": {
- "type": "boolean",
- "default": false
- }
- },
- "additionalProperties": false,
- "required": ["value"],
- "description": "Initial under-relaxation factor"
+ "description": "Name of the source participant sending data"
},
- "preconditioner": {
- "type": "object",
- "description": "Preconditioner configuration",
- "properties": {
- "freeze-after": {
- "type": "integer",
- "description": "Time window after which preconditioner stops updating (-1 = never)",
- "minimum": -1,
- "default": -1
- },
- "type": {
- "type": "string",
- "description": "Type of preconditioner",
- "enum": ["constant", "value", "residual", "residual-sum"]
- }
- },
- "required": [ "freeze-after" ],
- "optional": [ "type" ]
+ "from-patch": {
+ "type": "string",
+ "description": "Specific interface patch or surface on the source participant from which data is sent"
},
- "filter": {
- "type": "object",
- "description": "QR1/2 filter configuration",
- "properties": {
- "limit": {
- "type": "number",
- "description": "Threshold for filtering singular values",
- "exclusiveMinimum": 0,
- "default": 1e-16
- },
- "type": {
- "type": "string",
- "description": "Type of filtering",
- "enum": ["QR1", "QR2"]
- }
- },
- "required": [ "limit" ],
- "optional": [ "type" ]
+ "to": {
+ "type": "string",
+ "description": "Name of the target participant receiving data"
},
- "max-used-iterations": {
- "type": "integer",
- "description": "Maximum number of previous iterations used for IQN methods",
- "minimum": 0
+ "to-patch": {
+ "type": "string",
+ "description": "Specific interface patch or surface on the target participant where data is received"
},
- "time-windows-reused": {
- "type": "integer",
- "description": "Number of past time windows reused for IQN methods",
- "minimum": 0
+ "data": {
+ "type": "string",
+ "description": "Type of data being exchanged (e.g., Force, Displacement, Velocity)"
},
- "imvj-restart-mode": {
- "type": "object",
- "description": "Configuration for IMVJ restart mode",
- "properties": {
- "truncation-threshold": {
- "type": "number",
- "description": "Threshold for truncating singular values during restart",
- "exclusiveMinimum": 0,
- "default": 0.0001
- },
- "chunk-size": {
- "type": "integer",
- "description": "Number of time windows between restarts",
- "minimum": 1,
- "default": 8
- },
- "reused-time-windows-at-restart": {
- "type": "integer",
- "description": "Number of time windows reused after restart",
- "minimum": 0,
- "default": 8
- },
- "type": {
- "type": "string",
- "description": "Type of restart mode",
- "enum": ["no-restart", "RS-0", "RS-LS", "RS-SVD", "RS-SLIDE"],
- "default": "RS-SVD"
- }
- },
- "required": [ "truncation-threshold", "chunk-size", "reused-time-windows-at-restart","type" ],
- "optional": [ ]
+ "data-type": {
+ "type": "string",
+ "description": "Specifies whether the data is a scalar or vector quantity",
+ "enum": [
+ "scalar",
+ "vector"
+ ],
+ "default": "scalar"
},
- "display_standard_values": {
- "type": "boolean",
- "default": false
+ "type": {
+ "type": "string",
+ "description": "Defines the coupling type: 'strong' for tight coupling, 'weak' for loose coupling",
+ "enum": [
+ "strong",
+ "weak"
+ ]
}
},
- "required": [ ],
- "optional": [ "name", "initial-relaxation", "preconditioner", "filter", "max-used-iterations", "time-windows-reused", "display_standard_values", "imvj-restart-mode" ]
+ "required": [
+ "from",
+ "from-patch",
+ "to",
+ "to-patch",
+ "data",
+ "type"
+ ],
+ "optional": [
+ "data-type"
+ ]
},
- "participants": {
- "type": "array",
- "description": "List of participants in the coupling simulation",
- "items": {
- "type": "object",
- "properties": {
- "name": {
- "type": "string",
- "description": "Unique name of the participant"
- },
- "solver": {
- "type": "string",
- "description": "Solver used by the participant"
- },
- "dimensionality": {
- "type": "integer",
- "description": "Dimensionality of the participant's problem",
- "default": 3
- }
- },
- "required": ["name", "solver"]
- },
- "minItems": 1,
- "uniqueItems": true
- },
- "exchanges": {
- "type": "array",
- "description": "Defines the data exchanges between participants in the coupling simulation",
- "items": {
- "type": "object",
- "properties": {
- "from": {
- "type": "string",
- "description": "Name of the source participant sending data"
- },
- "from-patch": {
- "type": "string",
- "description": "Specific interface patch or surface on the source participant from which data is sent"
- },
- "to": {
- "type": "string",
- "description": "Name of the target participant receiving data"
- },
- "to-patch": {
- "type": "string",
- "description": "Specific interface patch or surface on the target participant where data is received"
- },
- "data": {
- "type": "string",
- "description": "Type of data being exchanged (e.g., Force, Displacement, Velocity)",
- "pattern": "^(Force|Displacement|Velocity|Pressure|Temperature|HeatTransfer).*$"
- },
- "data-type": {
- "type": "string",
- "description": "Specifies whether the data is a scalar or vector quantity",
- "enum": ["scalar", "vector"],
- "default": "scalar"
- },
- "type": {
- "type": "string",
- "description": "Defines the coupling type: 'strong' for tight coupling, 'weak' for loose coupling",
- "enum": ["strong", "weak"]
- }
- },
- "required": [
- "from",
- "from-patch",
- "to",
- "to-patch",
- "data",
- "type"
- ],
- "optional": [ "data-type" ]
- }
- }
- },
- "required": [
- "coupling-scheme",
- "participants",
- "exchanges"
- ],
- "optional": [ "acceleration" ],
- "title": "preCICE Topology Configuration",
- "description": "JSON schema defining the topology configuration for precice-generator. Specifies participants, exchanges, and their coupling relationships."
+ "minItems": 1,
+ "uniqueItems": true
+ }
+ },
+ "required": [
+ "participants",
+ "exchanges"
+ ],
+ "title": "preCICE Topology Configuration",
+ "description": "JSON schema defining the topology configuration for precice-case-generate. Specifies participants, exchanges, and their coupling relationships."
}
diff --git a/precicecasegenerate/templates/__init__.py b/precicecasegenerate/templates/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/precicecasegenerate/templates/adapter-config-template.json b/precicecasegenerate/templates/adapter-config-template.json
deleted file mode 100644
index 6afc036..0000000
--- a/precicecasegenerate/templates/adapter-config-template.json
+++ /dev/null
@@ -1,12 +0,0 @@
-{
- "participant_name": null,
- "precice_config_file_name": "../precice-config.xml",
- "interfaces": [
- {
- "mesh_name": null,
- "patches": [],
- "write_data_names": [],
- "read_data_names": []
- }
- ]
-}
diff --git a/precicecasegenerate/templates/clean.sh b/precicecasegenerate/templates/clean.sh
new file mode 100644
index 0000000..2954b31
--- /dev/null
+++ b/precicecasegenerate/templates/clean.sh
@@ -0,0 +1,350 @@
+#!/usr/bin/env bash
+
+# -------------------------------------------------------------------
+# Script Name: clean.sh
+# Description: Recursively deletes/moves files/dirs except:
+# - Global preserved filenames anywhere (run.sh, adapter-config.json...)
+# - Specific root files (README.md, precice-config.xml)
+# Usage: ./clean.sh [--dry-run] [--force]
+# --dry-run : show what would happen, don't remove/move
+# --force : permanently delete unpreserved items AND remove existing backups
+# -------------------------------------------------------------------
+
+# Strict mode:
+# -e: exit on error
+# -u: exit on undefined variable
+# -o pipefail: exit if any command in a pipe fails
+set -euo pipefail
+
+# --- CONFIGURATION ---
+ROOT_DIR="$(pwd)"
+LOG_FILE="cleanup.log"
+BACKUP_DIR="$ROOT_DIR/backup_$(date '+%Y%m%d_%H%M%S')"
+
+# 1. GLOBAL PRESERVES: filenames to keep anywhere in the tree
+GLOBAL_PRESERVE_NAMES=(
+ "run.sh"
+ "adapter-config.json"
+)
+
+# 2. ROOT PRESERVES: filenames to keep only if in ROOT_DIR
+ROOT_PRESERVE_PATHS=(
+ "clean.sh"
+ "README.md"
+ "precice-config.xml"
+ "$LOG_FILE" # always keep the log (will be overwritten)
+)
+
+# --- DEFAULTS ---
+DRY_RUN=0 # No dry-run by default
+FORCE=0 # No permanent removal of files by default
+MOVED_COUNT=0 # Counter to track if we actually backed anything up
+DELETED_COUNT=0 # Counter to track if we actually deleted anything
+
+# --- HELPERS ---
+
+log() {
+ # We use tee -a (append) here so we don't overwrite previous lines *of this run*.
+ # The file is cleared once at the start of the MAIN block.
+ echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE"
+}
+
+# Is filename (basename) a global preserved name?
+is_global_preserved() {
+ local filename="$1"
+ for name in "${GLOBAL_PRESERVE_NAMES[@]}"; do
+ if [[ "$filename" == "$name" ]]; then
+ return 0
+ fi
+ done
+ return 1
+}
+
+# Is a relative path preserved at root?
+is_root_preserved() {
+ local relpath="$1"
+ for p in "${ROOT_PRESERVE_PATHS[@]}"; do
+ if [[ "$relpath" == "$p" ]]; then
+ return 0
+ fi
+ done
+ return 1
+}
+
+# Find out whether directory contains any preserved file *anywhere* under it.
+# Implementation Note: Uses `head -n 1` instead of `-quit` for maximum compatibility
+# across Linux distros (including Alpine/BusyBox) as well as BSD and macOS.
+dir_contains_preserved_content() {
+ local dir="$1"
+ local name
+ local -a find_args=()
+
+ # Build find arguments: ( -name A -o -name B ... )
+ for name in "${GLOBAL_PRESERVE_NAMES[@]}"; do
+ if [ ${#find_args[@]} -eq 0 ]; then
+ find_args+=( -name "$name" )
+ else
+ find_args+=( -o -name "$name" )
+ fi
+ done
+
+ # Check if find returns any match.
+ # piping to head -n 1 causes find to receive SIGPIPE and stop early if a match is found.
+ if [ -n "$(find "$dir" \( "${find_args[@]}" \) -print 2>/dev/null | head -n 1)" ]; then
+ return 0
+ fi
+ return 1
+}
+
+# Ensure backup destination exists and safely move source there preserving relative path.
+# Prevents collisions by making unique names if needed.
+# Args:
+# $1 = absolute src path
+# $2 = rel path relative to ROOT_DIR (used to recreate path inside backup)
+safe_move_to_backup() {
+ local src="$1"
+ local rel="$2"
+
+ # Destination path = $BACKUP_DIR/$rel
+ local dest="$BACKUP_DIR/$rel"
+ local dest_dir
+ dest_dir="$(dirname "$dest")"
+
+ # Normalize: if dirname is ".", use $BACKUP_DIR as target dir
+ if [[ "$dest_dir" == "$BACKUP_DIR/." ]] || [[ "$dest_dir" == "." ]]; then
+ dest_dir="$BACKUP_DIR"
+ fi
+
+ if [ "$DRY_RUN" -eq 0 ]; then
+ mkdir -p "$dest_dir"
+ fi
+
+ # If dest exists, append numeric suffix before extension to avoid overwrite
+ if [ -e "$dest" ]; then
+ local base name ext candidate n
+ base="$(basename "$dest")"
+ name="${base%.*}"
+ ext="${base##*.}"
+ n=1
+
+ # Check if file has an extension
+ if [[ "$ext" == "$base" ]]; then
+ # No extension
+ candidate="$dest_dir/${name}_$n"
+ while [ -e "$candidate" ]; do
+ n=$((n+1))
+ candidate="$dest_dir/${name}_$n"
+ done
+ else
+ # Has extension
+ candidate="$dest_dir/${name}_$n.$ext"
+ while [ -e "$candidate" ]; do
+ n=$((n+1))
+ candidate="$dest_dir/${name}_$n.$ext"
+ done
+ fi
+ dest="$candidate"
+ fi
+
+ if [ "$DRY_RUN" -eq 1 ]; then
+        log "Would move to backup: ${rel}"
+ else
+ mv -- "$src" "$dest"
+ # Increment the counter because we successfully moved a file
+ MOVED_COUNT=$((MOVED_COUNT + 1))
+        log "Moved to backup: ${rel}"
+ fi
+}
+
+# Safe remove (used by --force). Respects --dry-run.
+safe_remove() {
+ local src="$1"
+ local rel="$2"
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would permanently remove: $rel"
+ else
+ rm -rf -- "$src"
+ DELETED_COUNT=$((DELETED_COUNT + 1))
+ log "Permanently removed: $rel"
+ fi
+}
+
+# Remove existing backup_* directories permanently (used when FORCE=1).
+remove_existing_backups_permanently() {
+ shopt -s nullglob
+ local found=0
+ for old in "$ROOT_DIR"/backup_*; do
+ if [ -d "$old" ]; then
+ found=1
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would permanently remove backup: $(basename "$old")"
+ else
+ rm -rf -- "$old"
+ DELETED_COUNT=$((DELETED_COUNT + 1))
+ log "Permanently removed backup: $(basename "$old")"
+ fi
+ fi
+ done
+ shopt -u nullglob
+ if [ "$found" -eq 0 ]; then
+ log "No existing backup directories to remove."
+ fi
+}
+
+# Perform action for a single path (file or directory) that is NOT preserved.
+# Moves to backup preserving tree, or removes permanently when FORCE=1.
+perform_action() {
+ local item="$1"
+    local rel_path="${item#"$ROOT_DIR"/}"
+
+ if [ "$FORCE" -eq 1 ]; then
+ safe_remove "$item" "$rel_path"
+ else
+ safe_move_to_backup "$item" "$rel_path"
+ fi
+}
+
+# --- RECURSIVE CLEANUP ---
+recursive_cleanup() {
+ local current_dir="$1"
+
+ # dotglob: includes hidden files (starting with .)
+ # nullglob: makes the loop not run if no files match (avoids literal string issues)
+ shopt -s dotglob nullglob
+
+ local item rel_path base
+
+ for item in "$current_dir"/*; do
+ # Sanity check: ensure file exists (handles rare race conditions or broken links)
+ [ ! -e "$item" ] && [ ! -L "$item" ] && continue
+
+        rel_path="${item#"$ROOT_DIR"/}"
+ base="$(basename "$item")"
+
+ # Skip . and .. (though glob usually excludes them, safety first)
+ if [[ "$base" == "." || "$base" == ".." ]]; then
+ continue
+ fi
+
+ # --- FILE or SYMLINK ---
+ if [ -f "$item" ] || [ -L "$item" ]; then
+ # 1. Preserve by global name anywhere?
+ if is_global_preserved "$base"; then
+ log "Preserving: $rel_path"
+ continue
+ fi
+
+ # 2. Preserve if at root and matches root-preserve list?
+ if [[ "$current_dir" == "$ROOT_DIR" ]] && is_root_preserved "$base"; then
+ log "Preserving: $rel_path"
+ continue
+ fi
+
+ # Otherwise: perform action
+ perform_action "$item"
+ continue
+ fi
+
+ # --- DIRECTORY ---
+ if [ -d "$item" ]; then
+ # Check if dir contains any preserved content anywhere below
+ if dir_contains_preserved_content "$item"; then
+ recursive_cleanup "$item"
+
+ # After recursion, if directory is empty, remove it (respecting dry-run)
+ if [ -z "$(ls -A "$item")" ]; then
+ if [ "$DRY_RUN" -eq 1 ]; then
+ log "Would remove empty directory: $rel_path"
+ else
+ rmdir -- "$item"
+ DELETED_COUNT=$((DELETED_COUNT + 1))
+ log "Removed empty directory: $rel_path"
+ fi
+ fi
+ else
+ # Directory contains no preserved files anywhere deep: remove/move entire directory
+ perform_action "$item"
+ fi
+ continue
+ fi
+ done
+
+ # Restore defaults for shopt to avoid side effects if function is reused
+ shopt -u dotglob nullglob
+}
+
+# --- MAIN ---
+
+# Parse flags
+while [[ $# -gt 0 ]]; do
+ case $1 in
+ --dry-run) DRY_RUN=1 ;;
+ --force) FORCE=1 ;;
+ *) echo "Unknown parameter: $1"; exit 1 ;;
+ esac
+ shift
+done
+
+# Initialize/Clear the log file for this new run
+: > "$LOG_FILE"
+
+# Confirmation prompt (skip when dry-run)
+if [ "$DRY_RUN" -eq 1 ]; then
+ log "Dry run enabled, nothing will be removed."
+else
+    read -p "This will delete all files except preserved ones. Proceed? [y/N]: " confirm
+ case "$confirm" in
+ [yY][eE][sS]|[yY]) ;;
+ *) log "Cleanup aborted."; exit 0 ;;
+ esac
+fi
+
+if [ "$FORCE" -eq 1 ] && [ "$DRY_RUN" -eq 1 ]; then
+    log "Dry run enabled; ignoring --force."
+    FORCE=0
+fi
+
+log "Starting cleanup..."
+
+
+if [ "$FORCE" -eq 1 ]; then
+ remove_existing_backups_permanently
+fi
+
+recursive_cleanup "$ROOT_DIR"
+
+if [ "$DELETED_COUNT" -eq 1 ]; then
+ FILE_STR="file"
+ DIRECTORY_STR="directory"
+else
+ FILE_STR="files"
+ DIRECTORY_STR="directories"
+fi
+
+OUTPUT_STR=""
+
+# Output if FORCE
+if [ "$FORCE" -eq 1 ]; then
+ OUTPUT_STR="Deleted $DELETED_COUNT $FILE_STR or $DIRECTORY_STR. "
+fi
+
+if [ "$DRY_RUN" -eq 1 ]; then
+ OUTPUT_STR="Dry-run completed successfully."
+else
+ OUTPUT_STR="${OUTPUT_STR}Cleanup completed successfully."
+fi
+
+# Only append the backup message if we actually moved files (MOVED_COUNT > 0)
+if [ "$MOVED_COUNT" -gt 0 ]; then
+ # Correct wording
+ if [ "$MOVED_COUNT" -eq 1 ]; then
+ FILE_STR="file"
+ DIRECTORY_STR="directory"
+ else
+ FILE_STR="files"
+ DIRECTORY_STR="directories"
+ fi
+    OUTPUT_STR="$OUTPUT_STR Moved $MOVED_COUNT $FILE_STR or $DIRECTORY_STR to backup in '$BACKUP_DIR'."
+fi
+
+log "$OUTPUT_STR"
\ No newline at end of file
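`safe_move_to_backup` above avoids clobbering earlier backups by appending a numeric suffix before the extension (`foo.txt` → `foo_1.txt`, `foo_2.txt`, …). The same collision scheme, sketched in Python for clarity (a stand-alone illustration, not code from this repo):

```python
from pathlib import Path
import tempfile

def unique_backup_name(dest: Path) -> Path:
    """Mirror clean.sh: if dest exists, append _N before the extension."""
    if not dest.exists():
        return dest
    stem, suffix = dest.stem, dest.suffix  # suffix is "" for extension-less names
    n = 1
    while True:
        candidate = dest.with_name(f"{stem}_{n}{suffix}")
        if not candidate.exists():
            return candidate
        n += 1

with tempfile.TemporaryDirectory() as tmp:
    taken = Path(tmp) / "cleanup.log"
    taken.touch()
    print(unique_backup_name(taken).name)  # cleanup_1.log
```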
diff --git a/precicecasegenerate/templates/metaConfiguratorSettings.json b/precicecasegenerate/templates/metaConfiguratorSettings.json
deleted file mode 100644
index 1bfe399..0000000
--- a/precicecasegenerate/templates/metaConfiguratorSettings.json
+++ /dev/null
@@ -1,115 +0,0 @@
-{
- "dataFormat": "yaml",
- "toolbarTitle": "preCICE Topology Configurator",
- "hideSchemaEditor": true,
- "hideSettings": true,
- "uiColors": {
- "schemaEditor": "olivedrab",
- "dataEditor": "black",
- "settings": "darkmagenta"
- },
- "codeEditor": {
- "fontSize": 18,
- "tabSize": 2,
- "showFormatSelector": true,
- "xml": {
- "attributeNamePrefix": "_"
- }
- },
- "guiEditor": {
- "maximumDepth": 20,
- "propertySorting": "schemaOrder",
- "hideAddPropertyButton": true
- },
- "schemaDiagram": {
- "editMode": true,
- "vertical": true,
- "showAttributes": true,
- "showEnumValues": true,
- "maxAttributesToShow": 30,
- "maxEnumValuesToShow": 10,
- "moveViewToSelectedElement": false,
- "automaticZoomMaxValue": 1,
- "automaticZoomMinValue": 0.5,
- "mergeAllOfs": true
- },
- "metaSchema": {
- "allowBooleanSchema": false,
- "allowMultipleTypes": false,
- "objectTypesComfort": false,
- "markMoreFieldsAsAdvanced": true,
- "showAdditionalPropertiesButton": false,
- "showJsonLdFields": false
- },
- "panels": {
- "dataEditor": [
- {
- "panelType": "textEditor",
- "mode": "dataEditor",
- "size": 50
- },
- {
- "panelType": "guiEditor",
- "mode": "dataEditor",
- "size": 50
- }
- ],
- "schemaEditor": [
- {
- "panelType": "textEditor",
- "mode": "schemaEditor",
- "size": 33
- },
- {
- "panelType": "schemaDiagram",
- "mode": "schemaEditor",
- "size": 33
- },
- {
- "panelType": "guiEditor",
- "mode": "schemaEditor",
- "size": 40
- }
- ],
- "settings": [
- {
- "panelType": "textEditor",
- "mode": "settings",
- "size": 50
- },
- {
- "panelType": "guiEditor",
- "mode": "settings",
- "size": 50
- }
- ],
- "hidden": [
- "aiPrompts",
- "debug"
- ]
- },
- "rdf": {
- "sparqlEndpointUrl": "https://dbpedia.org/sparql"
- },
- "frontend": {
- "hostname": "http://metaconfigurator.informatik.uni-stuttgart.de"
- },
- "backend": {
- "hostname": "http://metaconfigurator.informatik.uni-stuttgart.de",
- "port": 5000
- },
- "openAi": {
- "model": "gpt-4o-mini",
- "maxTokens": 5000,
- "temperature": 0.3,
- "endpoint": "/chat/completions"
- },
- "settingsVersion": "1.0.0",
- "preferencesSelected": true,
- "aiIntegration": {
- "model": "gpt-4o-mini",
- "maxTokens": 5000,
- "temperature": 0,
- "endpoint": "https://api.openai.com/v1/chat/completions"
- }
-}
diff --git a/precicecasegenerate/templates/run.sh b/precicecasegenerate/templates/run.sh
new file mode 100644
index 0000000..5fdeba8
--- /dev/null
+++ b/precicecasegenerate/templates/run.sh
@@ -0,0 +1,21 @@
+#!/usr/bin/env bash
+
+#
+# run.sh script
+#
+# This is a template: fill in the commands that start your solver.
+#
+# To launch a Python script, for example, add:
+#
+#   python path/to/file.py
+#
+# Example: https://github.com/precice/tutorials/blob/develop/flow-over-heated-plate/solid-dunefem/run.sh
+#
+# Note: The linked example `run.sh` already performs most setup steps (such as creating
+# and activating a virtual environment and installing dependencies), so in many cases
+# adding the launch command above is all that is needed.
+#
+
+set -e # Exit immediately if any command fails
\ No newline at end of file
diff --git a/precicecasegenerate/templates/template_README.md b/precicecasegenerate/templates/template_README.md
deleted file mode 100644
index e2da34a..0000000
--- a/precicecasegenerate/templates/template_README.md
+++ /dev/null
@@ -1,104 +0,0 @@
-# 🚀 Multiphysics Simulation Project
-
-This `README.md` file was auto-generated by the `FileGenerator.py` script, providing a comprehensive guide to your coupled simulation.
-
----
-
-## 📋 Project Overview
-
-This project utilizes **preCICE** (Precise Code Interaction Coupling Environment) for a multiphysics simulation involving:
-
-- **Participants**:
- {PARTICIPANTS_LIST}
-
-- **Solvers**:
- {SOLVERS_LIST}
-
-- **Coupling Strategy**:
- {COUPLING_STRATEGY}
-
----
-
-## 🛠 Prerequisites
-
-Before running the simulation, ensure you have the following installed:
-- preCICE library
-- {SOLVER1_NAME} solver
-- {SOLVER2_NAME} solver
-- Required dependencies for each solver
-
----
-
-## 🏃♂️ Running the Simulation
-
-### Quick Start
-
-```bash
-# Navigate to the `_generated` folder
-cd _generated/
-
-# Make the run script executable
-chmod +x run.sh
-
-# Execute the simulation
-./run.sh
-```
-
-### Advanced Execution
-
-For more control or debugging:
-- Check `run.sh` for specific command-line arguments
-- Modify solver-specific parameters in `adapter-config.json`
-
----
-
-## 🔍 Simulation Configuration
-
-- **preCICE Configuration**: `precice-config.xml`
- - Defines coupling interface and communication strategy
- - Modify with caution, refer to preCICE documentation
-
-- **Adapter Configuration**: `{PARTICIPANT_NAME}/adapter-config.json`
- - Solver-specific coupling parameters
- - Adjust solver input/output mappings here
-
----
-
-## 🧹 Cleaning Simulation Artifacts
-
-```bash
-# Make the clean script executable
-chmod +x clean.sh
-
-# Remove generated files and reset workspace
-./clean.sh
-```
-
-**Warning**: This will remove all generated files except preserved ones.
-
----
-
-## 📚 Additional Resources
-
-- 🔗 [preCICE Tutorials](https://precice.org/tutorials.html)
-- 🔗 [preCICE Documentation](https://precice.org/docs.html)
-- 🔗 Solver-specific documentation:
-[Solvers Links and Names]
-
----
-
-## 🤝 Troubleshooting
-
-Common issues and solutions:
-- Ensure all solvers are compatible with preCICE version
-- Check network/communication settings
-- Verify adapter configuration mappings
-
-For specific problems, consult:
-- Solver documentation
-- preCICE community forums
-- Project-specific documentation
-
----
-
-*Generated by FileGenerator.py - Simplifying Multiphysics Simulation Workflows*
diff --git a/precicecasegenerate/templates/template_clean.sh b/precicecasegenerate/templates/template_clean.sh
deleted file mode 100644
index de561cc..0000000
--- a/precicecasegenerate/templates/template_clean.sh
+++ /dev/null
@@ -1,199 +0,0 @@
-#!/bin/bash
-
-# -------------------------------------------------------------------
-# Script Name: clean.sh
-# Description: Deletes all files and directories in the current directory
-# except for the hardcoded preserved files.
-# Preserved files:
-# - clean.sh
-# - README.md
-# - precice-config.xml
-# - *-*/adapter-config.json
-# - *-*/run.sh
-# Usage: ./clean.sh [--dry-run]
-# -------------------------------------------------------------------
-
-# Exit immediately if a command exits with a non-zero status
-set -e
-
-# Define the root directory as the current directory
-ROOT_DIR="$(pwd)"
-
-# Define the preserved files with their relative paths from ROOT_DIR
-PRESERVE_FILES=(
- "clean.sh"
- "README.md"
- "precice-config.xml"
- "*-*/adapter-config.json"
- "*-*/run.sh"
-)
-
-# Define backup directory (optional)
-BACKUP_DIR="$ROOT_DIR/backup_$(date '+%Y%m%d_%H%M%S')"
-
-# Default behavior is to perform actual deletion
-DRY_RUN=0
-
-# Define log file
-LOG_FILE="cleanup.log"
-
-# Function to display a message
-log() {
- echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE"
-}
-
-# Function to check if a relative path is in the preserved list
-is_preserved() {
- local rel_path="$1"
- for preserve in "${PRESERVE_FILES[@]}"; do
- if [ "$rel_path" == "$preserve" ]; then
- return 0 # true
- fi
- done
- return 1 # false
-}
-
-# Function to delete or backup unpreserved files and directories
-cleanup() {
- log "Starting cleanup in directory: $ROOT_DIR"
-
- # Enable dotglob to include hidden files and directories
- shopt -s dotglob
-
- # Iterate over all items in the root directory, including hidden ones
- for item in "$ROOT_DIR"/* "$ROOT_DIR"/.*; do
- # Get the relative path from ROOT_DIR
- rel_path="${item#$ROOT_DIR/}"
-
- # Handle the case when item is ROOT_DIR itself
- if [ "$rel_path" == "$ROOT_DIR" ]; then
- continue
- fi
-
- # Skip '.' and '..'
- if [ "$rel_path" == "." ] || [ "$rel_path" == ".." ]; then
- continue
- fi
-
- # Check if the item is in the preserved list
- if is_preserved "$rel_path"; then
- log "Preserving: $rel_path"
- continue
- fi
-
- # Check if the item is a preserved directory (e.g., 'config')
- PRESERVED_DIRS=()
- for preserve in "${PRESERVE_FILES[@]}"; do
- dir=$(dirname "$preserve")
- if [ "$dir" != "." ] && [[ ! " ${PRESERVED_DIRS[@]} " =~ " ${dir} " ]]; then
- PRESERVED_DIRS+=("$dir")
- fi
- done
-
- preserve_dir=false
- for dir in "${PRESERVED_DIRS[@]}"; do
- if [[ "$rel_path" == "$dir" && -d "$item" ]]; then
- preserve_dir=true
- break
- fi
- done
-
- if [ "$preserve_dir" = true ]; then
- log "Preserving directory: $rel_path"
-
- # Iterate over items inside the preserved directory
- for subitem in "$item"/* "$item"/.*; do
- # Get the relative path of the subitem
- sub_rel_path="${subitem#$ROOT_DIR/}"
-
- # Skip '.' and '..' inside the directory
- sub_basename="$(basename "$subitem")"
- if [ "$sub_basename" == "." ] || [ "$sub_basename" == ".." ]; then
- continue
- fi
-
- # Check if the subitem is in the preserved list
- if is_preserved "$sub_rel_path"; then
- log "Preserving: $sub_rel_path"
- continue
- fi
-
- # Decide to delete or backup
- if [ "$DRY_RUN" -eq 1 ]; then
- log "Would delete file: $sub_rel_path"
- else
- # Create backup directory if not already
- mkdir -p "$BACKUP_DIR"
-
- if [ -f "$subitem" ] || [ -L "$subitem" ]; then
- mv "$subitem" "$BACKUP_DIR/"
- log "Moved file to backup: $sub_rel_path"
- elif [ -d "$subitem" ]; then
- mv "$subitem" "$BACKUP_DIR/"
- log "Moved directory to backup: $sub_rel_path"
- fi
- fi
- done
- continue # Move to the next item in the root directory
- fi
-
- # If not preserved and not a preserved directory, delete or backup the item
- if [ -f "$item" ] || [ -L "$item" ]; then
- if [ "$DRY_RUN" -eq 1 ]; then
- log "Would delete file: $rel_path"
- else
- # Create backup directory if not already
- mkdir -p "$BACKUP_DIR"
-
- mv "$item" "$BACKUP_DIR/"
- log "Moved file to backup: $rel_path"
- fi
- elif [ -d "$item" ]; then
- if [ "$DRY_RUN" -eq 1 ]; then
- log "Would delete directory: $rel_path"
- else
- # Create backup directory if not already
- mkdir -p "$BACKUP_DIR"
-
- mv "$item" "$BACKUP_DIR/"
- log "Moved directory to backup: $rel_path"
- fi
- fi
- done
-
- # Disable dotglob after processing
- shopt -u dotglob
-
- if [ "$DRY_RUN" -eq 1 ]; then
- log "Dry run completed. No files were deleted or moved."
- else
- log "Cleanup completed successfully. Deleted files are backed up in '$BACKUP_DIR'."
- fi
-}
-
-# Parse optional flags
-while [[ "$#" -gt 0 ]]; do
- case $1 in
- --dry-run) DRY_RUN=1 ;;
- *) echo "Unknown parameter passed: $1"; exit 1 ;;
- esac
- shift
-done
-
-# Safety: Prompt the user before proceeding
-if [ "$DRY_RUN" -eq 1 ]; then
- log "Dry run mode enabled. No files will be deleted or moved."
-else
- read -p "This will delete all files and directories except the preserved ones. Are you sure you want to proceed? [y/N]: " confirm
- case "$confirm" in
- [yY][eE][sS]|[yY])
- ;;
- *)
- log "Cleanup aborted by user."
- exit 0
- ;;
- esac
-fi
-
-# Perform cleanup
-cleanup
diff --git a/pyproject.toml b/pyproject.toml
index 07cc202..8bb7eeb 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,58 +1,50 @@
[build-system]
-requires = ["setuptools>=61.0", "wheel", "setuptools-git-versioning"]
+requires = ["setuptools>=77", "wheel", "setuptools-git-versioning"]
build-backend = "setuptools.build_meta"
[project]
name = "precice-case-generate"
-dynamic = [ "version" ]
-description = "Generates File and Folder Structure, including all of the necessary files to quickly kickstart a simulation"
-
+dynamic = ["version"]
+description = "Generate files and folder structure for a coupled simulation with preCICE."
readme = "README.md"
license = "MIT"
license-files = ["LICENSE"]
authors = [
- { name = "VanLaareN", email = "vanlaren@example.com" },
- { name = "Toddelismyname", email = "116207910+Toddelismyname@users.noreply.github.com" }
+ { name = "Orlando", email = "orlac2700@gmail.com" },
]
-requires-python = ">= 3.9"
+requires-python = ">= 3.10"
dependencies = [
- "attrs>=25.3",
- "jsonschema>=4.23",
- "jsonschema-specifications>=2024.10",
- "lxml>=5.3",
- "ruamel_yaml",
- "referencing>=0.36",
- "rpds-py>=0.24",
- "termcolor>=3",
- "typing_extensions>=4.13",
+ "precice-config-graph @ git+https://github.com/precice/config-graph.git@refs/pull/3/head",
+ "precice-adapter-schema",
+ "ruamel.yaml",
+ "jsonschema",
+ "colored",
+]
+
+[project.optional-dependencies]
+dev = [
+ "pytest",
+ "precice-config-check @ git+https://github.com/precice/config-check.git@refs/pull/3/head"
]
[project.scripts]
precice-case-generate = "precicecasegenerate.cli:main"
[project.urls]
-Repository = "https://github.com/precice-forschungsprojekt/precice-generator"
-Issues = "https://github.com/precice-forschungsprojekt/precice-generator/issues"
-Documentation = "https://github.com/precice-forschungsprojekt/precice-generator/blob/main/README.md"
-
-[tool.setuptools]
-packages = [
-"precicecasegenerate",
-"precicecasegenerate.schemas",
-"precicecasegenerate.templates",
-"precicecasegenerate.generation_utils",
-"precicecasegenerate.controller_utils.myutils",
-"precicecasegenerate.controller_utils.precice_struct",
-"precicecasegenerate.controller_utils.ui_struct"
-]
+Repository = "https://github.com/precice/case-generate"
+Documentation = "https://github.com/precice/case-generate/blob/main/README.md"
+
+[tool.setuptools.packages.find]
+where = ["."]
+include = ["precicecasegenerate*"]
[tool.setuptools-git-versioning]
enabled = true
[tool.setuptools.package-data]
precicecasegenerate = [
-"templates/*",
-"schemas/*"
+ "templates/*",
+ "schemas/*"
]
diff --git a/tests/coupling_scheme_type/coupling_scheme_test.py b/tests/coupling_scheme_type/coupling_scheme_test.py
new file mode 100644
index 0000000..7f55a81
--- /dev/null
+++ b/tests/coupling_scheme_type/coupling_scheme_test.py
@@ -0,0 +1,64 @@
+"""
+Test that coupling-schemes are created correctly according to the topology.
+"""
+
+from pathlib import Path
+from precice_config_graph.graph import operations
+
+from preciceconfigcheck.cli import runCheck
+
+from precicecasegenerate.cli import generate_case
+
+# This directory is the same for all tests in this file.
+test_directory: Path = Path(__file__).parent
+
+
+def test_explicit_coupling_scheme():
+ """
+ Test that an explicit coupling-scheme is created.
+ """
+ case_directory: Path = test_directory / "explicit_coupling"
+ input_file_two_participants: Path = case_directory / "two-participants.yaml"
+
+ generate_case(input_file_two_participants, case_directory / "_generated")
+ expected: Path = case_directory / "precice-config_two-participants.xml"
+ actual: Path = case_directory / "_generated/precice-config.xml"
+ assert operations.check_config_equivalence(expected, actual, ignore_names=True), "Configs are not equivalent up to naming."
+ assert runCheck(actual, True) == 0, "The config failed to validate."
+
+ input_file_three_participants: Path = case_directory / "three-participants.yaml"
+ # The previous files are overwritten
+ generate_case(input_file_three_participants, case_directory / "_generated")
+
+ expected: Path = case_directory / "precice-config_three-participants.xml"
+ actual: Path = case_directory / "_generated/precice-config.xml"
+ assert operations.check_config_equivalence(expected, actual, ignore_names=True), "Configs are not equivalent up to naming."
+ assert runCheck(actual, True) == 0, "The config failed to validate."
+
+def test_implicit_coupling_scheme():
+ """
+ Test that an implicit coupling-scheme is created.
+ """
+ case_directory: Path = test_directory / "implicit_coupling"
+ input_file: Path = case_directory / "topology.yaml"
+
+ generate_case(input_file, case_directory / "_generated")
+
+ expected: Path = case_directory / "precice-config.xml"
+ actual: Path = case_directory / "_generated/precice-config.xml"
+ assert operations.check_config_equivalence(expected, actual, ignore_names=True), "Configs are not equivalent up to naming."
+ assert runCheck(actual, True) == 0, "The config failed to validate."
+
+def test_multi_coupling_scheme():
+ """
+ Test that a multi-coupling scheme is created.
+ """
+ case_directory: Path = test_directory / "multi_coupling"
+ input_file: Path = case_directory / "topology.yaml"
+
+ generate_case(input_file, case_directory / "_generated")
+
+ expected: Path = case_directory / "precice-config.xml"
+ actual: Path = case_directory / "_generated/precice-config.xml"
+ assert operations.check_config_equivalence(expected, actual, ignore_names=True), "Configs are not equivalent up to naming."
+ assert runCheck(actual, True) == 0, "The config failed to validate."
\ No newline at end of file
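Each test above repeats the same three-step pattern: generate into `_generated`, compare against a reference config, validate. With the step functions injected as parameters, that pattern can be factored into one helper (a refactoring sketch under assumed names, not code from the repo — in the real tests the steps would be `generate_case`, `operations.check_config_equivalence`, and `runCheck`):

```python
from pathlib import Path

def check_case(generate, equivalent, validate, case_dir: Path,
               topology: str = "topology.yaml",
               reference: str = "precice-config.xml"):
    """Generate a case, then assert equivalence with a reference and validity."""
    generated = case_dir / "_generated" / "precice-config.xml"
    generate(case_dir / topology, case_dir / "_generated")
    assert equivalent(case_dir / reference, generated), "Configs are not equivalent."
    assert validate(generated), "The config failed to validate."

# Stub step functions stand in for the real generator and checkers.
calls = []
check_case(
    generate=lambda src, dst: calls.append(("generate", src.name, dst.name)),
    equivalent=lambda expected, actual: True,
    validate=lambda config: True,
    case_dir=Path("implicit_coupling"),
)
print(calls)  # [('generate', 'topology.yaml', '_generated')]
```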
diff --git a/tests/coupling_scheme_type/explicit_coupling/precice-config_three-participants.xml b/tests/coupling_scheme_type/explicit_coupling/precice-config_three-participants.xml
new file mode 100644
index 0000000..a9ec843
--- /dev/null
+++ b/tests/coupling_scheme_type/explicit_coupling/precice-config_three-participants.xml
@@ -0,0 +1,70 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/tests/coupling_scheme_type/explicit_coupling/precice-config_two-participants.xml b/tests/coupling_scheme_type/explicit_coupling/precice-config_two-participants.xml
new file mode 100644
index 0000000..7479be3
--- /dev/null
+++ b/tests/coupling_scheme_type/explicit_coupling/precice-config_two-participants.xml
@@ -0,0 +1,57 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/tests/coupling_scheme_type/explicit_coupling/three-participants.yaml b/tests/coupling_scheme_type/explicit_coupling/three-participants.yaml
new file mode 100644
index 0000000..351c40e
--- /dev/null
+++ b/tests/coupling_scheme_type/explicit_coupling/three-participants.yaml
@@ -0,0 +1,25 @@
+participants:
+ - name: A
+ solver: ASolver
+ dimensionality: 2
+ - name: B
+ solver: BSolver
+ dimensionality: 2
+ - name: C
+ solver: CSolver
+ dimensionality: 2
+exchanges:
+ - from: A
+ to: B
+ data: Color
+ type: weak
+ data-type: scalar
+ from-patch: interface
+ to-patch: interface
+ - from: A
+ to: C
+ data: Color
+ type: strong
+ data-type: scalar
+ from-patch: interface
+ to-patch: interface
diff --git a/tests/coupling_scheme_type/explicit_coupling/two-participants.yaml b/tests/coupling_scheme_type/explicit_coupling/two-participants.yaml
new file mode 100644
index 0000000..b046ed7
--- /dev/null
+++ b/tests/coupling_scheme_type/explicit_coupling/two-participants.yaml
@@ -0,0 +1,22 @@
+participants:
+ - name: A
+ solver: ASolver
+ dimensionality: 2
+ - name: B
+ solver: BSolver
+ dimensionality: 2
+exchanges:
+ - from: A
+ to: B
+ data: Color
+ type: weak
+ data-type: scalar
+ from-patch: interface
+ to-patch: interface
+ - from: B
+ to: A
+ data: Color
+ type: strong
+ data-type: scalar
+ from-patch: interface
+ to-patch: interface
diff --git a/tests/coupling_scheme_type/implicit_coupling/precice-config.xml b/tests/coupling_scheme_type/implicit_coupling/precice-config.xml
new file mode 100644
index 0000000..d290c4a
--- /dev/null
+++ b/tests/coupling_scheme_type/implicit_coupling/precice-config.xml
@@ -0,0 +1,63 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/tests/coupling_scheme_type/implicit_coupling/topology.yaml b/tests/coupling_scheme_type/implicit_coupling/topology.yaml
new file mode 100644
index 0000000..e1a5ffd
--- /dev/null
+++ b/tests/coupling_scheme_type/implicit_coupling/topology.yaml
@@ -0,0 +1,22 @@
+participants:
+ - name: A
+ solver: ASolver
+ dimensionality: 2
+ - name: B
+ solver: BSolver
+ dimensionality: 2
+exchanges:
+ - from: A
+ to: B
+ data: Color
+ type: strong
+ data-type: scalar
+ from-patch: interface
+ to-patch: interface
+ - from: B
+ to: A
+ data: Color
+ type: strong
+ data-type: scalar
+ from-patch: interface
+ to-patch: interface
diff --git a/tests/coupling_scheme_type/multi_coupling/precice-config.xml b/tests/coupling_scheme_type/multi_coupling/precice-config.xml
new file mode 100644
index 0000000..9cb54ce
--- /dev/null
+++ b/tests/coupling_scheme_type/multi_coupling/precice-config.xml
@@ -0,0 +1,103 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/tests/coupling_scheme_type/multi_coupling/topology.yaml b/tests/coupling_scheme_type/multi_coupling/topology.yaml
new file mode 100644
index 0000000..800a8f2
--- /dev/null
+++ b/tests/coupling_scheme_type/multi_coupling/topology.yaml
@@ -0,0 +1,39 @@
+participants:
+ - name: A
+ solver: ASolver
+ dimensionality: 2
+ - name: B
+ solver: BSolver
+ dimensionality: 2
+ - name: C
+ solver: CSolver
+ dimensionality: 2
+exchanges:
+ - from: A
+ to: B
+ data: Color
+ type: strong
+ data-type: scalar
+ from-patch: interface
+ to-patch: interface
+ - from: B
+ to: A
+ data: Color
+ type: strong
+ data-type: scalar
+ from-patch: interface
+ to-patch: interface
+ - from: C
+ to: A
+ data: Color
+ type: strong
+ data-type: scalar
+ from-patch: interface
+ to-patch: interface
+ - from: A
+ to: C
+ data: Color
+ type: strong
+ data-type: scalar
+ from-patch: interface
+ to-patch: interface
\ No newline at end of file
diff --git a/tests/data_type/both_types_both_directions/precice-config.xml b/tests/data_type/both_types_both_directions/precice-config.xml
new file mode 100644
index 0000000..2603294
--- /dev/null
+++ b/tests/data_type/both_types_both_directions/precice-config.xml
@@ -0,0 +1,100 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/tests/data_type/both_types_both_directions/topology.yaml b/tests/data_type/both_types_both_directions/topology.yaml
new file mode 100644
index 0000000..bd32bad
--- /dev/null
+++ b/tests/data_type/both_types_both_directions/topology.yaml
@@ -0,0 +1,53 @@
+participants:
+ - name: A
+ solver: ASolver
+ dimensionality: 2
+ - name: B
+ solver: BSolver
+ dimensionality: 2
+ - name: C
+ solver: CSolver
+ dimensionality: 2
+exchanges:
+ - from: A
+ to: B
+ data: Color
+ type: weak
+ data-type: scalar
+ from-patch: interface
+ to-patch: interface
+ - from: B
+ to: A
+ data: Color
+ type: weak
+ data-type: scalar
+ from-patch: interface
+ to-patch: interface
+ - from: A
+ to: B
+ data: Color
+ type: weak
+ data-type: vector
+ from-patch: interface
+ to-patch: interface
+ - from: B
+ to: A
+ data: Color
+ type: weak
+ data-type: vector
+ from-patch: interface
+ to-patch: interface
+ - from: B
+ to: C
+ data: Color
+ type: weak
+ data-type: scalar
+ from-patch: interface
+ to-patch: interface
+ - from: B
+ to: C
+ data: Color
+ type: weak
+ data-type: vector
+ from-patch: interface
+ to-patch: interface
diff --git a/tests/data_type/data_type_test.py b/tests/data_type/data_type_test.py
new file mode 100644
index 0000000..f178f3b
--- /dev/null
+++ b/tests/data_type/data_type_test.py
@@ -0,0 +1,103 @@
+"""
+Test that the data-type tag of exchanges works as intended.
+This is done by checking whether the generated config file is equivalent to an expected config file.
+Additionally, the config is validated using precice-config-check.
+"""
+
+from pathlib import Path
+from precice_config_graph.graph import operations
+from preciceconfigcheck.cli import runCheck
+
+from precicecasegenerate.cli import generate_case
+
+# This directory is the same for all tests in this file.
+test_directory: Path = Path(__file__).parent
+
+
+def test_same_type_one_direction():
+ """
+ This is the simplest case.
+ There is only one exchange and thus only one data-type.
+ Nothing should go wrong here.
+ """
+ case_directory: Path = test_directory / "same_type_one_direction"
+ input_file: Path = case_directory / "topology.yaml"
+
+ generate_case(input_file, case_directory / "_generated")
+
+ expected: Path = case_directory / "precice-config.xml"
+ actual: Path = case_directory / "_generated/precice-config.xml"
+ assert operations.check_config_equivalence(expected, actual), "Configs are not equivalent."
+ assert runCheck(actual, True) == 0, "The config failed to validate."
+
+
+def test_different_type_one_direction():
+ """
+ This case is more complex.
+ There are two exchanges with different data-types.
+ Thus, two different data nodes should be created, with names ending in "-Vector" and "-Scalar" respectively.
+ """
+ case_directory: Path = test_directory / "different_type_one_direction"
+ input_file: Path = case_directory / "topology.yaml"
+
+ generate_case(input_file, case_directory / "_generated")
+
+ expected: Path = case_directory / "precice-config.xml"
+ actual: Path = case_directory / "_generated/precice-config.xml"
+    assert operations.check_config_equivalence(expected, actual), "Configs are not equivalent."
+ assert runCheck(actual, True) == 0, "The config failed to validate."
+
+
+def test_same_type_both_directions():
+ """
+ This case is more complex.
+ There are two exchanges, one per direction, with the same data-type.
+ This should result in two data-nodes, one with a "uniquified" name.
+ """
+ case_directory: Path = test_directory / "same_type_both_directions"
+ input_file: Path = case_directory / "topology.yaml"
+
+ generate_case(input_file, case_directory / "_generated")
+
+ expected: Path = case_directory / "precice-config.xml"
+ actual: Path = case_directory / "_generated/precice-config.xml"
+ assert not operations.check_config_equivalence(expected, actual), "Configs are equivalent with different names."
+    assert operations.check_config_equivalence(expected, actual, ignore_names=True), "Configs are not equivalent up to naming."
+ assert runCheck(actual, True) == 0, "The config failed to validate."
+
+
+def test_different_type_both_directions():
+ """
+ This case is more complex.
+ There are two exchanges, one per direction, with different data-types.
+ This should result in two data-nodes, with names ending in "-Vector" and "-Scalar" respectively.
+ """
+ case_directory: Path = test_directory / "different_type_both_directions"
+ input_file: Path = case_directory / "topology.yaml"
+
+ generate_case(input_file, case_directory / "_generated")
+
+ expected: Path = case_directory / "precice-config.xml"
+ actual: Path = case_directory / "_generated/precice-config.xml"
+    assert operations.check_config_equivalence(expected, actual), "Configs are not equivalent."
+ assert runCheck(actual, True) == 0, "The config failed to validate."
+
+
+def test_both_types_both_directions():
+ """
+ This case is the most complex.
+ There are six exchanges and now three participants, A, B and C.
+ A and B share four exchanges, which should result in four data nodes;
+ and B and C share two exchanges, which should reuse the same data nodes.
+ This should result in four data-nodes, two with "uniquified" names.
+ """
+ case_directory: Path = test_directory / "both_types_both_directions"
+ input_file: Path = case_directory / "topology.yaml"
+
+ generate_case(input_file, case_directory / "_generated")
+
+ expected: Path = case_directory / "precice-config.xml"
+ actual: Path = case_directory / "_generated/precice-config.xml"
+ assert not operations.check_config_equivalence(expected, actual), "Configs are equivalent with different names."
+ assert operations.check_config_equivalence(expected, actual, ignore_names=True), "Configs are not equivalent up to naming."
+ assert runCheck(actual, True) == 0, "The config failed to validate."
diff --git a/tests/data_type/different_type_both_directions/precice-config.xml b/tests/data_type/different_type_both_directions/precice-config.xml
new file mode 100644
index 0000000..338b7ae
--- /dev/null
+++ b/tests/data_type/different_type_both_directions/precice-config.xml
@@ -0,0 +1,57 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/tests/data_type/different_type_both_directions/topology.yaml b/tests/data_type/different_type_both_directions/topology.yaml
new file mode 100644
index 0000000..9ecd8be
--- /dev/null
+++ b/tests/data_type/different_type_both_directions/topology.yaml
@@ -0,0 +1,22 @@
+participants:
+ - name: A
+ solver: ASolver
+ dimensionality: 2
+ - name: B
+ solver: BSolver
+ dimensionality: 2
+exchanges:
+ - from: A
+ to: B
+ data: Color
+ type: weak
+ data-type: scalar
+ from-patch: interface
+ to-patch: interface
+ - from: B
+ to: A
+ data: Color
+ type: weak
+ data-type: vector
+ from-patch: interface
+ to-patch: interface
diff --git a/tests/data_type/different_type_one_direction/precice-config.xml b/tests/data_type/different_type_one_direction/precice-config.xml
new file mode 100644
index 0000000..2a712ba
--- /dev/null
+++ b/tests/data_type/different_type_one_direction/precice-config.xml
@@ -0,0 +1,50 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/tests/data_type/different_type_one_direction/topology.yaml b/tests/data_type/different_type_one_direction/topology.yaml
new file mode 100644
index 0000000..734ecef
--- /dev/null
+++ b/tests/data_type/different_type_one_direction/topology.yaml
@@ -0,0 +1,22 @@
+participants:
+ - name: A
+ solver: ASolver
+ dimensionality: 2
+ - name: B
+ solver: BSolver
+ dimensionality: 2
+exchanges:
+ - from: A
+ to: B
+ data: Color
+ type: weak
+ data-type: vector
+ from-patch: interface
+ to-patch: interface
+ - from: A
+ to: B
+ data: Color
+ type: weak
+ data-type: scalar
+ from-patch: interface
+ to-patch: interface
\ No newline at end of file
diff --git a/tests/data_type/same_type_both_directions/precice-config.xml b/tests/data_type/same_type_both_directions/precice-config.xml
new file mode 100644
index 0000000..cb65a82
--- /dev/null
+++ b/tests/data_type/same_type_both_directions/precice-config.xml
@@ -0,0 +1,57 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/tests/data_type/same_type_both_directions/topology.yaml b/tests/data_type/same_type_both_directions/topology.yaml
new file mode 100644
index 0000000..9c2ed59
--- /dev/null
+++ b/tests/data_type/same_type_both_directions/topology.yaml
@@ -0,0 +1,22 @@
+participants:
+ - name: A
+ solver: ASolver
+ dimensionality: 2
+ - name: B
+ solver: BSolver
+ dimensionality: 2
+exchanges:
+ - from: A
+ to: B
+ data: Color
+ type: weak
+ data-type: scalar
+ from-patch: interface
+ to-patch: interface
+ - from: B
+ to: A
+ data: Color
+ type: weak
+ data-type: scalar
+ from-patch: interface
+ to-patch: interface
diff --git a/tests/data_type/same_type_one_direction/precice-config.xml b/tests/data_type/same_type_one_direction/precice-config.xml
new file mode 100644
index 0000000..117d4bb
--- /dev/null
+++ b/tests/data_type/same_type_one_direction/precice-config.xml
@@ -0,0 +1,44 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/tests/data_type/same_type_one_direction/topology.yaml b/tests/data_type/same_type_one_direction/topology.yaml
new file mode 100644
index 0000000..c1131cb
--- /dev/null
+++ b/tests/data_type/same_type_one_direction/topology.yaml
@@ -0,0 +1,15 @@
+participants:
+ - name: A
+ solver: ASolver
+ dimensionality: 2
+ - name: B
+ solver: BSolver
+ dimensionality: 2
+exchanges:
+ - from: A
+ to: B
+ data: Color
+ type: weak
+ data-type: scalar
+ from-patch: interface
+ to-patch: interface
diff --git a/tests/exchange_with_non_unique_patches/precice-config.xml b/tests/exchange_with_non_unique_patches/precice-config.xml
new file mode 100644
index 0000000..a727823
--- /dev/null
+++ b/tests/exchange_with_non_unique_patches/precice-config.xml
@@ -0,0 +1,56 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/tests/exchange_with_non_unique_patches/test_data_renaming.py b/tests/exchange_with_non_unique_patches/test_data_renaming.py
new file mode 100644
index 0000000..a492a15
--- /dev/null
+++ b/tests/exchange_with_non_unique_patches/test_data_renaming.py
@@ -0,0 +1,28 @@
+"""
+Test that exchanges between participants correctly lead to data renaming if only the specified patches are unique.
+"""
+
+from pathlib import Path
+from precice_config_graph.graph import operations
+from preciceconfigcheck.cli import runCheck
+
+from precicecasegenerate.cli import generate_case
+
+# This directory is the same for all tests in this file.
+test_directory: Path = Path(__file__).parent
+
+
+def test_data_renaming():
+ """
+ Test that the data is renamed correctly when exchanges are not unique if patches are disregarded;
+ i.e., if from, to, type, data, data-type, is not unique in the topology,
+ then the non-unique exchanges need their data renamed.
+ """
+ case_directory: Path = test_directory
+ input_file: Path = case_directory / "topology.yaml"
+
+ generate_case(input_file, case_directory / "_generated")
+
+ expected: Path = case_directory / "precice-config.xml"
+ actual: Path = case_directory / "_generated/precice-config.xml"
+ assert not operations.check_config_equivalence(expected, actual), "Configs are equivalent with different names."
+    assert operations.check_config_equivalence(expected, actual, ignore_names=True), "Configs are not equivalent up to naming."
+ assert runCheck(actual, True) == 0, "The config failed to validate."
\ No newline at end of file
diff --git a/tests/exchange_with_non_unique_patches/topology.yaml b/tests/exchange_with_non_unique_patches/topology.yaml
new file mode 100644
index 0000000..a32ee2f
--- /dev/null
+++ b/tests/exchange_with_non_unique_patches/topology.yaml
@@ -0,0 +1,29 @@
+participants:
+ - name: Alligator
+ solver: ASolver
+ dimensionality: 3
+ - name: Crocodile
+ solver: BSolver
+ dimensionality: 3
+exchanges:
+ - from: Alligator
+ to: Crocodile
+ from-patch: Face
+ to-patch: Face
+ type: weak
+ data: Information
+ data-type: scalar
+ - from: Alligator
+ to: Crocodile
+ from-patch: Body
+ to-patch: Body
+ type: weak
+ data: Information
+ data-type: scalar
+ - from: Alligator
+ to: Crocodile
+ from-patch: Eye
+ to-patch: Eye
+ type: weak
+ data: Information
+ data-type: scalar
\ No newline at end of file
diff --git a/tests/miscellaneous_config_generation/misc1/topology.yaml b/tests/miscellaneous_config_generation/misc1/topology.yaml
new file mode 100644
index 0000000..0b2df99
--- /dev/null
+++ b/tests/miscellaneous_config_generation/misc1/topology.yaml
@@ -0,0 +1,25 @@
+participants:
+ - name: Fluid
+ solver: OpenFOAM
+ dimensionality: 3
+
+ - name: Solid
+ solver: CalculiX
+ dimensionality: 3
+
+exchanges:
+ - from: Fluid
+ from-patch: interface
+ to: Solid
+ to-patch: interface
+ data: Forces
+ data-type: vector
+ type: strong
+
+ - from: Solid
+ from-patch: interface
+ to: Fluid
+ to-patch: interface
+ data: Displacement
+ data-type: vector
+ type: strong
\ No newline at end of file
diff --git a/tests/miscellaneous_config_generation/misc2/topology.yaml b/tests/miscellaneous_config_generation/misc2/topology.yaml
new file mode 100644
index 0000000..2d456cd
--- /dev/null
+++ b/tests/miscellaneous_config_generation/misc2/topology.yaml
@@ -0,0 +1,29 @@
+participants:
+ - name: Heater
+ solver: Python
+ dimensionality: 2
+
+ - name: Wall
+ solver: C++
+ dimensionality: 2
+
+ - name: Fluid
+ solver: Java
+ dimensionality: 3
+
+exchanges:
+ - from: Heater
+ from-patch: top-surface
+ to: Wall
+ to-patch: bottom-surface
+ data: Heat-Flux
+ data-type: scalar
+ type: strong
+
+ - from: Wall
+ from-patch: inner-surface
+ to: Fluid
+ to-patch: wall
+ data: Temperature
+ data-type: scalar
+ type: strong
\ No newline at end of file
diff --git a/tests/miscellaneous_config_generation/misc3/topology.yaml b/tests/miscellaneous_config_generation/misc3/topology.yaml
new file mode 100644
index 0000000..9961329
--- /dev/null
+++ b/tests/miscellaneous_config_generation/misc3/topology.yaml
@@ -0,0 +1,25 @@
+participants:
+ - name: A
+ solver: Solver
+ dimensionality: 3
+
+ - name: B
+ solver: Solver
+ dimensionality: 3
+
+exchanges:
+ - from: A
+ from-patch: interface
+ to: B
+ to-patch: interface
+ data: Mighty-Force
+ data-type: vector
+ type: weak
+
+ - from: B
+ from-patch: interface
+ to: A
+ to-patch: interface
+ data: Force
+ data-type: vector
+ type: weak
\ No newline at end of file
diff --git a/tests/miscellaneous_config_generation/test_validity.py b/tests/miscellaneous_config_generation/test_validity.py
new file mode 100644
index 0000000..a7a3bba
--- /dev/null
+++ b/tests/miscellaneous_config_generation/test_validity.py
@@ -0,0 +1,28 @@
+"""
+This file contains tests for the validity of the generated preCICE config files.
+"""
+
+from pathlib import Path
+from preciceconfigcheck.cli import runCheck
+
+from precicecasegenerate.cli import generate_case
+
+# This directory is the same for all tests in this file.
+test_directory: Path = Path(__file__).parent
+
+
+def test_validity():
+ """
+ Check that all topologies generate valid preCICE config files.
+ """
+ for case_directory in test_directory.iterdir():
+ # Ignore files and folders like __pycache__/
+ if not case_directory.is_dir() or case_directory.name.startswith("__") or case_directory.name.startswith("."):
+ continue
+
+ input_file: Path = case_directory / "topology.yaml"
+
+    assert generate_case(input_file, case_directory / "_generated") == 0, "Case generation failed."
+
+ config_file: Path = case_directory / "_generated/precice-config.xml"
+ assert runCheck(config_file, True) == 0, "The config failed to validate."
diff --git a/tests/node_generation/complex_topology.yaml b/tests/node_generation/complex_topology.yaml
new file mode 100644
index 0000000..607764b
--- /dev/null
+++ b/tests/node_generation/complex_topology.yaml
@@ -0,0 +1,56 @@
+participants:
+ - name: NASTIN
+ solver: ASolver
+ dimensionality: 3
+ - name: SOLIDZ1
+ solver: SolVer
+ dimensionality: 3
+ - name: SOLIDZ2
+ solver: SolVer
+ dimensionality: 3
+ - name: SOLIDZ3
+ solver: SolVer
+ dimensionality: 3
+exchanges:
+ - from: NASTIN
+ to: SOLIDZ1
+ from-patch: interface
+ to-patch: interface
+ data: Forces1
+ type: strong
+ data-type: vector
+ - from: NASTIN
+ to: SOLIDZ2
+ from-patch: interface
+ to-patch: interface
+ data: Forces2
+ type: strong
+ data-type: vector
+ - from: NASTIN
+ to: SOLIDZ3
+ from-patch: interface
+ to-patch: interface
+ data: Forces3
+ type: strong
+ data-type: vector
+ - from: SOLIDZ1
+ to: NASTIN
+ from-patch: interface
+ to-patch: interface
+ data: Displacements1
+ type: strong
+ data-type: vector
+ - from: SOLIDZ2
+ to: NASTIN
+ from-patch: interface
+ to-patch: interface
+ data: Displacements2
+ type: strong
+ data-type: vector
+ - from: SOLIDZ3
+ to: NASTIN
+ from-patch: interface
+ to-patch: interface
+ data: Displacements3
+ type: strong
+ data-type: vector
diff --git a/tests/node_generation/simple_topology.yaml b/tests/node_generation/simple_topology.yaml
new file mode 100644
index 0000000..90c1784
--- /dev/null
+++ b/tests/node_generation/simple_topology.yaml
@@ -0,0 +1,14 @@
+participants:
+ - name: A
+ solver: ASolver
+ - name: B
+ solver: BSolver
+ dimensionality: 2
+exchanges:
+ - from: A
+ to: B
+ from-patch: interface
+ to-patch: interface
+ data: Color
+ type: weak
+ data-type: scalar
\ No newline at end of file
diff --git a/tests/node_generation/test_node_generation.py b/tests/node_generation/test_node_generation.py
new file mode 100644
index 0000000..4a508e1
--- /dev/null
+++ b/tests/node_generation/test_node_generation.py
@@ -0,0 +1,437 @@
+from __future__ import annotations
+
+from pathlib import Path
+from precice_config_graph import nodes as n, enums as e
+from precice_config_graph.nodes import MeshNode
+
+from precicecasegenerate.input_handler.topology_reader import TopologyReader
+from precicecasegenerate.node_creator import NodeCreator
+import precicecasegenerate.helper as helper
+
+test_directory: Path = Path(__file__).parent
+
+
+def simple_setup(patches: bool = False) -> tuple[dict, dict, dict[MeshNode, set[str]]] | tuple[dict, dict]:
+    """
+    Create all nodes for the (simple) test topology file.
+    :param patches: If True, additionally return the mesh-patch map of the node creator.
+    :return: A dict representing the topology and a dict containing all nodes created from the topology;
+        if patches is True, also a dict mapping each mesh node to its set of patch names.
+    """
+ topology_file: Path = test_directory / "simple_topology.yaml"
+
+ topology_reader: TopologyReader = TopologyReader(topology_file)
+ topology: dict = topology_reader.get_topology()
+ node_creator: NodeCreator = NodeCreator(topology)
+ nodes: dict = node_creator.get_nodes()
+ if patches:
+ return topology, nodes, node_creator.get_mesh_patch_map()
+ return topology, nodes
+
+
+def complex_setup() -> tuple[dict, dict]:
+ """
+ Create all nodes for the (more complex) test topology file.
+ :return: A dict representing the topology and a dict containing all nodes created from the topology.
+ """
+ topology_file: Path = test_directory / "complex_topology.yaml"
+
+ topology_reader: TopologyReader = TopologyReader(topology_file)
+ topology: dict = topology_reader.get_topology()
+ node_creator: NodeCreator = NodeCreator(topology)
+ nodes: dict = node_creator.get_nodes()
+ return topology, nodes
+
+
+def test_participant_nodes():
+ """
+ Check that all attributes of the participant nodes are correct.
+ """
+ topology, nodes = simple_setup()
+ number_of_participants: int = len(topology["participants"])
+ participants: list[n.ParticipantNode] = nodes["participants"]
+ assert len(participants) == number_of_participants, (f"Found {len(participants)} participants, "
+ f"expected {number_of_participants}.")
+
+    first_name: str | None = None
+ try:
+ first_name = topology["participants"][0]["name"]
+ first_participant: n.ParticipantNode = next(p for p in participants if p.name == first_name)
+ except StopIteration:
+ assert False, f"Participant {first_name} not found."
+    second_name: str | None = None
+ try:
+ second_name = topology["participants"][1]["name"]
+ second_participant: n.ParticipantNode = next(p for p in participants if p.name == second_name)
+ except StopIteration:
+ assert False, f"Participant {second_name} not found."
+
+ assert len(nodes["data"]) == 1, f"Found {len(nodes['data'])} data nodes, expected 1."
+ data: n.DataNode = nodes["data"][0]
+
+ if first_participant.read_data and second_participant.read_data:
+ assert False, "Both participants have read-data nodes."
+ elif not first_participant.read_data and not second_participant.read_data:
+ assert False, "Neither participant has read-data nodes."
+
+ read_participant: n.ParticipantNode = first_participant if first_participant.read_data else second_participant
+ try:
+ next(rd for rd in read_participant.read_data if rd.data == data)
+ except StopIteration:
+ assert False, f"Data {data.name} not found in reader-participant {read_participant.name}."
+
+ if first_participant.write_data and second_participant.write_data:
+ assert False, "Both participants have write-data nodes."
+ elif not first_participant.write_data and not second_participant.write_data:
+ assert False, "Neither participant has write-data nodes."
+
+ write_participant: n.ParticipantNode = first_participant if first_participant.write_data else second_participant
+ try:
+ next(wd for wd in write_participant.write_data if wd.data == data)
+ except StopIteration:
+ assert False, f"Data {data.name} not found in writer-participant {write_participant.name}."
+
+ assert write_participant != read_participant, f"The same participant {write_participant.name} reads and writes data."
+
+ from_name: str = topology["exchanges"][0]["from"]
+ to_name: str = topology["exchanges"][0]["to"]
+ from_participant: n.ParticipantNode = first_participant if from_name == first_name else second_participant
+ to_participant: n.ParticipantNode = first_participant if to_name == first_name else second_participant
+ assert from_participant != to_participant, f"Exchange is between the same participant {from_name}."
+
+    assert len(first_participant.provide_meshes) == 1, f"Participant {first_name} does not provide exactly one mesh."
+    assert len(second_participant.provide_meshes) == 1, f"Participant {second_name} does not provide exactly one mesh."
+
+ # Bools for checking mappings
+ extensive_case: bool = data.name in helper.EXTENSIVE_DATA
+ intensive_case: bool = data.name in helper.INTENSIVE_DATA
+ if not extensive_case and not intensive_case:
+ if helper.DEFAULT_DATA_KIND == "extensive":
+ extensive_case = True
+ else:
+ intensive_case = True
+
+ if extensive_case:
+ # For extensive data, a write-mapping is needed. A write-mapping is specified by the from-participant.
+ assert write_participant == from_participant, f"From-participant {from_name} does not write data {data.name}."
+ assert from_participant.mappings, f"Write mapping is not specified by the from-participant {from_name}."
+ assert not to_participant.mappings, f"To-participant {to_name} specifies mapping."
+ assert len(from_participant.mappings) == 1, f"From-participant {from_name} specifies more than one mapping."
+ # Then the from-participant also receives a mesh from the to-participant
+ assert len(from_participant.receive_meshes) == 1, (f"From-participant {from_name} receives "
+ f"{len(from_participant.receive_meshes)} meshes, expected 1.")
+ assert not to_participant.receive_meshes, f"To-participant {to_name} receives a mesh."
+ receive_mesh_node: n.ReceiveMeshNode = from_participant.receive_meshes[0]
+ assert receive_mesh_node.mesh == to_participant.provide_meshes[0], (
+ f"From-participant {from_name} receives mesh {receive_mesh_node.mesh.name}."
+ )
+
+ mapping: n.MappingNode = from_participant.mappings[0]
+ # Write-mappings are always conservative
+ assert mapping.constraint == e.MappingConstraint.CONSERVATIVE, (f"Write mapping is not conservative "
+ f"but has type {mapping.constraint.value}")
+        assert mapping.direction == e.Direction.WRITE, "Mapping direction is not WRITE."
+
+ elif intensive_case:
+ # For intensive data, a read-mapping is needed. A read-mapping is specified by the to-participant.
+ assert read_participant == to_participant, f"To-participant {to_name} does not read data {data.name}."
+ assert to_participant.mappings, f"Read mapping is not specified by the to-participant {to_name}."
+ assert not from_participant.mappings, f"From-participant {from_name} specifies mapping."
+ assert len(to_participant.mappings) == 1, f"To-participant {to_name} specifies more than one mapping."
+ # Then the to-participant also receives a mesh from the from-participant
+        assert len(to_participant.receive_meshes) == 1, (f"To-participant {to_name} receives "
+                                                         f"{len(to_participant.receive_meshes)} meshes, expected 1.")
+ assert not from_participant.receive_meshes, f"From-participant {from_name} receives a mesh."
+ receive_mesh_node: n.ReceiveMeshNode = to_participant.receive_meshes[0]
+ assert receive_mesh_node.mesh == from_participant.provide_meshes[0], (
+ f"To-participant {to_name} receives mesh {receive_mesh_node.mesh.name}."
+ )
+
+ mapping: n.MappingNode = to_participant.mappings[0]
+ # Read-mappings are always consistent
+ assert mapping.constraint == e.MappingConstraint.CONSISTENT, (f"Read-mapping is not consistent "
+ f"but has type {mapping.constraint.value}")
+        assert mapping.direction == e.Direction.READ, "Mapping direction is not READ."
+
+ assert mapping.method == helper.DEFAULT_MAPPING_METHOD, (f"Mapping method is {mapping.method.value}, "
+ f"expected {helper.DEFAULT_MAPPING_METHOD.value}.")
+ # The from_mesh should be by the from-participant
+ assert mapping.from_mesh == from_participant.provide_meshes[0], (
+        f"Mapping uses wrong from-mesh {mapping.from_mesh.name}, "
+ f"expected {from_participant.provide_meshes[0].name}."
+ )
+ # The to_mesh should be by the to-participant
+ assert mapping.to_mesh == to_participant.provide_meshes[0], (f"Mapping uses wrong to-mesh {mapping.to_mesh.name}, "
+ f"expected {to_participant.provide_meshes[0].name}.")
+
+ # These attributes are never set by this project and should therefore be empty / None
+ assert not mapping.just_in_time, "Mapping is just-in-time."
+ # A mapping-node has more attributes, but these are not used if the method is NEAREST_NEIGHBOR, as is expected here
+
+ # These attributes are never set by this project and should therefore be empty / None
+ for participant in nodes["participants"]:
+ assert not participant.watchpoints, f"Participant {participant.name} specifies watchpoints."
+ assert not participant.watch_integrals, f"Participant {participant.name} specifies watch-integrals."
+ assert not participant.exports, f"Participant {participant.name} specifies exports."
+ assert not participant.actions, f"Participant {participant.name} specifies actions."
+
+
+def test_explicit_coupling_scheme_nodes():
+ """
+ Check that all attributes of the (explicit) coupling-scheme nodes are correct.
+ """
+ topology, nodes = simple_setup()
+ coupling_scheme_nodes: list[n.CouplingSchemeNode] = nodes["coupling-schemes"]
+ assert len(coupling_scheme_nodes) == 1, (
+ f"Found {len(coupling_scheme_nodes)} coupling schemes, expected 1.")
+
+ coupling_scheme: n.CouplingSchemeNode = coupling_scheme_nodes[0]
+
+ # Since there is only one exchange, the type is always explicit
+ assert coupling_scheme.type == helper.DEFAULT_EXPLICIT_COUPLING_TYPE
+
+ first_participant: n.ParticipantNode = coupling_scheme.first_participant
+ second_participant: n.ParticipantNode = coupling_scheme.second_participant
+ assert first_participant != second_participant, f"Coupling-scheme is between the same participants {first_participant.name}."
+
+ assert len(coupling_scheme.exchanges) == len(topology["exchanges"]), (
+ f"Found {len(coupling_scheme.exchanges)} exchanges, expected {len(topology['exchanges'])}."
+ )
+
+ from_name: str = topology["exchanges"][0]["from"]
+ to_name: str = topology["exchanges"][0]["to"]
+ from_participant: n.ParticipantNode = first_participant if from_name == first_participant.name else second_participant
+ to_participant: n.ParticipantNode = first_participant if to_name == first_participant.name else second_participant
+
+ assert from_participant == coupling_scheme.exchanges[0].from_participant, (
+ f"From-participant {from_name} of coupling-scheme does not match the exchange's from-participant "
+ f"{coupling_scheme.exchanges[0].from_participant.name}."
+ )
+
+ assert to_participant == coupling_scheme.exchanges[0].to_participant, (
+ f"To-participant {to_name} of coupling-scheme does not match the exchange's to-participant "
+ f"{coupling_scheme.exchanges[0].to_participant.name}."
+ )
+ # These attributes should not be set for an explicit coupling-scheme
+ assert not coupling_scheme.acceleration, f"Coupling-scheme specifies accelerations."
+ assert not coupling_scheme.convergence_measures, f"Coupling-scheme specifies convergence measures."
+
+
+def test_multi_coupling_scheme_nodes():
+ """
+ Check that attributes of the multi-coupling-scheme nodes are set correctly.
+ """
+ topology, nodes = complex_setup()
+ multi_coupling_scheme_nodes: list[n.MultiCouplingSchemeNode] = nodes["coupling-schemes"]
+ assert len(multi_coupling_scheme_nodes) == 1, (f"Found {len(multi_coupling_scheme_nodes)} multi-coupling schemes, "
+ f"expected 1.")
+
+ multi_coupling_scheme: n.MultiCouplingSchemeNode = multi_coupling_scheme_nodes[0]
+ assert isinstance(multi_coupling_scheme,
+ n.MultiCouplingSchemeNode), f"Coupling-scheme is not a multi-coupling scheme."
+
+ multi_coupling_scheme_participants: list[n.ParticipantNode] = multi_coupling_scheme.participants
+
+ assert len(multi_coupling_scheme_participants) == len(topology["participants"]), (
+ f"The topology specifies {len(topology['participants'])} participants, the coupling-scheme has {len(multi_coupling_scheme_participants)} participants."
+ )
+
+ exchange_nodes: list[n.ExchangeNode] = multi_coupling_scheme.exchanges
+ topology_exchanges: list[dict] = topology["exchanges"]
+
+ assert len(exchange_nodes) == len(topology_exchanges), (
+ f"The topology specifies {len(topology_exchanges)} exchanges, the coupling-scheme has {len(exchange_nodes)} exchanges."
+ )
+ # Check that the control participant is involved in the most exchanges
+ # and check that each exchange node has a corresponding exchange in the topology
+ exchange_histogramm: dict[n.ParticipantNode, int] = {p: 0 for p in multi_coupling_scheme_participants}
+ for exchange in topology_exchanges:
+ from_name: str = exchange["from"]
+ to_name: str = exchange["to"]
+ data_name: str = exchange["data"]
+ try:
+ exchange_node: n.ExchangeNode = next(e for e in exchange_nodes if e.from_participant.name == from_name and
+ e.to_participant.name == to_name and e.data.name == data_name)
+ from_participant: n.ParticipantNode = exchange_node.from_participant
+ to_participant: n.ParticipantNode = exchange_node.to_participant
+ except StopIteration:
+ assert False, f"Exchange {from_name}->{to_name} with data {data_name} not found in multi-coupling-scheme."
+ exchange_histogramm[from_participant] += 1
+ exchange_histogramm[to_participant] += 1
+
+ most_popular_participant: n.ParticipantNode = max(exchange_histogramm, key=exchange_histogramm.get)
+ assert multi_coupling_scheme.control_participant == most_popular_participant, (
+ f"The most popular participant in the coupling-scheme is {multi_coupling_scheme.control_participant.name}, "
+ f"expected {most_popular_participant.name}."
+ )
+
+ assert multi_coupling_scheme.acceleration, "Multi-coupling scheme does not specify an acceleration."
+
+ acceleration: n.AccelerationNode = multi_coupling_scheme.acceleration
+ assert acceleration.type == helper.DEFAULT_ACCELERATION_TYPE, (f"Acceleration type is {acceleration.type.value}, "
+ f"expected {helper.DEFAULT_ACCELERATION_TYPE.value}.")
+
+    assert len(acceleration.data) == len(topology_exchanges), (
+        f"The topology specifies {len(topology_exchanges)} exchanges, "
+        f"but {len(acceleration.data)} data entries are accelerated."
+    )
+
+ # Check that each accelerated data corresponds to an exchange
+ for accelerated_data in acceleration.data:
+ try:
+ next(e for e in topology_exchanges if e["data"] == accelerated_data.data.name)
+ except StopIteration:
+ assert False, f"Accelerated data {accelerated_data.data.name} not found in topology."
+ # Since there are the same number of acceleration-data-nodes as exchanges in the topology,
+ # this means that each exchange is accelerated exactly once
+
+ assert multi_coupling_scheme.convergence_measures, "Multi-coupling scheme does not specify convergence measures."
+ assert len(multi_coupling_scheme.convergence_measures) == len(topology_exchanges), (
+ f"The topology specifies {len(topology_exchanges)} exchanges, "
+ f"but {len(multi_coupling_scheme.convergence_measures)} convergence measures are specified."
+ )
+
+ # Check that each convergence measure corresponds to an exchange
+ for convergence_measure in multi_coupling_scheme.convergence_measures:
+ try:
+ next(e for e in topology_exchanges if e["data"] == convergence_measure.data.name)
+ except StopIteration:
+ assert False, f"Convergence-measure for data {convergence_measure.data.name} not found in topology."
+ # Again, this means that there is a bijection between exchanges and convergence measures
+
+ # These attributes are not set during node creation
+ assert not acceleration.preconditioner, "Acceleration specifies a preconditioner."
+ assert not acceleration.filter, "Acceleration specifies a filter."
+
+
+def test_m2n_nodes():
+ """
+ Test that the M2N nodes are created correctly.
+ """
+ topology, nodes = complex_setup()
+ m2n_nodes: list[n.M2NNode] = nodes["m2n"]
+
+ # Check that each exchange in the topology has an M2N node connecting the exchanges' participants
+ for exchange in topology["exchanges"]:
+ from_name: str = exchange["from"]
+ to_name: str = exchange["to"]
+ try:
+ next(m2n for m2n in m2n_nodes if ((m2n.acceptor.name == from_name and m2n.connector.name == to_name)
+ or (m2n.acceptor.name == to_name and m2n.connector.name == from_name)))
+ except StopIteration:
+ assert False, f"No M2N node found connecting {from_name} and {to_name}."
+
+    # Check that each M2N node connects participants that exchange something
+ for m2n in m2n_nodes:
+ acceptor_name: str = m2n.acceptor.name
+ connector_name: str = m2n.connector.name
+ assert acceptor_name != connector_name, f"M2N node between same participant {acceptor_name}."
+ try:
+ next(e for e in topology["exchanges"] if ((e["from"] == acceptor_name and e["to"] == connector_name)
+ or (e["from"] == connector_name and e["to"] == acceptor_name)))
+ except StopIteration:
+ assert False, f"No exchange found between {acceptor_name} and {connector_name}."
+ assert m2n.directory, f"M2N node between {acceptor_name} and {connector_name} does not specify a directory."
+ assert m2n.type == helper.DEFAULT_M2N_TYPE, (f"M2N type is {m2n.type.value}, "
+ f"expected {helper.DEFAULT_M2N_TYPE.value}.")
+
+
+def test_data_nodes():
+ """
+ Test that data nodes are created correctly.
+ """
+ topology, nodes = simple_setup()
+ data_nodes: list[n.DataNode] = nodes["data"]
+ # Check that each exchange in the topology has a data node
+ for exchange in topology["exchanges"]:
+ data_name: str = exchange["data"]
+ data_type: str = exchange.get("data-type", helper.DEFAULT_DATA_TYPE)
+ try:
+ # Note: This is only an approximate scheme, as data names can be uniquified
+ # and might thus not match original names completely
+ data_node = next(dn for dn in data_nodes if data_name.lower() in dn.name.lower())
+ except StopIteration:
+ assert False, f"No data node found for data {data_name}."
+ assert data_node.data_type.value == data_type, (f"Data node {data_name} has type {data_node.data_type.value}, "
+ f"expected {data_type}.")
+
+ for data_node in data_nodes:
+ try:
+ # Note: This is only an approximate scheme, as data names can be uniquified
+ # and might thus not match original names completely
+ exchange = next(e for e in topology["exchanges"] if e["data"].lower() in data_node.name.lower())
+ except StopIteration:
+ assert False, f"Data node {data_node.name} does not have an exchange."
+ assert data_node.data_type.value == exchange.get("data-type", helper.DEFAULT_DATA_TYPE), (
+ f"Data node {data_node.name} has type {data_node.data_type.value}, "
+ f"expected {exchange.get('data-type', helper.DEFAULT_DATA_TYPE)}."
+ )
+
+
+def test_mesh_nodes():
+ """
+ Check that mesh nodes are created correctly.
+ """
+ topology, nodes, mesh_patch_map = simple_setup(patches=True)
+ mesh_nodes: list[n.MeshNode] = nodes["meshes"]
+
+ # Check that each mesh is provided by exactly one participant
+ for mesh in mesh_nodes:
+ try:
+ provider: n.ParticipantNode = next(p for p in nodes["participants"] if mesh in p.provide_meshes)
+ impostor: n.ParticipantNode = next(p for p in nodes["participants"][::-1] if mesh in p.provide_meshes)
+ except StopIteration:
+ assert False, f"Mesh {mesh.name} is not provided by any participant."
+ assert provider == impostor, f"Mesh {mesh.name} is provided by both {provider.name} and {impostor.name}."
+
+ assert mesh.use_data, f"Mesh {mesh.name} does not use any data."
+
+ # Check the dimensions of the mesh
+ dimensionality: int = next(p.get("dimensionality", helper.DEFAULT_PARTICIPANT_DIMENSIONALITY)
+ for p in topology["participants"] if p["name"] == provider.name)
+ # It can happen in a "legal" way that the dimensions do not match:
+ # If A maps something to B and one has a higher dimensionality than the other,
+ # the highest dimensionality of the two is used.
+ increase_dim: bool = False
+ if not mesh.dimensions == dimensionality:
+ # We need to check if there exists a mapping involving this mesh that has a higher dimensionality
+ # Since the mapping of this topology is a read-mapping, the "provider" participant defined the mapping
+ for mapping in provider.mappings:
+ if mapping.from_mesh == mesh:
+ assert mapping.to_mesh.dimensions >= dimensionality, (
+                        f"Mapped mesh {mapping.to_mesh.name} has dimensions {mapping.to_mesh.dimensions}, "
+                        f"expected at least {dimensionality}."
+ )
+ increase_dim = True
+ break
+ elif mapping.to_mesh == mesh:
+ assert mapping.from_mesh.dimensions >= dimensionality, (
+                        f"Mapped mesh {mapping.from_mesh.name} has dimensions {mapping.from_mesh.dimensions}, "
+                        f"expected at least {dimensionality}."
+ )
+ increase_dim = True
+ break
+ assert increase_dim, (f"Mesh {mesh.name} has dimensions {mesh.dimensions}, "
+ f"expected {dimensionality}.")
+
+ # Check that each mesh is in the mesh-patch map
+ for mesh in mesh_nodes:
+ assert mesh in mesh_patch_map, f"Mesh {mesh.name} is not in the mesh-patch map."
+ # Check that the patch in the mesh-patch-map is in the topology
+ patch_names: set[str] = mesh_patch_map[mesh]
+ for patch_name in patch_names:
+ try:
+ # Note: This is only an approximate scheme, since patch names are not unique
+ next(e for e in topology["exchanges"] if e["from-patch"] == patch_name or e["to-patch"] == patch_name)
+ except StopIteration:
+ assert False, f"Patch {patch_name} is not in any exchange in the topology."
+
+ # Check that each patch in the topology is in the mesh-patch map
+ for exchange in topology["exchanges"]:
+ from_patch_name: str = exchange["from-patch"]
+ try:
+ next(m for m in mesh_nodes if from_patch_name in mesh_patch_map[m])
+ except StopIteration:
+ assert False, f"Patch {from_patch_name} is not in any mesh in the mesh-patch map."
+
+ to_patch_name: str = exchange["to-patch"]
+ try:
+ next(m for m in mesh_nodes if to_patch_name in mesh_patch_map[m])
+ except StopIteration:
+ assert False, f"Patch {to_patch_name} is not in any mesh in the mesh-patch map."
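The dimension check in `test_mesh_nodes` relies on one rule: when a mapping links two meshes of different dimensionality, both end up with the higher of the two values. A minimal standalone sketch of that rule (a hypothetical helper, not part of `precicecasegenerate`):

```python
# Hypothetical sketch of the dimensionality-reconciliation rule the mesh
# test above relies on: a pair of mapped meshes shares the higher of the
# two participants' dimensionalities.
def reconcile_dimensions(provider_dim: int, partner_dim: int) -> int:
    """Return the dimensionality a pair of mapped meshes should share."""
    if provider_dim < 1 or partner_dim < 1:
        raise ValueError("Mesh dimensionality must be positive.")
    return max(provider_dim, partner_dim)
```

This is why a mesh whose dimensions differ from its provider's declared dimensionality is still "legal" in the test, as long as a mapping partner has at least that dimensionality.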
diff --git a/tests/test_examples.py b/tests/test_examples.py
index 1514365..86431f9 100644
--- a/tests/test_examples.py
+++ b/tests/test_examples.py
@@ -8,28 +8,31 @@
def list_examples():
examples = Path(__file__).parent.parent / "examples"
- return examples.rglob("topology.yaml")
+ return examples.rglob("*.yaml")
@pytest.mark.parametrize("example", list_examples())
def test_application_with_example(example: Path):
- """Test the application with each example topology files"""
+ """
+ Test the application with each example topology file.
+ :param example: The path to the example topology file.
+ """
- assert example.exists() and example.is_file(), "topology file doesn't exist"
+ assert example.exists() and example.is_file(), "Topology file doesn't exist."
with tempfile.TemporaryDirectory() as temp_dir:
- cmd = ["precice-case-generate", "-f", str(example), "-o", temp_dir]
+ cmd = ["precice-case-generate", str(example), "-o", temp_dir]
print(f"Running {cmd}")
subprocess.run(cmd)
output = [p.name for p in Path(temp_dir).iterdir()]
print(f"Output {output}")
- assert output, "Nothing generated"
+ assert output, "Nothing generated."
config = Path(temp_dir) / "precice-config.xml"
- assert config.exists(), "No config generated"
+ assert config.exists(), "No config generated."
ret = runCheck(config, True)
if ret != 0:
print("Failed config:")
print(config.read_text())
- assert False, "The config failed to validate"
+ assert False, "The config failed to validate."
diff --git a/tests/topology_parsing/test_topology_parsing.py b/tests/topology_parsing/test_topology_parsing.py
new file mode 100644
index 0000000..7758feb
--- /dev/null
+++ b/tests/topology_parsing/test_topology_parsing.py
@@ -0,0 +1,12 @@
+from ruamel.yaml import YAML
+
+def test_scientific_notation_parsing():
+ """
+ Test that YAML scientific notation (e.g., 1e-2) is parsed as a float, not a string.
+ """
+ yaml = YAML(typ="safe")
+
+ parsed_value = yaml.load("1e-2")
+
+    assert isinstance(parsed_value, float), f"Scientific notation was parsed as {type(parsed_value).__name__}, expected float."
+ assert parsed_value == 0.01
\ No newline at end of file
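The behavior checked above differs between YAML versions: a YAML 1.1 resolver (e.g. PyYAML's) only treats a scalar as a float when it contains a dot, while the YAML 1.2 core schema, which ruamel's safe loader follows here, accepts a bare exponent like `1e-2`. A stdlib-only sketch of the distinction, with deliberately simplified patterns (approximations, not the full grammars):

```python
import re

# Simplified float patterns: YAML 1.1 requires a '.' before the exponent,
# the YAML 1.2 core schema does not. These are illustrative approximations.
YAML11_FLOAT = re.compile(r"^[-+]?\d+\.\d*(?:[eE][-+]?\d+)?$")
YAML12_FLOAT = re.compile(r"^[-+]?(?:\d+\.?\d*|\.\d+)(?:[eE][-+]?\d+)?$")

def resolves_as_float(scalar: str, yaml12: bool = True) -> bool:
    """True if the scalar would be resolved as a float under the chosen schema."""
    pattern = YAML12_FLOAT if yaml12 else YAML11_FLOAT
    return pattern.fullmatch(scalar) is not None
```

Under this sketch, `1e-2` resolves as a float only with the 1.2 pattern, while `1.0e-2` resolves as a float under both, which is why a loader's YAML version matters for timestep sizes written in scientific notation.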
diff --git a/tests/topology_preprocessing/duplicate_exchanges/topology.yaml b/tests/topology_preprocessing/duplicate_exchanges/topology.yaml
new file mode 100644
index 0000000..906b1d0
--- /dev/null
+++ b/tests/topology_preprocessing/duplicate_exchanges/topology.yaml
@@ -0,0 +1,22 @@
+participants:
+ - name: A
+ solver: ASolver
+ dimensionality: 2
+ - name: B
+ solver: BSolver
+ dimensionality: 2
+exchanges:
+ - from: A
+ to: B
+ data: Color
+ type: weak
+ data-type: scalar
+ from-patch: interface
+ to-patch: interface
+ - from: A
+ to: B
+ data: Color
+ type: weak
+ data-type: scalar
+ from-patch: interface
+ to-patch: interface
diff --git a/tests/topology_preprocessing/duplicate_participants/topology.yaml b/tests/topology_preprocessing/duplicate_participants/topology.yaml
new file mode 100644
index 0000000..7614841
--- /dev/null
+++ b/tests/topology_preprocessing/duplicate_participants/topology.yaml
@@ -0,0 +1,15 @@
+participants:
+ - name: A
+ solver: ASolver
+ dimensionality: 2
+ - name: A
+ solver: BSolver
+ dimensionality: 2
+exchanges:
+ - from: A
+ to: B
+ data: Color
+ type: weak
+ data-type: scalar
+ from-patch: interface
+ to-patch: interface
diff --git a/tests/topology_preprocessing/exchange_loop/topology.yaml b/tests/topology_preprocessing/exchange_loop/topology.yaml
new file mode 100644
index 0000000..2af1c16
--- /dev/null
+++ b/tests/topology_preprocessing/exchange_loop/topology.yaml
@@ -0,0 +1,11 @@
+participants:
+ - name: Croc
+ solver: IDK
+
+exchanges:
+ - from: Croc
+ to: Croc
+ from-patch: I
+ to-patch: I
+ data: Fish
+ type: strong
\ No newline at end of file
diff --git a/tests/topology_preprocessing/test_preprocessing.py b/tests/topology_preprocessing/test_preprocessing.py
new file mode 100644
index 0000000..1492385
--- /dev/null
+++ b/tests/topology_preprocessing/test_preprocessing.py
@@ -0,0 +1,49 @@
+"""
+This file tests that the preprocessing works as expected.
+"""
+from pathlib import Path
+
+from precicecasegenerate.cli import generate_case
+
+# This directory is the same for all tests in this file.
+test_directory: Path = Path(__file__).parent
+
+
+def test_duplicate_participant_names():
+ """
+ Test that an error is raised when two participants have the same name.
+ """
+ case_directory: Path = test_directory / "duplicate_participants"
+ input_file: Path = case_directory / "topology.yaml"
+
+ assert 0 != generate_case(input_file, case_directory), "The case generation didn't fail."
+
+
+def test_unknown_participant_names():
+ """
+ Test that an error is raised when a participant is mentioned in an exchange but not defined in the topology.
+ """
+ case_directory: Path = test_directory / "unknown_participants"
+ input_file: Path = case_directory / "topology.yaml"
+
+ assert 0 != generate_case(input_file, case_directory), "The case generation didn't fail."
+
+
+def test_duplicate_exchanges():
+ """
+ Test that an error is raised when two exchanges have the exact same items.
+ """
+ case_directory: Path = test_directory / "duplicate_exchanges"
+ input_file: Path = case_directory / "topology.yaml"
+
+ assert 0 != generate_case(input_file, case_directory), "The case generation didn't fail."
+
+
+def test_exchange_loop():
+ """
+ Test that an error is raised when an exchange is a loop, i.e., the `from`-participant is also the `to`-participant.
+ """
+ case_directory: Path = test_directory / "exchange_loop"
+ input_file: Path = case_directory / "topology.yaml"
+
+ assert 0 != generate_case(input_file, case_directory), "The case generation didn't fail."
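The four failure modes exercised above (duplicate participants, unknown participants, duplicate exchanges, exchange loops) can be sketched as one standalone validator. This is a hypothetical illustration, not the package's actual preprocessing code; the real `generate_case` signals these errors via its return code:

```python
def validate_topology(topology: dict) -> list[str]:
    """Return error messages for the four failure modes tested above."""
    errors: list[str] = []
    participants = topology.get("participants", [])
    exchanges = topology.get("exchanges", [])

    # Duplicate participant names
    names = [p["name"] for p in participants]
    for name in {n for n in names if names.count(n) > 1}:
        errors.append(f"Duplicate participant name: {name}")

    # Unknown participants, exchange loops, and duplicate exchanges
    known = set(names)
    seen: set[frozenset] = set()
    for e in exchanges:
        for key in ("from", "to"):
            if e.get(key) not in known:
                errors.append(f"Unknown participant in exchange: {e.get(key)}")
        if e.get("from") == e.get("to"):
            errors.append(f"Exchange loop on participant: {e.get('from')}")
        fingerprint = frozenset(e.items())  # exchanges are flat str->str maps
        if fingerprint in seen:
            errors.append(f"Duplicate exchange: {e.get('from')}->{e.get('to')}")
        seen.add(fingerprint)
    return errors
```

A clean topology yields an empty list; each fixture directory above would contribute at least one message.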
diff --git a/tests/topology_preprocessing/unknown_participants/topology.yaml b/tests/topology_preprocessing/unknown_participants/topology.yaml
new file mode 100644
index 0000000..06681bf
--- /dev/null
+++ b/tests/topology_preprocessing/unknown_participants/topology.yaml
@@ -0,0 +1,15 @@
+participants:
+ - name: A
+ solver: ASolver
+ dimensionality: 2
+ - name: B
+ solver: BSolver
+ dimensionality: 2
+exchanges:
+ - from: A
+ to: C
+ data: Color
+ type: weak
+ data-type: scalar
+ from-patch: interface
+ to-patch: interface
diff --git a/tox.ini b/tox.ini
index 47c56d1..5586051 100644
--- a/tox.ini
+++ b/tox.ini
@@ -9,9 +9,8 @@ env_list =
description = run the tests with pytest
package = wheel
wheel_build_env = .pkg
-deps =
- precice-config-check
- pytest>=6
+extras =
+ dev
commands =
pytest {tty:--color=yes} {posargs}