
Commit 66c9f1c

Make sure TRT EPs can load models when initializers are in memory (#26721)
This PR moves the in-memory conversion of initializers out of the Graph constructor and into the graph-transform phase, before partitioning, so the conversion is not repeated when subgraphs are constructed. It also fixes bugs in the TRT and NV TRT providers. Addresses issue #26653. A minimal sketch of the resulting ordering follows this summary.

**Graph Initializer Conversion and Handling:**

* Added a new method `Graph::ConvertInitializersIntoOrtValues()` that converts all graph TensorProto initializers into OrtValues and creates in-memory external data references, separating this logic from graph construction and making it reusable. (`include/onnxruntime/core/graph/graph.h`, `onnxruntime/core/graph/graph.cc`) [[1]](diffhunk://#diff-aaea1507ec81a94c72a1fa72ce320df712156b665f7798573be3f7e439bb4c37R1457-R1463) [[2]](diffhunk://#diff-e231a92b40d89409cc8e82436be0a15bc87ef95c93b303b9feaeab6e50c8835cR3416-R3447)
* Removed the lambda in the graph constructor that converted large tensor initializers, delegating this responsibility to the new method for a clearer separation of concerns. (`onnxruntime/core/graph/graph.cc`) [[1]](diffhunk://#diff-e231a92b40d89409cc8e82436be0a15bc87ef95c93b303b9feaeab6e50c8835cL1234-L1255) [[2]](diffhunk://#diff-e231a92b40d89409cc8e82436be0a15bc87ef95c93b303b9feaeab6e50c8835cL1275-L1276) [[3]](diffhunk://#diff-e231a92b40d89409cc8e82436be0a15bc87ef95c93b303b9feaeab6e50c8835cL1353-R1327)

**Provider Interface Enhancements:**

* Introduced move-assignment operators for `GraphProto` and `TensorProto` in both the provider interface (`ProviderHost`) and the wrapper structs, allowing more efficient object transfers and assignment. (`onnxruntime/core/providers/shared_library/provider_interfaces.h`, `onnxruntime/core/providers/shared_library/provider_wrappedtypes.h`) [[1]](diffhunk://#diff-d62681d5e83139cfbc272f32afc4ff897dbfd84a709f02a932666e18240fa094L442-R457) [[2]](diffhunk://#diff-d62681d5e83139cfbc272f32afc4ff897dbfd84a709f02a932666e18240fa094L495-R511) [[3]](diffhunk://#diff-bf62a34e53927025e7a7bcf7f294532a366ec4ee069bbe541fcdc87e3b1eaa8fL178-R179) [[4]](diffhunk://#diff-bf62a34e53927025e7a7bcf7f294532a366ec4ee069bbe541fcdc87e3b1eaa8fL244-R248)
* Added iterator interfaces (`TensorProto_ConstIterator`, `TensorProto_Iterator`) and corresponding methods on `TensorProtos` for clean iteration over initializer lists, improving readability and maintainability. (`onnxruntime/core/providers/shared_library/provider_interfaces.h`, `onnxruntime/core/providers/shared_library/provider_wrappedtypes.h`) [[1]](diffhunk://#diff-d62681d5e83139cfbc272f32afc4ff897dbfd84a709f02a932666e18240fa094L73-R93) [[2]](diffhunk://#diff-d62681d5e83139cfbc272f32afc4ff897dbfd84a709f02a932666e18240fa094L524-R545) [[3]](diffhunk://#diff-bf62a34e53927025e7a7bcf7f294532a366ec4ee069bbe541fcdc87e3b1eaa8fL286-R295)

**Execution Provider Logic Simplification:**

* Refactored how initializers are processed in the NVExecutionProvider, using the new conversion and iteration logic to simplify the handling of external and in-memory data and to ensure correct assignment and ownership of user-provided weights. (`onnxruntime/core/providers/nv_tensorrt_rtx/nv_execution_provider.cc`) [[1]](diffhunk://#diff-b7114b8cae911bdd2c3523a09019f9a9b9f9d7cce4fdd50b282603c81a6137aaL1657-R1658) [[2]](diffhunk://#diff-b7114b8cae911bdd2c3523a09019f9a9b9f9d7cce4fdd50b282603c81a6137aaR1709-R1733) [[3]](diffhunk://#diff-b7114b8cae911bdd2c3523a09019f9a9b9f9d7cce4fdd50b282603c81a6137aaR2558-R2587)

**Other Minor Improvements:**

* Improved const-correctness and interface consistency for the size and iterator methods of `TensorProtos`. (`onnxruntime/core/providers/shared_library/provider_interfaces.h`, `onnxruntime/core/providers/shared_library/provider_wrappedtypes.h`) [[1]](diffhunk://#diff-d62681d5e83139cfbc272f32afc4ff897dbfd84a709f02a932666e18240fa094L524-R545) [[2]](diffhunk://#diff-bf62a34e53927025e7a7bcf7f294532a366ec4ee069bbe541fcdc87e3b1eaa8fL286-R295)
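A minimal sketch of the resulting ordering, assuming a hypothetical `PrepareForPartitioning` helper (the real call site sits inside ORT's session/graph-transform pipeline and is not shown on this page):

```cpp
#include "core/graph/graph.h"

using onnxruntime::Graph;
using onnxruntime::common::Status;

// Hypothetical helper, for illustration only: the conversion now runs once,
// before EP partitioning, instead of inside the Graph constructor, so it is
// not repeated each time a subgraph is constructed.
Status PrepareForPartitioning(Graph& graph) {
  ORT_RETURN_IF_ERROR(graph.Resolve());
  // New in this PR: converts large TensorProto initializers of the main graph
  // and all subgraphs into OrtValues backed by in-memory external data.
  ORT_RETURN_IF_ERROR(graph.ConvertInitializersIntoOrtValues());
  // ... graph transforms and EP partitioning (including TRT / NV TRT) follow ...
  return Status::OK();
}
```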
1 parent 838cf03 commit 66c9f1c

File tree: 10 files changed, +220 / -144 lines

include/onnxruntime/core/graph/graph.h

Lines changed: 9 additions & 16 deletions
@@ -1454,12 +1454,16 @@ class Graph { // NOLINT(clang-analyzer-optin.performance.Padding): preserve exi
     return Resolve(default_options);
   }
 
+  /// <summary>
+  /// This function converts all the graph TensorProto initializers into OrtValues
+  /// and creates a in-memory external data reference for each OrtValue.
+  /// </summary>
+  /// <returns></returns>
+  Status ConvertInitializersIntoOrtValues();
+
   /**
-   * @brief Converts a subset of graph TensorProto initializers into OrtValues and updates the graph proto.
-   *
-   * This function converts specified TensorProto initializers in the graph into OrtValues and
-   * creates in-memory external data references for each OrtValue. It then updates the provided
-   * GraphProto with the modified initializers.
+   * @brief This function examines the specified initializers in the graph and converts them inline
+   * if any has external data in memory.
    *
    * @param iterators Span of iterators pointing to the initializers and the order that should be processed
    * @param output_graph_proto The GraphProto to be updated with the modified initializers
@@ -1633,17 +1637,6 @@ class Graph { // NOLINT(clang-analyzer-optin.performance.Padding): preserve exi
   /// <returns>Status indicating success or failure</returns>
   Status ProcessSubgraphsInMemoryData(ONNX_NAMESPACE::GraphProto& output_graph_proto) const;
 
-  /// <summary>
-  /// This function replaces all of the initializers within output_graph_proto
-  /// from this Graph instance. All in memory initializers are regenerated and inlined.
-  /// This is necessary even if the graph_proto_ is already up to date because initializers() may
-  /// contain obsolete initializers that are no longer in use due to optimizations and contain obsolete
-  /// references to OrtValues that may no longer be around (since we like appending rather than replacing).
-  /// </summary>
-  /// <param name="output_graph_proto">Destination GraphProto to receive the updated initializers.</param>
-  /// <returns>Status indicating success or failure.</returns>
-  Status RegenerateInitializersAndReplaceInMemory(ONNX_NAMESPACE::GraphProto& output_graph_proto) const;
-
   /// <summary>
   /// This function traverses the graph bottom up and externalizes
   /// constant initializers along with their pre-packed blobs from different

onnxruntime/core/graph/graph.cc

Lines changed: 33 additions & 43 deletions
@@ -1233,28 +1233,6 @@ Graph::Graph(const Model& owning_model,
   ArgNameToTypeMap name_to_type_map;
   const auto& model_path = ModelPath();
 
-  // If the tensor proto data is large enough, move data from TensorProto to an OrtValue
-  // - Add external data reference to TensorProto that points to an OrtValue.
-  // This lambda should not be used on initializers that already have external data reference.
-  // Otherwise, this function does nothing.
-  auto put_large_tensor_in_ort_value = [this, &model_path](ONNX_NAMESPACE::TensorProto& tensor_proto) {
-    size_t size_in_bytes = 0;
-    ORT_THROW_IF_ERROR(utils::GetSizeInBytesFromTensorProto<0>(tensor_proto, &size_in_bytes));
-    if (size_in_bytes > utils::kSmallTensorExternalDataThreshold) {
-      OrtValue ort_value;
-      ORT_THROW_IF_ERROR(utils::TensorProtoToOrtValue(Env::Default(), model_path, tensor_proto,
-                                                      CPUAllocator::DefaultInstance(), ort_value));
-      constexpr const bool use_tensor_buffer_true = true;
-      auto tensor_proto_to_add = utils::TensorToTensorProto(ort_value.Get<Tensor>(), tensor_proto.name(),
-                                                            use_tensor_buffer_true);
-      assert(ort_value.IsAllocated());
-      auto ins_result = ortvalue_initializers_.insert_or_assign(tensor_proto_to_add.name(), std::move(ort_value));
-      ORT_ENFORCE(ins_result.second, "Unexpected duplicate insert or assign OrtValue for tensor: ", tensor_proto_to_add.name(),
-                  " in the initializer list.");
-      tensor_proto = std::move(tensor_proto_to_add);
-    }
-  };
-
   // Process 'Constant' nodes
   // Put the 'TensorProto' stored in the 'Constant' nodes attribute into the graphs initializer list
   for (auto& node : graph_proto_->node()) {
@@ -1274,8 +1252,6 @@ Graph::Graph(const Model& owning_model,
       }
     }
 
-    put_large_tensor_in_ort_value(*tensor);
-
     // Ensure initializers are also graph inputs.
     if (ir_version_ < 4) {
       TypeProto t{utils::TypeProtoFromTensorProto(*tensor)};
@@ -1352,25 +1328,7 @@ Graph::Graph(const Model& owning_model,
   }
 
   // Copy initial tensors to a map.
-  for (int i = 0, lim = graph_proto_->initializer_size(); i < lim; ++i) {
-    auto& tensor = *graph_proto_->mutable_initializer(i);
-    // If data is on disk, it will be loaded either by optimizers
-    // or during session state finalization.
-    // If data is already in memory, do nothing.
-    if (!utils::HasExternalData(tensor)) {
-      // sparse_tensor_names_ contain references to strings to save memory
-      // in case we replace the tensor_proto, we want to make sure we remove
-      // the old reference first, and then add a new one.
-      const bool is_sparse = sparse_tensor_names_.count(tensor.name());
-      if (is_sparse) {
-        sparse_tensor_names_.erase(tensor.name());
-      }
-      put_large_tensor_in_ort_value(tensor);
-      if (is_sparse) {
-        sparse_tensor_names_.emplace(tensor.name());
-      }
-    }
-
+  for (auto& tensor : graph_proto_->initializer()) {
     auto p = name_to_initial_tensor_.emplace(tensor.name(), &tensor);
     if (!p.second) {
       LOGS(logger_, WARNING) << "Duplicate initializer (dense, sparse or ConstantNode): '" << tensor.name()
@@ -3762,6 +3720,38 @@ Status Graph::Resolve(const ResolveOptions& options) {
   return ForThisAndAllSubgraphs(all_subgraphs, finalize_func);
 }
 
+Status Graph::ConvertInitializersIntoOrtValues() {
+  std::vector<Graph*> all_subgraphs;
+  FindAllSubgraphs(all_subgraphs);
+
+  auto put_weights_maybe_in_memory_func = [&](Graph& graph) -> Status {
+    // if we have any initializers that are not in memory, put them there.
+    const auto& model_path = graph.ModelPath();
+    auto& graph_proto = *graph.graph_proto_;
+    for (int i = 0, lim = graph_proto.initializer_size(); i < lim; ++i) {
+      auto& tensor_proto = *graph_proto.mutable_initializer(i);
+      if (utils::HasExternalData(tensor_proto)) {
+        continue;  // ignore data on disk, that will be loaded either by EP or at session_state finalize
+      }
+
+      size_t size_in_bytes = 0;
+      ORT_RETURN_IF_ERROR(utils::GetSizeInBytesFromTensorProto<0>(tensor_proto, &size_in_bytes));
+      if (size_in_bytes > utils::kSmallTensorExternalDataThreshold) {
+        OrtValue ort_value;
+        ORT_RETURN_IF_ERROR(utils::TensorProtoToOrtValue(Env::Default(), model_path, tensor_proto,
+                                                         CPUAllocator::DefaultInstance(), ort_value));
+        constexpr const bool use_tensor_buffer_true = true;
+        auto tensor_proto_to_add = utils::TensorToTensorProto(ort_value.Get<Tensor>(), tensor_proto.name(),
+                                                              use_tensor_buffer_true);
+        ORT_RETURN_IF_ERROR(graph.ReplaceInitializedTensor(tensor_proto_to_add, ort_value));
+      }
+    }
+    return Status::OK();
+  };
+
+  return ForThisAndAllSubgraphs(all_subgraphs, put_weights_maybe_in_memory_func);
+}
+
 void Graph::SetName(const std::string& name) {
   graph_proto_->set_name(name);
 }
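After this conversion, a large initializer's TensorProto no longer holds its bytes inline; it carries an in-memory external-data reference while the bytes live in an OrtValue owned by the graph. A hedged sketch of how a consumer such as an EP can read those bytes back without copying, using helpers that appear in the diffs on this page (`utils::HasExternalDataInMemory`, `GetOrtValueInitializer`); the include paths and the exact `GetOrtValueInitializer` signature are assumptions:

```cpp
#include "core/framework/tensorprotoutils.h"  // assumed location of utils::HasExternalDataInMemory
#include "core/graph/graph_viewer.h"

namespace {

// Sketch only: reads the weight bytes of one initializer through the OrtValue
// that now backs it, without copying the data out of the graph.
void ReadInitializerBytes(const onnxruntime::GraphViewer& viewer,
                          const ONNX_NAMESPACE::TensorProto& tp) {
  if (onnxruntime::utils::HasExternalDataInMemory(tp)) {
    OrtValue value;
    if (viewer.GetOrtValueInitializer(tp.name(), value)) {
      const auto& tensor = value.Get<onnxruntime::Tensor>();
      const void* bytes = tensor.DataRaw();    // still owned by the graph/OrtValue
      const size_t num_bytes = tensor.SizeInBytes();
      // ... hand (bytes, num_bytes) to the backend, e.g. as TensorrtUserWeights ...
      static_cast<void>(bytes);
      static_cast<void>(num_bytes);
    }
  }
}

}  // namespace
```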

onnxruntime/core/providers/nv_tensorrt_rtx/nv_execution_provider.cc

Lines changed: 36 additions & 45 deletions
@@ -1669,11 +1669,8 @@ SubGraphCollection_t NvExecutionProvider::GetSupportedList(SubGraphCollection_t
     SetAllGraphInputs(graph_build);
   }
 
-  auto status = graph_build.Resolve();
-  if (!status.IsOK()) {
-    LOGS_DEFAULT(ERROR) << status.ErrorMessage();
-    ORT_THROW_IF_ERROR(ORT_MAKE_STATUS(ONNXRUNTIME, FAIL, "ONNX graph resolve failed: " + status.ErrorMessage()));
-  }
+  ORT_THROW_IF_ERROR(graph_build.Resolve());
+
   // Add parent graph output to the subgraph
   int i = 0;
   std::vector<const NodeArg*> subgraph_outputs;
@@ -1720,41 +1717,38 @@ SubGraphCollection_t NvExecutionProvider::GetSupportedList(SubGraphCollection_t
   auto model = graph_viewer->CreateModel(*GetLogger());
   auto model_proto = model->ToProto();
 
-  // ORT's default topological sort is using reversed DFS.
-  // When creating model proto from graph viewer, let ORT use priority-based topological sort based on node index.
-  // The reason is, in some cases, for example ResNet50, using default topological sort will end up with generating
-  // the model proto that has different node ordering compared to original onnx model.
-
   // save user provided external data in memory instead of writing to ModelProto
   // needed for models > 2GB
   std::vector<TensorrtUserWeights> userWeights;
   if (use_external_data_initializer_) {
-    auto c_api = Ort::GetApi();
-    const InitializedTensorSet& allInitializers = graph_viewer->GetAllInitializedTensors();
+    const auto& allInitializers = graph_viewer->GetAllInitializedTensors();
     userWeights.reserve(allInitializers.size());
-    for (auto& entry : allInitializers) {
-      OrtValue initializer_value;
-      auto* tp = entry.second;
+    for (const auto& [name, tp] : allInitializers) {
       if (utils::HasRawData(*tp)) {
-        userWeights.emplace_back(TensorrtUserWeights(tp->name(), tp->raw_data().data(), tp->raw_data().size()));
-      } else if (graph_viewer->GetOrtValueInitializer(tp->name(), initializer_value)) {
-        // the initializer was marked as external data by the ORT graph at load time since it was provided in memory
-        size_t size = 0;
-        const void* ptr = nullptr;
-        Ort::ThrowOnError(c_api.GetTensorSizeInBytes(&initializer_value, &size));
-        Ort::ThrowOnError(c_api.GetTensorData(&initializer_value, &ptr));
-        userWeights.emplace_back(tp->name(), ptr, size);
+        // Keep inits in memory instead of writing to ModelProto.
+        userWeights.emplace_back(name, tp->raw_data().data(), tp->raw_data().size());
       } else if (utils::HasExternalDataInMemory(*tp)) {
-        // only copy and take ownership of the data if none of the above conditions are met
-        std::unique_ptr<ONNX_NAMESPACE::TensorProto> full_init;
-        ORT_THROW_IF_ERROR(utils::GetTensorProtoWithDataIfInMemory(*tp, full_init));
-        userWeights.emplace_back(std::move(full_init->name()), std::move(full_init->raw_data()));
+        // the initializer was marked as external data by the ORT graph at load time since it was provided in memory
+        if (OrtValue v; graph_viewer->GetOrtValueInitializer(name, v)) {
+          Ort::ConstValue initializer_value{&v};
+          const size_t size = initializer_value.GetTensorSizeInBytes();
+          const void* ptr = initializer_value.GetTensorRawData();
+          userWeights.emplace_back(name, ptr, size);
+        } else {
+          // only copy and take ownership of the data if none of the above conditions are met
+          std::unique_ptr<ONNX_NAMESPACE::TensorProto> full_init;
+          ORT_THROW_IF_ERROR(utils::GetTensorProtoWithDataIfInMemory(*tp, full_init));
+          userWeights.emplace_back(name, full_init->raw_data());
+        }
       }
     }
   }
 
+  // ORT's default topological sort is using reversed DFS.
+  // When creating model proto from graph viewer, let ORT use priority-based topological sort based on node index.
+  // The reason is, in some cases, for example ResNet50, using default topological sort will end up with generating
+  // the model proto that has different node ordering compared to original onnx model.
   graph_viewer->ToProto(*model_proto->mutable_graph(), true, true, 1 /*priority-based topological sort*/, !use_external_data_initializer_ /*include raw initializers*/);
-
   model_proto->set_ir_version(ONNX_NAMESPACE::Version::IR_VERSION);
 
   std::string string_buf;
@@ -2582,30 +2576,27 @@ Status NvExecutionProvider::CreateNodeComputeInfoFromGraph(const GraphViewer& gr
   // exclude weights if external
   std::vector<TensorrtUserWeights> userWeights;
   if (use_external_data_initializer_) {
-    auto c_api = Ort::GetApi();
     const InitializedTensorSet& allInitializers = graph_body_viewer.GetAllInitializedTensors();
     userWeights.reserve(allInitializers.size());
-    for (auto& entry : allInitializers) {
-      OrtValue initializer_value;
-      auto* tp = entry.second;
+    for (const auto& [name, tp] : allInitializers) {
       if (utils::HasRawData(*tp)) {
-        userWeights.emplace_back(TensorrtUserWeights(tp->name(), tp->raw_data().data(), tp->raw_data().size()));
-      } else if (graph_body_viewer.GetOrtValueInitializer(tp->name(), initializer_value)) {
-        // the initializer was marked as external data by the ORT graph at load time since it was provided in memory
-        size_t size = 0;
-        const void* ptr = nullptr;
-        Ort::ThrowOnError(c_api.GetTensorSizeInBytes(&initializer_value, &size));
-        Ort::ThrowOnError(c_api.GetTensorData(&initializer_value, &ptr));
-        userWeights.emplace_back(tp->name(), ptr, size);
+        userWeights.emplace_back(name, tp->raw_data().data(), tp->raw_data().size());
       } else if (utils::HasExternalDataInMemory(*tp)) {
-        // only copy and take ownership of the data if none of the above conditions are met
-        std::unique_ptr<ONNX_NAMESPACE::TensorProto> full_init;
-        ORT_THROW_IF_ERROR(utils::GetTensorProtoWithDataIfInMemory(*tp, full_init));
-        userWeights.emplace_back(TensorrtUserWeights(std::move(full_init->name()), std::move(full_init->raw_data())));
+        // the initializer was marked as external data by the ORT graph at load time since it was provided in memory
+        if (OrtValue v; graph_body_viewer.GetOrtValueInitializer(name, v)) {
+          Ort::ConstValue initializer_value{&v};
+          const size_t size = initializer_value.GetTensorSizeInBytes();
+          const void* ptr = initializer_value.GetTensorRawData();
+          userWeights.emplace_back(name, ptr, size);
+        } else {
+          // only copy and take ownership of the data if none of the above conditions are met
+          std::unique_ptr<ONNX_NAMESPACE::TensorProto> full_init;
+          ORT_THROW_IF_ERROR(utils::GetTensorProtoWithDataIfInMemory(*tp, full_init));
+          userWeights.emplace_back(name, full_init->raw_data());
+        }
      }
    }
  }
-
  // ORT's default topological sort is using reversed DFS.
  // When creating model proto from graph viewer, let ORT use priority-based topological sort based on node index.
  // The reason is, in some cases, for example ResNet50, using default topological sort will end up with generating
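Note the API change in the refactor above: instead of fetching size and data through the C API (`Ort::GetApi().GetTensorSizeInBytes` / `GetTensorData`), the code wraps the existing OrtValue in `Ort::ConstValue`, a non-owning C++ view. A small sketch of that pattern in isolation:

```cpp
#include <onnxruntime_cxx_api.h>

// Sketch: Ort::ConstValue is a non-owning view over an OrtValue someone else
// owns; destroying the view does not release the underlying value.
void ReadTensorView(const OrtValue* v) {
  Ort::ConstValue view{v};
  const size_t size = view.GetTensorSizeInBytes();  // byte size of the tensor data
  const void* data = view.GetTensorRawData();       // pointer into the existing buffer
  // ... pass (data, size) along without copying ...
  static_cast<void>(size);
  static_cast<void>(data);
}
```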

onnxruntime/core/providers/shared_library/provider_interfaces.h

Lines changed: 24 additions & 4 deletions
@@ -70,13 +70,27 @@ struct IteratorHolder {
   bool operator!=(const IteratorHolder& p) const { return p_->operator!=(*p.p_); }
 
   void operator++() { p_->operator++(); }
-  const TResult& operator*() { return p_->operator*(); }
+  TResult& operator*() { return p_->operator*(); }
   T* operator->() { return p_.get(); }
 
  private:
   std::unique_ptr<T> p_;
 };
 
+struct TensorProto_ConstIterator {
+  virtual ~TensorProto_ConstIterator() = default;
+  virtual bool operator!=(const TensorProto_ConstIterator& p) const = 0;
+  virtual void operator++() = 0;
+  virtual const ONNX_NAMESPACE::TensorProto& operator*() const = 0;
+};
+
+struct TensorProto_Iterator {
+  virtual ~TensorProto_Iterator() = default;
+  virtual bool operator!=(const TensorProto_Iterator& p) const = 0;
+  virtual void operator++() = 0;
+  virtual ONNX_NAMESPACE::TensorProto& operator*() const = 0;
+};
+
 struct NodeAttributes_Iterator {
   virtual ~NodeAttributes_Iterator() {}
 
@@ -439,7 +453,8 @@ struct ProviderHost {
   // GraphProto
   virtual std::unique_ptr<ONNX_NAMESPACE::GraphProto> GraphProto__construct() = 0;
   virtual void GraphProto__operator_delete(ONNX_NAMESPACE::GraphProto* p) = 0;
-  virtual void GraphProto__operator_assign(ONNX_NAMESPACE::GraphProto* p, const ONNX_NAMESPACE::GraphProto& v) = 0;
+  virtual ONNX_NAMESPACE::GraphProto& GraphProto__operator_assign(ONNX_NAMESPACE::GraphProto* p, const ONNX_NAMESPACE::GraphProto& v) = 0;
+  virtual ONNX_NAMESPACE::GraphProto& GraphProto__operator_move_assign(ONNX_NAMESPACE::GraphProto* p, ONNX_NAMESPACE::GraphProto&& v) = 0;
 
   virtual const ONNX_NAMESPACE::ValueInfoProto& GraphProto__input(const ONNX_NAMESPACE::GraphProto* p, int index) = 0;
   virtual ONNX_NAMESPACE::ValueInfoProtos* GraphProto__mutable_input(ONNX_NAMESPACE::GraphProto* p) = 0;
@@ -492,7 +507,8 @@ struct ProviderHost {
   // TensorProto
   virtual std::unique_ptr<ONNX_NAMESPACE::TensorProto> TensorProto__construct() = 0;
   virtual void TensorProto__operator_delete(ONNX_NAMESPACE::TensorProto* p) = 0;
-  virtual void TensorProto__operator_assign(ONNX_NAMESPACE::TensorProto* p, const ONNX_NAMESPACE::TensorProto& v) = 0;
+  virtual ONNX_NAMESPACE::TensorProto& TensorProto__operator_assign(ONNX_NAMESPACE::TensorProto* p, const ONNX_NAMESPACE::TensorProto& v) = 0;
+  virtual ONNX_NAMESPACE::TensorProto& TensorProto__operator_move_assign(ONNX_NAMESPACE::TensorProto* p, ONNX_NAMESPACE::TensorProto&& v) = 0;
   virtual bool TensorProto__has_name(const ONNX_NAMESPACE::TensorProto* p) = 0;
   virtual void TensorProto__set_name(ONNX_NAMESPACE::TensorProto* p, const ::std::string& name) = 0;
   virtual const ::std::string& TensorProto__name(const ONNX_NAMESPACE::TensorProto* p) = 0;
@@ -521,8 +537,12 @@ struct ProviderHost {
 
   // TensorProtos
   virtual ONNX_NAMESPACE::TensorProto* TensorProtos__Add(ONNX_NAMESPACE::TensorProtos* p) = 0;
-  virtual int TensorProtos__size(ONNX_NAMESPACE::TensorProtos* p) = 0;
+  virtual int TensorProtos__size(const ONNX_NAMESPACE::TensorProtos* p) = 0;
   virtual ONNX_NAMESPACE::TensorProto& TensorProtos__at(ONNX_NAMESPACE::TensorProtos* p, int index) = 0;
+  virtual std::unique_ptr<TensorProto_ConstIterator> TensorProtos__begin(const ONNX_NAMESPACE::TensorProtos* p) = 0;
+  virtual std::unique_ptr<TensorProto_ConstIterator> TensorProtos__end(const ONNX_NAMESPACE::TensorProtos* p) = 0;
+  virtual std::unique_ptr<TensorProto_Iterator> TensorProtos__begin(ONNX_NAMESPACE::TensorProtos* p) = 0;
+  virtual std::unique_ptr<TensorProto_Iterator> TensorProtos__end(ONNX_NAMESPACE::TensorProtos* p) = 0;
 
   // TensorShapeProto_Dimension
   virtual int TensorShapeProto_Dimension__value_case(const ONNX_NAMESPACE::TensorShapeProto_Dimension* p) = 0;
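These pure-virtual iterators exist so the wrapper-side `TensorProtos` type (defined in `provider_wrappedtypes.h`, whose diff is not shown on this page) can expose `begin()`/`end()` through the existing `IteratorHolder` template, letting EP code range-for over initializer lists; the new `*_operator_move_assign` entry points likewise let the wrappers forward `operator=(T&&)` so large protos can be transferred without a deep copy. A sketch of what the wrapper side might look like; the member layout here is an assumption based only on the `ProviderHost` methods above:

```cpp
// Hypothetical wrapper sketch (the real one lives in provider_wrappedtypes.h);
// g_host is the provider-bridge pointer to ProviderHost.
namespace ONNX_NAMESPACE {

struct TensorProtos final {
  TensorProto* Add() { return onnxruntime::g_host->TensorProtos__Add(this); }
  int size() const { return onnxruntime::g_host->TensorProtos__size(this); }

  // begin()/end() wrap the new virtual iterators in IteratorHolder, which is
  // what makes range-for over an initializer list work in EP code.
  onnxruntime::IteratorHolder<onnxruntime::TensorProto_ConstIterator, const TensorProto> begin() const {
    return onnxruntime::g_host->TensorProtos__begin(this);
  }
  onnxruntime::IteratorHolder<onnxruntime::TensorProto_ConstIterator, const TensorProto> end() const {
    return onnxruntime::g_host->TensorProtos__end(this);
  }

  TensorProtos() = delete;
  TensorProtos(const TensorProtos&) = delete;
};

}  // namespace ONNX_NAMESPACE
```

With these in place, provider code can range-for over a `TensorProtos` collection instead of indexing it with `size()`/`at()`.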
