
Conversation

Collaborator

@Jolanrensen Jolanrensen commented Dec 11, 2025

Fixes #1273
Fixes #1587
Fixes #461

Helps #387

Might fix #462 ?

Follows up on #1266 and #462

Work in progress, so more information will come when the design is finalized.

@Jolanrensen Jolanrensen added the enhancement (New feature or request) and databases (JDBC related issues) labels Dec 11, 2025
/** the name of the class of the DuckDB JDBC driver */
override val driverClassName: String = "org.duckdb.DuckDBDriver"

override fun generateTypeInformation(tableColumnMetadata: TableColumnMetadata): AnyTypeInformation =
Collaborator

Please consider: ColumnValuesConverter, JdbcColumnConverter, ...
I think it's important to convey that we, most importantly, want to convert values from JDBC classes to something Kotlin-friendly - providing KType / ColumnSchema is just a side effect, right? ValuePreprocessor and ColumnPostprocessor are good in this regard

Collaborator Author

Actually, no, the main use case of this function is building the ColumnSchema, aka the type information for DataFrame, given some JDBC TableColumnMetadata, so we know what kind of DataFrame will pop out when some SQL is read.

Actually, most values can directly go from JDBC into a ValueColumn, no conversion needed. However, in the cases where conversion needs to be done (like Date -> Kotlin Date, Struct -> DataRow), this conversion can be provided in the TypeInformation class that's returned, either in the form of a value-preprocessor, or, if you need all values to be gathered first, in the form of a column-postprocessor. Still, they are strictly tied to the TableColumnMetadata.

Does that make sense? :) I've tried several names for this TypeInformation class already (DbTypeInformation, DbColumnType...), but none really fit, or they became too long. But I'd like to convey that it does not have to convert. If you do want to convert, you can create a TypeInformation instance with typeInformationWithPreprocessingFor(...).

We could also split up providing the type information and actually converting them. However, I liked the idea of providing the type information and converting in one place, because it forces you to write a JDBC type, the conversion, and the schema that's created at once, keeping the logic together.

If you'd ignore what I made thus far, how would you make an API like this where you can define how JDBC types are mapped (and potentially converted) to column types?
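As a rough illustration of the "type information plus optional conversion in one place" idea, here is a minimal, self-contained sketch. TypeInformationSketch is a stand-in name; only typeInformationWithPreprocessingFor(...) is named in the discussion, and the actual signatures in the PR may differ.

import java.sql.Timestamp
import kotlin.reflect.KType
import kotlin.reflect.typeOf

// Stand-in for the TypeInformation class discussed above; real signatures may differ.
class TypeInformationSketch<J, D>(
    val jdbcType: KType,                  // what rs.getObject() is expected to return
    val dataFrameType: KType,             // what ends up in the resulting DataColumn
    val preprocess: ((J?) -> D?)? = null, // optional per-value conversion
)

// TIMESTAMP columns: values arrive as java.sql.Timestamp and are converted per value.
val timestampInfo = TypeInformationSketch<Timestamp, java.time.LocalDateTime>(
    jdbcType = typeOf<Timestamp?>(),
    dataFrameType = typeOf<java.time.LocalDateTime?>(),
    preprocess = { it?.toLocalDateTime() },
)

// INTEGER columns: no conversion needed, the type information alone is enough.
val intInfo = TypeInformationSketch<Int, Int>(
    jdbcType = typeOf<Int?>(),
    dataFrameType = typeOf<Int?>(),
)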

Collaborator

@koperagen koperagen Dec 12, 2025

Ok, if I'm thinking from scratch: it looks like you're referring to ResultSet (a kind of Row) -> value extraction, which apparently can also be database-specific?
So people might need to specify what class their JDBC driver will return to our "generic value extractor" of some sort. We're not converting anything at this step, just calling rs.getObject, and we need to know what to expect.
If my description is accurate, can we call it JdbcDriverSpec? Can it be a separate entity from "converters"?
ExpectedTypes?

Collaborator Author

@Jolanrensen Jolanrensen Dec 12, 2025

Yes, it can definitely be a separate step.
However, the converters will still need access to the original TableColumnMetadata to function properly, and they might do some duplicate logic:

So, we have a couple cases:

  • The first case is the easiest: We just want to read a column from JDBC, no conversion necessary. So the Db implementation could simply give a KType representing the type of the values coming from that column, and a ValueColumn<ThatType> will be created.
  • The second case is a bit more complicated: We want to read a column of values from JDBC, but they need to be converted, like Dates. In this case, the implementation could give a KType again representing the type of the values coming from that column. A preprocessor could take those values and KType and convert them (and return a new KType) before a DataColumn<NewType> is created. The preprocessor might need the original TableColumnMetadata though, as it might have information that cannot be represented by just the KType.
  • The next case is comparable to the previous one, but now we want to post-process the column, like in a column of arrays where we want to infer their common type. So the implementation will give a KType of java.sql.Array, it may be preprocessed, a ValueColumn is created, and then the postprocessor can do its magic, converting the ValueColumn with values to any sort of column it likes. It might need the original TableColumnMetadata and KType, and it returns the new column and ColumnSchema.
  • The final case is where a structured column needs to be created: We read a STRUCT column from JDBC; the first step returns the KType java.sql.Struct. The preprocessor can convert each value to a DataRow<*> based on the KType and TableColumnMetadata, so a DataColumn<DataRow<*>>, aka a ColumnGroup, can be created. Though, we still need to report the new ColumnSchema.Group somewhere so we can do TableColumnMetadata -> ColumnSchema without reading actual data. Maybe in the postprocessor? The preprocessor should do nothing, making a ValueColumn<Struct>; then the post-processor should turn it into a ColumnGroup, returning the resulting ColumnSchema.Group as well.

In the final case, we can see the TableColumnMetadata needing to be parsed up to 3 separate times: in the first step to generate a KType we're not even using (as it's just typeOf<AnyRow>()), in the second step to convert each value to a correct DataRow, and in the final step to produce the right ColumnSchema.Group. This can be quite tedious, as TableColumnMetadata can contain recursive types as well... It's also hard to track the logic of a single type across multiple separate functions...
But maybe we can find a hybrid between these two approaches? Allowing logic to be grouped, but functions to be separate?
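For reference, a self-contained sketch of the split approach being weighed here, with placeholder names rather than the PR's API; it shows where the column metadata would have to be passed in again at each step.

import kotlin.reflect.KType

// Placeholder types standing in for TableColumnMetadata / ColumnSchema / DataColumn.
data class ColumnMetaSketch(val name: String, val jdbcType: Int, val sqlTypeName: String)
data class ColumnSchemaSketch(val type: KType)
data class ColumnSketch(val name: String, val values: List<Any?>, val type: KType)

interface DbColumnHandlingSketch {
    // step 1: what will rs.getObject() return for this column?
    fun expectedJdbcType(column: ColumnMetaSketch): KType

    // step 2 (optional): per-value conversion; may need the metadata again
    fun preprocess(value: Any?, column: ColumnMetaSketch): Any? = value

    // step 3 (optional): whole-column conversion (arrays, structs, ...) plus the resulting schema;
    // may need the metadata a third time
    fun postprocess(column: ColumnSketch, meta: ColumnMetaSketch): Pair<ColumnSketch, ColumnSchemaSketch> =
        column to ColumnSchemaSketch(column.type)
}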

Collaborator

@koperagen koperagen Dec 12, 2025

The final case is where a structured column needs to be created: We read a STRUCT column from JDBC; the first step returns the KType java.sql.Struct. The preprocessor can convert each value to a DataRow<*> based on the KType and TableColumnMetadata, so a DataColumn<DataRow<*>>, aka a ColumnGroup, can be created. Though, we still need to report the new ColumnSchema.Group somewhere so we can do TableColumnMetadata -> ColumnSchema without reading actual data. Maybe in the postprocessor

As a side note, in the recent Map.toDataFrame PR I noticed that map.toDataRow, which uses type inference, hits a very obvious bottleneck: reflective type inference is called for each row, for each value individually. In that case, column-based creation of the ColumnGroup reduces the time from 17s to 1.5s!
I think the efficient transformation of a ResultSet with Struct values should maybe be done in the ColumnPostprocessor? Like DataColumn -> ColumnGroup.
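A sketch of that column-based approach, using the existing DataColumn factory functions; the attribute names are passed in directly here, whereas the real implementation would presumably take them from TableColumnMetadata.

import java.sql.Struct
import kotlin.reflect.typeOf
import org.jetbrains.kotlinx.dataframe.DataColumn
import org.jetbrains.kotlinx.dataframe.api.Infer
import org.jetbrains.kotlinx.dataframe.api.dataFrameOf
import org.jetbrains.kotlinx.dataframe.columns.ColumnGroup

// Gather all Struct values first, then build one column per attribute and wrap them in a
// ColumnGroup, instead of creating one intermediate DataRow/DataFrame per row.
fun structsToColumnGroup(
    name: String,
    structs: List<Struct?>,
    attributeNames: List<String>, // would come from TableColumnMetadata in practice
): ColumnGroup<*> {
    val attributeColumns = attributeNames.mapIndexed { i, attrName ->
        val values = structs.map { it?.attributes?.get(i) }
        // type inference runs once per attribute column rather than once per row
        DataColumn.createValueColumn(attrName, values, typeOf<Any?>(), infer = Infer.Type)
    }
    return DataColumn.createColumnGroup(name, dataFrameOf(attributeColumns))
}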

Collaborator

However, the converters will still need access to the original TableColumnMetadata to function properly, and they might do some duplicate logic:

Makes sense

Collaborator Author

@Jolanrensen Jolanrensen Dec 12, 2025

:o

Yes, it makes sense to postpone it to the post-processing step indeed! A DataRow<*> is a DataFrame with one row after all, so forming a DF with 1000 rows will create 1000 intermediate DFs with type inference. We'd need #1541 to be able to do this efficiently.

But this just shows it's good to have both pre- and post-processing :) we need both.

When a FrameColumn is created, it does make sense to create DataFrames in the preprocessing step

@Jolanrensen Jolanrensen force-pushed the DbType2 branch 2 times, most recently from 6b39cc6 to 9c9a699 Compare December 15, 2025 13:27
val columnKTypes = buildColumnKTypes(tableColumns, dbType)
val columnData = readAllRowsFromResultSet(rs, tableColumns, columnKTypes, dbType, limit)
val dataFrame = buildDataFrameFromColumnData(columnData, tableColumns, columnKTypes, dbType, inferNullability)
val expectedJdbcTypes = getExpectedJdbcTypes(
Collaborator

ExpectedJdbcTypes could be a more complex structure, but it's better to keep the column order by index for debugging, not name-based. It could be a small object with fields: index, name, KType, for example.
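For illustration, such a per-column entry might look like this; the name and fields are hypothetical, not part of the PR.

import kotlin.reflect.KType

// Hypothetical per-column descriptor as suggested above.
data class ExpectedJdbcColumn(
    val index: Int,   // 1-based JDBC column index, kept for easier debugging
    val name: String, // column name as reported by the driver
    val type: KType,  // what ResultSet.getObject(index) is expected to return
)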

Collaborator Author

That's certainly possible! Though that would look a bit more like AdvancedDbType, which, for each column, requires you to provide an AnyJdbcToDataFrameConverter containing all the information needed to read and convert that column. Maybe the concepts could be merged.

dbType = dbType,
tableColumns = tableColumns,
)
val preprocessedValueTypes = getPreprocessedValueTypes(
Collaborator

Describe the processes somewhere:

Expected -> Preprocessed (what's the difference, why do we need this step, and what does it give us?)

): DataFrameSchema {
val determinedDbType = dbType ?: extractDBTypeFromConnection(connection)

// TODO don't need to read 1 row, take it just from TableColumnMetadatas
Collaborator

It's a very safe and cheap way for any database. I moved away from taking the info from TableColumnMetadata; it also led to less error-prone code in our codebase and made it more flexible.

Collaborator Author

Except for empty databases that just have a schema and no data. When building just the schema, we shouldn't have to look at the actual database contents. It's up to the DbType implementor to provide a watertight getExpectedJdbcType(), getPreprocessedValueType(), and getTargetColumnSchema() from just TableColumnMetadata. So if we manage to construct TableColumnMetadatas from connection, tableName and dbType, we can simply call those functions to get a schema without accessing any data.

Collaborator Author

...and for databases you may have no query access to
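A small sketch of that metadata-only path; ColumnMeta and the schema-as-KType simplification are stand-ins so the snippet stays self-contained, while the idea mirrors the getTargetColumnSchema function named above.

import kotlin.reflect.KType

// Minimal sketch: the schema is derived purely from column metadata, no rows are read.
// ColumnMeta stands in for TableColumnMetadata; mapColumnSchema stands in for the
// DbType's getTargetColumnSchema.
data class ColumnMeta(val name: String, val jdbcType: Int)

fun schemaFromMetadataOnly(
    columns: List<ColumnMeta>,
    mapColumnSchema: (ColumnMeta) -> KType,
): Map<String, KType> =
    columns.associate { it.name to mapColumnSchema(it) }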

get() = "com.mysql.jdbc.Driver"

override fun convertSqlTypeToColumnSchemaValue(tableColumnMetadata: TableColumnMetadata): ColumnSchema? {
override fun getExpectedJdbcType(tableColumnMetadata: TableColumnMetadata): KType {
Collaborator

I want to change this to keep the naming à la convertJdbcTypeToKType.

Collaborator Author

There's no converting happening at this stage yet. As a user, you're supposed to provide the KTypes of the values coming directly from JDBC.
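For example, in the same spirit as the getExpectedJdbcType override shown in the diff above, a mapping like the following only states what rs.getObject() is expected to hand back; the concrete TIMESTAMP/VARCHAR choices are illustrative and can differ per driver.

import java.sql.Timestamp
import java.sql.Types
import kotlin.reflect.KType
import kotlin.reflect.typeOf

// Sketch: declare the expected raw JDBC value type per column; nothing is converted here.
fun expectedJdbcTypeFor(jdbcType: Int): KType =
    when (jdbcType) {
        Types.TIMESTAMP -> typeOf<Timestamp?>() // assuming the driver returns java.sql.Timestamp
        Types.VARCHAR -> typeOf<String?>()
        else -> typeOf<Any?>()
    }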

tables.getString("table_cat"),
)

override fun convertSqlTypeToKType(tableColumnMetadata: TableColumnMetadata): KType? {
Collaborator

Great refactoring

columnIndex: Int,
tableColumnMetadata: TableColumnMetadata,
expectedJdbcType: KType,
): J? =
Collaborator

J?

Collaborator Author

J-DBC. It's clearer in the context of AdvancedDbType:

  • J-DBC type
  • D-ataFrame type
  • P-ost processed type
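Roughly how those three type parameters could line up; the interface name and member signatures below are guesses for illustration, not the PR's actual AnyJdbcToDataFrameConverter.

import java.sql.ResultSet
import kotlin.reflect.KType

// J: raw value from the JDBC driver, D: value stored in the DataFrame column,
// P: element type after whole-column post-processing.
interface JdbcToDataFrameConverterSketch<J, D, P> {
    fun readValue(rs: ResultSet, columnIndex: Int): J?
    fun preprocess(value: J?): D?
    fun postprocess(values: List<D?>, type: KType): List<P?>
}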

expectedPreprocessedValueType: KType,
): D? =
when (tableColumnMetadata.jdbcType) {
Types.TIMESTAMP if tableColumnMetadata.javaClassName == "java.time.LocalDateTime" ->
Collaborator

Why are Time types processed at that stage?

Types.ARRAY to Array::class,
Types.BLOB to ByteArray::class,
Types.CLOB to Clob::class,
Types.REF to Ref::class,
Collaborator

Is it kept as is or changed somewhere?

Collaborator Author

As is. These are the expected types coming from JDBC, but I use KTypes now instead of ::class because it's more efficient and flexible.
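For illustration, the same entries expressed with KTypes might look like this; the exact entries and nullability in the PR may differ.

import java.sql.Clob
import java.sql.Ref
import java.sql.Types
import kotlin.reflect.KType
import kotlin.reflect.typeOf

// Expected JDBC value types per java.sql.Types constant, now as KTypes instead of KClasses.
val expectedJdbcTypes: Map<Int, KType> = mapOf(
    Types.ARRAY to typeOf<java.sql.Array?>(),
    Types.BLOB to typeOf<ByteArray?>(),
    Types.CLOB to typeOf<Clob?>(),
    Types.REF to typeOf<Ref?>(),
)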

…pe already handled names. Simplified column name clashes by using ColumnNameGenerator like the rest of DataFrame.