Manifests table scan should return iceberg schema rather than arrow schema #868

Closed
liurenjie1024 opened this issue Jan 3, 2025 · 6 comments · Fixed by #871
Labels
bug Something isn't working

Comments

@liurenjie1024
Contributor

I think we should return the Iceberg schema here, and users can easily convert it to an Arrow schema.

Originally posted by @liurenjie1024 in #861 (comment)
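A minimal sketch of the flow being suggested, assuming iceberg-rust's `schema_to_arrow_schema` helper has this shape (an assumption; check the crate's actual API):

```rust
// Sketch: callers that need Arrow convert the Iceberg schema themselves
// at the boundary. Assumes iceberg-rust exposes a `schema_to_arrow_schema`
// conversion helper with this signature.
use iceberg::arrow::schema_to_arrow_schema;
use iceberg::spec::Schema;

fn manifests_schema_as_arrow(iceberg_schema: &Schema) -> iceberg::Result<arrow_schema::Schema> {
    // Engines that need Arrow (e.g. a DataFusion integration) do the
    // conversion themselves; the iceberg library stays Arrow-agnostic.
    schema_to_arrow_schema(iceberg_schema)
}
```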

liurenjie1024 added the bug label on Jan 3, 2025
@Xuanwo
Member

Xuanwo commented Jan 3, 2025

Makes sense to me.

@flaneur2020
Contributor

👍 let me fix this

@rshkv
Contributor

rshkv commented Jan 8, 2025

Also happy to help with this but not sure why / if we want this.

Looking at the DataFusion integration, it seems useful for the metadata TableProvider to be able to provide the Arrow schema.

If we need to expose the Arrow schema for (1) the DataFusion integration and (2) building RecordBatches internally, it might be preferable for the metadata tables to have schema() return Arrow. If necessary, clients can still convert to Iceberg, unlike the other way round, where we know a conversion would be necessary.
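A rough sketch of the constraint on the DataFusion side: `TableProvider` requires an Arrow `SchemaRef`, so a metadata table exposed through it has to produce one somewhere. `MetadataTableProvider` is a hypothetical wrapper, and trait signatures vary across DataFusion versions:

```rust
use std::any::Any;
use std::sync::Arc;

use async_trait::async_trait;
use datafusion::arrow::datatypes::SchemaRef;
use datafusion::catalog::Session;
use datafusion::datasource::TableProvider;
use datafusion::error::Result;
use datafusion::logical_expr::{Expr, TableType};
use datafusion::physical_plan::ExecutionPlan;

#[derive(Debug)]
struct MetadataTableProvider {
    // Converted once from the Iceberg schema, however schema() ends up
    // being exposed on the iceberg-rust side.
    arrow_schema: SchemaRef,
}

#[async_trait]
impl TableProvider for MetadataTableProvider {
    fn as_any(&self) -> &dyn Any {
        self
    }

    fn schema(&self) -> SchemaRef {
        // DataFusion requires an Arrow schema here.
        self.arrow_schema.clone()
    }

    fn table_type(&self) -> TableType {
        TableType::Base
    }

    async fn scan(
        &self,
        _state: &dyn Session,
        _projection: Option<&Vec<usize>>,
        _filters: &[Expr],
        _limit: Option<usize>,
    ) -> Result<Arc<dyn ExecutionPlan>> {
        unimplemented!("out of scope for this sketch")
    }
}
```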

@liurenjie1024
Contributor Author

The reason I suggest returning the Iceberg schema is that the metadata table is a concept in the iceberg library, not only in the DataFusion integration. The difference is that the iceberg library will be used by more engines, like Datafuse, Polars, etc. The record batch stream is provided for convenience; IMO we should provide a similar plan-files API so that other engines can consume it, but since Arrow is the de facto standard for in-memory data exchange, I'm fine with keeping the scan API.
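A hedged sketch of the API shape this argues for: the metadata table describes itself in Iceberg terms, while the convenience scan stays in Arrow. The trait and names here are illustrative, not the actual iceberg-rust API:

```rust
// Illustrative only: engine-agnostic schema plus a convenience Arrow scan.
use arrow_array::RecordBatch;
use async_trait::async_trait;
use futures::stream::BoxStream;
use iceberg::spec::SchemaRef;
use iceberg::Result;

#[async_trait]
trait MetadataTable {
    /// Engine-agnostic description of the table's columns, in Iceberg terms.
    fn schema(&self) -> SchemaRef;

    /// Convenience scan; any Arrow-consuming engine can use this directly.
    async fn scan(&self) -> Result<BoxStream<'static, Result<RecordBatch>>>;
}
```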

@rshkv
Contributor

rshkv commented Feb 2, 2025

The Iceberg schema requirement really complicates things because of how we handle Iceberg field ids in Arrow types. I'm trying to explain in #863 (comment). Basically, it's difficult to build Arrow arrays with nested types if you need the nested types to have the right ids in field metadata. (Certainly possible; it just seems complicated to me because I'm not seeing the simple solution, and I'm new to arrow-rs and iceberg-rust.)
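To illustrate the plumbing being described: iceberg-rust carries Iceberg field ids in Arrow field metadata (the "PARQUET:field_id" key), so every level of a nested type must be built with matching metadata. A sketch; the field names and ids are made up:

```rust
use std::collections::HashMap;

use arrow_schema::{DataType, Field, Fields};

// Attach an Iceberg field id to an Arrow field via metadata, using the
// "PARQUET:field_id" key iceberg-rust uses for this purpose.
fn field_with_id(name: &str, data_type: DataType, id: i32) -> Field {
    Field::new(name, data_type, true).with_metadata(HashMap::from([(
        "PARQUET:field_id".to_string(),
        id.to_string(),
    )]))
}

fn nested_field_example() -> Field {
    // The child needs its own id, not just the outer struct; for deeply
    // nested struct/list/map types, each level must be threaded like this.
    let child = field_with_id("summary", DataType::Utf8, 11);
    field_with_id("partition", DataType::Struct(Fields::from(vec![child])), 10)
}
```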

I do get your argument of Iceberg wanting to be agnostic of Arrow. But then the engines that Iceberg Rust is going to integrate with any time soon do rely on Arrow (DataFusion, Polars, PyIceberg-on-Rust). So they're well-served if we return Arrow tables, regardless of what schema() type we return. I think PyIceberg made a similar choice on their metadata tables (e.g.).

Maybe we can avoid the discussion of the schema() type by making it non-public or removing it. It can just be an inner helper method to aid constructing Arrow batches. So for now we just have an Arrow-based scan API.

@liurenjie1024, would appreciate your thoughts on this.

@Fokko
Contributor

Fokko commented Feb 3, 2025

Similar to @liurenjie1024, I'm in favor of exposing the schema as Iceberg instead of Arrow. In general, it's not good to expose third-party libraries in your public API: you don't control them, and what happens if something else comes along?
