
fix: list(None) is logically equivalent to list("/") #19

Merged: 1 commit merged into datafusion-contrib:main from fix-root-list on May 13, 2025

Conversation

@rtyler (Contributor) commented May 13, 2025

Sending minidfs a list operation with an empty string does not appear to be valid; I've added a regression test here too.

    ---- test::test_object_store stdout ----

    thread 'test::test_object_store' panicked at tests/test_object_store.rs:86:37:
    called `Result::unwrap()` on an `Err` value: Generic { store: "HdfsObjectStore", source: RPCError("java.lang.AssertionError", "Absolute path required, but got ''\n\tat org.apache.hadoop.hdfs.server.namenode.INode.checkAbsolutePath(INode.java:849)\n\tat org.apache.hadoop.hdfs.server.namenode.INode.getPathComponents(INode.java:824)\n\tat org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:726)\n\tat org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getListingInt(FSDirStatAndListingOp.java:57)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4200)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:1195)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:762)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:621)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:589)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:573)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1227)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1246)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1169)\n\tat java.base/java.security.AccessController.doPrivileged(Native Method)\n\tat java.base/javax.security.auth.Subject.doAs(Subject.java:423)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1953)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:3198)\n") }
    note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
    Dropping and killing minidfs

Signed-off-by: R. Tyler Croy <[email protected]>
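
Conceptually, the fix boils down to never sending the NameNode an empty path: a `list` call with no prefix should be treated as a listing of the HDFS root `/`, which is what the PR title means by `list(None)` being logically equivalent to `list("/")`. The sketch below illustrates that mapping; it is a minimal sketch rather than the crate's actual code, and `prefix_to_absolute_path` is a hypothetical helper used only for illustration.

    // A minimal sketch of the idea behind the fix (not the crate's actual code):
    // an ObjectStore `list` prefix of `None` must map to the HDFS root "/",
    // because the NameNode rejects an empty path ("Absolute path required, but got ''").
    // `prefix_to_absolute_path` is a hypothetical helper, for illustration only.
    fn prefix_to_absolute_path(prefix: Option<&str>) -> String {
        match prefix {
            // No prefix means "list everything", i.e. the filesystem root.
            None => "/".to_string(),
            // Any other prefix is anchored at the root so the path stays absolute.
            Some(p) => format!("/{}", p.trim_start_matches('/')),
        }
    }

    fn main() {
        assert_eq!(prefix_to_absolute_path(None), "/");
        assert_eq!(prefix_to_absolute_path(Some("some/dir")), "/some/dir");
        assert_eq!(prefix_to_absolute_path(Some("")), "/");
    }

With a mapping like this, the regression test's `list` of no prefix reaches the NameNode as a listing of `/` instead of the empty string that triggered the `java.lang.AssertionError` above.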
@rtyler rtyler marked this pull request as ready for review May 13, 2025 14:11
@Kimahriman Kimahriman merged commit c42e244 into datafusion-contrib:main May 13, 2025
3 checks passed
@Kimahriman (Collaborator)

Just let me know if/when you need a release cut

@rtyler (Contributor, Author) commented May 13, 2025

@Kimahriman whenever you have time is fine; this behavior causes trouble in delta-kernel-rs. I discovered our integration tests weren't actually running in CI for HDFS 🤦

@Kimahriman (Collaborator)

@Kimahriman whenever you have time is fine; this behavior causes trouble in delta-kernel-rs. I discovered our integration tests weren't actually running in CI for HDFS 🤦

Hah, awesome. If this was the only issue you hit, I can make the release.

Kimahriman pushed a commit that referenced this pull request May 13, 2025
@rtyler (Contributor, Author) commented May 14, 2025

If you wouldn't mind cutting that release, then I can move forward with enabling the HDFS integration tests for kernel.

@rtyler rtyler deleted the fix-root-list branch May 14, 2025 12:53
@Kimahriman (Collaborator)

If you wouldn't mind cutting that release, then I can move forward with enabling the HDFS integration tests for kernel.

Done
