feat(client): No need to re-register servers that have been excluded #7129
2 errors, 2 skipped, 1,171 passed in 6h 35m 42s
3,011 files   3,011 suites   6h 35m 42s ⏱️
1,175 tests   1,171 ✅   2 💤   0 ❌   2 🔥
14,899 runs   14,853 ✅   30 💤   0 ❌   16 🔥
Results for commit 7c045b5.
Annotations
Check failure on line 0 in org.apache.uniffle.test.PartitionBlockDataReassignBasicTest
github-actions / Test Results
All 8 runs with error: resultCompareTest (org.apache.uniffle.test.PartitionBlockDataReassignBasicTest)
artifacts/integration-reports-spark3.0/integration-test/spark3/target/surefire-reports/TEST-org.apache.uniffle.test.PartitionBlockDataReassignBasicTest.xml [took 5s]
artifacts/integration-reports-spark3.2.0/integration-test/spark3/target/surefire-reports/TEST-org.apache.uniffle.test.PartitionBlockDataReassignBasicTest.xml [took 4s]
artifacts/integration-reports-spark3.2/integration-test/spark3/target/surefire-reports/TEST-org.apache.uniffle.test.PartitionBlockDataReassignBasicTest.xml [took 4s]
artifacts/integration-reports-spark3.3/integration-test/spark3/target/surefire-reports/TEST-org.apache.uniffle.test.PartitionBlockDataReassignBasicTest.xml [took 4s]
artifacts/integration-reports-spark3.4/integration-test/spark3/target/surefire-reports/TEST-org.apache.uniffle.test.PartitionBlockDataReassignBasicTest.xml [took 4s]
artifacts/integration-reports-spark3.5-scala2.13/integration-test/spark3/target/surefire-reports/TEST-org.apache.uniffle.test.PartitionBlockDataReassignBasicTest.xml [took 4s]
artifacts/integration-reports-spark3.5/integration-test/spark3/target/surefire-reports/TEST-org.apache.uniffle.test.PartitionBlockDataReassignBasicTest.xml [took 4s]
artifacts/integration-reports-spark3/integration-test/spark3/target/surefire-reports/TEST-org.apache.uniffle.test.PartitionBlockDataReassignBasicTest.xml [took 5s]
Raw output
org.apache.spark.SparkException:
Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0) (fv-az2033-434.wajbf2yebbuenirf1pz52wx3db.cx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssSendFailedException: Errors on resending the blocks data to the remote shuffle-server.
at org.apache.spark.shuffle.writer.RssShuffleWriter.collectFailedBlocksToResend(RssShuffleWriter.java:622)
at org.apache.spark.shuffle.writer.RssShuffleWriter.checkDataIfAnyFailure(RssShuffleWriter.java:524)
at org.apache.spark.shuffle.writer.RssShuffleWriter.checkBlockSendResult(RssShuffleWriter.java:486)
at org.apache.spark.shuffle.writer.RssShuffleWriter.writeImpl(RssShuffleWriter.java:341)
at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:292)
at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2454)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2403)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2402)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2402)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1160)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1160)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1160)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2642)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2584)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2573)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
Caused by: org.apache.uniffle.common.exception.RssSendFailedException: Errors on resending the blocks data to the remote shuffle-server.
at org.apache.spark.shuffle.writer.RssShuffleWriter.collectFailedBlocksToResend(RssShuffleWriter.java:622)
at org.apache.spark.shuffle.writer.RssShuffleWriter.checkDataIfAnyFailure(RssShuffleWriter.java:524)
at org.apache.spark.shuffle.writer.RssShuffleWriter.checkBlockSendResult(RssShuffleWriter.java:486)
at org.apache.spark.shuffle.writer.RssShuffleWriter.writeImpl(RssShuffleWriter.java:341)
at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:292)
at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
[2025-03-14 07:57:51.910] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[670090782], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:51.917] [main] [INFO] MiniDFSCluster.<init> - starting cluster: numNameNodes=1, numDataNodes=1
Formatting using clusterid: testClusterID
[2025-03-14 07:57:51.918] [main] [INFO] FSEditLog.newInstance - Edit logging is async:true
[2025-03-14 07:57:51.918] [main] [INFO] FSNamesystem.<init> - KeyProvider: null
[2025-03-14 07:57:51.918] [main] [INFO] FSNamesystem.<init> - fsLock is fair: true
[2025-03-14 07:57:51.918] [main] [INFO] FSNamesystem.<init> - Detailed lock hold time metrics enabled: false
[2025-03-14 07:57:51.918] [main] [INFO] DatanodeManager.<init> - dfs.block.invalidate.limit=1000
[2025-03-14 07:57:51.918] [main] [INFO] DatanodeManager.<init> - dfs.namenode.datanode.registration.ip-hostname-check=true
[2025-03-14 07:57:51.919] [main] [INFO] BlockManager.printBlockDeletionTime - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
[2025-03-14 07:57:51.919] [main] [INFO] BlockManager.printBlockDeletionTime - The block deletion will start around 2025 Mar 14 07:57:51
[2025-03-14 07:57:51.919] [main] [INFO] GSet.computeCapacity - Computing capacity for map BlocksMap
[2025-03-14 07:57:51.919] [main] [INFO] GSet.computeCapacity - VM type = 64-bit
[2025-03-14 07:57:51.919] [main] [INFO] GSet.computeCapacity - 2.0% max memory 4.4 GB = 91.0 MB
[2025-03-14 07:57:51.919] [main] [INFO] GSet.computeCapacity - capacity = 2^24 = 16777216 entries
[2025-03-14 07:57:51.921] [main] [INFO] BlockManager.createBlockTokenSecretManager - dfs.block.access.token.enable=false
[2025-03-14 07:57:51.921] [main] [INFO] BlockManager.<init> - defaultReplication = 1
[2025-03-14 07:57:51.921] [main] [INFO] BlockManager.<init> - maxReplication = 512
[2025-03-14 07:57:51.921] [main] [INFO] BlockManager.<init> - minReplication = 1
[2025-03-14 07:57:51.921] [main] [INFO] BlockManager.<init> - maxReplicationStreams = 2
[2025-03-14 07:57:51.921] [main] [INFO] BlockManager.<init> - replicationRecheckInterval = 3000
[2025-03-14 07:57:51.921] [main] [INFO] BlockManager.<init> - encryptDataTransfer = false
[2025-03-14 07:57:51.921] [main] [INFO] BlockManager.<init> - maxNumBlocksToLog = 1000
[2025-03-14 07:57:51.921] [main] [INFO] FSNamesystem.<init> - fsOwner = runner (auth:SIMPLE)
[2025-03-14 07:57:51.921] [main] [INFO] FSNamesystem.<init> - supergroup = supergroup
[2025-03-14 07:57:51.921] [main] [INFO] FSNamesystem.<init> - isPermissionEnabled = true
[2025-03-14 07:57:51.922] [main] [INFO] FSNamesystem.<init> - HA Enabled: false
[2025-03-14 07:57:51.922] [main] [INFO] FSNamesystem.<init> - Append Enabled: true
[2025-03-14 07:57:51.922] [main] [INFO] GSet.computeCapacity - Computing capacity for map INodeMap
[2025-03-14 07:57:51.922] [main] [INFO] GSet.computeCapacity - VM type = 64-bit
[2025-03-14 07:57:51.922] [main] [INFO] GSet.computeCapacity - 1.0% max memory 4.4 GB = 45.5 MB
[2025-03-14 07:57:51.922] [main] [INFO] GSet.computeCapacity - capacity = 2^23 = 8388608 entries
[2025-03-14 07:57:51.924] [main] [INFO] FSDirectory.<init> - ACLs enabled? false
[2025-03-14 07:57:51.924] [main] [INFO] FSDirectory.<init> - XAttrs enabled? true
[2025-03-14 07:57:51.924] [main] [INFO] NameNode.<init> - Caching file names occurring more than 10 times
[2025-03-14 07:57:51.924] [main] [INFO] GSet.computeCapacity - Computing capacity for map cachedBlocks
[2025-03-14 07:57:51.924] [main] [INFO] GSet.computeCapacity - VM type = 64-bit
[2025-03-14 07:57:51.925] [main] [INFO] GSet.computeCapacity - 0.25% max memory 4.4 GB = 11.4 MB
[2025-03-14 07:57:51.925] [main] [INFO] GSet.computeCapacity - capacity = 2^21 = 2097152 entries
[2025-03-14 07:57:51.925] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
[2025-03-14 07:57:51.925] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.min.datanodes = 0
[2025-03-14 07:57:51.925] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.extension = 0
[2025-03-14 07:57:51.925] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.window.num.buckets = 10
[2025-03-14 07:57:51.926] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.num.users = 10
[2025-03-14 07:57:51.926] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
[2025-03-14 07:57:51.926] [main] [INFO] FSNamesystem.initRetryCache - Retry cache on namenode is enabled
[2025-03-14 07:57:51.926] [main] [INFO] FSNamesystem.initRetryCache - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
[2025-03-14 07:57:51.926] [main] [INFO] GSet.computeCapacity - Computing capacity for map NameNodeRetryCache
[2025-03-14 07:57:51.926] [main] [INFO] GSet.computeCapacity - VM type = 64-bit
[2025-03-14 07:57:51.926] [main] [INFO] GSet.computeCapacity - 0.029999999329447746% max memory 4.4 GB = 1.4 MB
[2025-03-14 07:57:51.926] [main] [INFO] GSet.computeCapacity - capacity = 2^17 = 131072 entries
[2025-03-14 07:57:51.927] [main] [INFO] FSImage.format - Allocated new BlockPoolId: BP-1947832235-127.0.0.1-1741939071927
[2025-03-14 07:57:51.929] [main] [INFO] Storage.format - Storage directory /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit8840761810259172772/name1 has been successfully formatted.
[2025-03-14 07:57:51.931] [main] [INFO] Storage.format - Storage directory /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit8840761810259172772/name2 has been successfully formatted.
[2025-03-14 07:57:51.931] [FSImageSaver for /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit8840761810259172772/name1 of type IMAGE_AND_EDITS] [INFO] FSImageFormatProtobuf.save - Saving image file /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit8840761810259172772/name1/current/fsimage.ckpt_0000000000000000000 using no compression
[2025-03-14 07:57:51.931] [FSImageSaver for /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit8840761810259172772/name2 of type IMAGE_AND_EDITS] [INFO] FSImageFormatProtobuf.save - Saving image file /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit8840761810259172772/name2/current/fsimage.ckpt_0000000000000000000 using no compression
[2025-03-14 07:57:51.970] [FSImageSaver for /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit8840761810259172772/name1 of type IMAGE_AND_EDITS] [INFO] FSImageFormatProtobuf.save - Image file /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit8840761810259172772/name1/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds.
[2025-03-14 07:57:51.972] [FSImageSaver for /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit8840761810259172772/name2 of type IMAGE_AND_EDITS] [INFO] FSImageFormatProtobuf.save - Image file /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit8840761810259172772/name2/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds.
[2025-03-14 07:57:51.974] [main] [INFO] NNStorageRetentionManager.getImageTxIdToRetain - Going to retain 1 images with txid >= 0
[2025-03-14 07:57:51.975] [main] [INFO] NameNode.createNameNode - createNameNode []
[2025-03-14 07:57:51.976] [main] [WARN] MetricsConfig.loadFirst - Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
[2025-03-14 07:57:51.977] [main] [INFO] MetricsSystemImpl.startTimer - Scheduled Metric snapshot period at 10 second(s).
[2025-03-14 07:57:51.977] [main] [INFO] MetricsSystemImpl.start - NameNode metrics system started
[2025-03-14 07:57:51.978] [main] [INFO] NameNode.setClientNamenodeAddress - fs.defaultFS is hdfs://127.0.0.1:0
[2025-03-14 07:57:51.981] [main] [INFO] DFSUtil.httpServerTemplateForNNAndJN - Starting Web-server for hdfs at: http://localhost:0
[2025-03-14 07:57:51.981] [org.apache.hadoop.util.JvmPauseMonitor$Monitor@739ddf76] [INFO] JvmPauseMonitor.run - Starting JVM pause monitor
[2025-03-14 07:57:51.982] [main] [INFO] AuthenticationFilter.constructSecretProvider - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
[2025-03-14 07:57:51.982] [main] [WARN] HttpRequestLog.getRequestLog - Jetty request log can only be enabled using Log4j
[2025-03-14 07:57:51.983] [main] [INFO] HttpServer2.addGlobalFilter - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
[2025-03-14 07:57:51.983] [main] [INFO] HttpServer2.addFilter - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
[2025-03-14 07:57:51.983] [main] [INFO] HttpServer2.addFilter - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
[2025-03-14 07:57:51.984] [main] [INFO] HttpServer2.initWebHdfs - Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
[2025-03-14 07:57:51.984] [main] [INFO] HttpServer2.addJerseyResourcePackage - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
[2025-03-14 07:57:51.985] [main] [INFO] HttpServer2.openListeners - Jetty bound to port 44787
[2025-03-14 07:57:51.985] [main] [INFO] log.info - jetty-6.1.26
[2025-03-14 07:57:51.989] [main] [INFO] log.info - Extract jar:file:/home/runner/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.8.5/hadoop-hdfs-2.8.5-tests.jar!/webapps/hdfs to /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/Jetty_localhost_44787_hdfs____.z0uqg0/webapp
[2025-03-14 07:57:52.057] [main] [INFO] log.info - Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44787
[2025-03-14 07:57:52.058] [main] [INFO] FSEditLog.newInstance - Edit logging is async:true
[2025-03-14 07:57:52.058] [main] [INFO] FSNamesystem.<init> - KeyProvider: null
[2025-03-14 07:57:52.058] [main] [INFO] FSNamesystem.<init> - fsLock is fair: true
[2025-03-14 07:57:52.058] [main] [INFO] FSNamesystem.<init> - Detailed lock hold time metrics enabled: false
[2025-03-14 07:57:52.059] [main] [INFO] DatanodeManager.<init> - dfs.block.invalidate.limit=1000
[2025-03-14 07:57:52.059] [main] [INFO] DatanodeManager.<init> - dfs.namenode.datanode.registration.ip-hostname-check=true
[2025-03-14 07:57:52.059] [main] [INFO] BlockManager.printBlockDeletionTime - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
[2025-03-14 07:57:52.059] [main] [INFO] BlockManager.printBlockDeletionTime - The block deletion will start around 2025 Mar 14 07:57:52
[2025-03-14 07:57:52.059] [main] [INFO] GSet.computeCapacity - Computing capacity for map BlocksMap
[2025-03-14 07:57:52.060] [main] [INFO] GSet.computeCapacity - VM type = 64-bit
[2025-03-14 07:57:52.060] [main] [INFO] GSet.computeCapacity - 2.0% max memory 4.4 GB = 91.0 MB
[2025-03-14 07:57:52.060] [main] [INFO] GSet.computeCapacity - capacity = 2^24 = 16777216 entries
[2025-03-14 07:57:52.062] [main] [INFO] BlockManager.createBlockTokenSecretManager - dfs.block.access.token.enable=false
[2025-03-14 07:57:52.062] [main] [INFO] BlockManager.<init> - defaultReplication = 1
[2025-03-14 07:57:52.063] [main] [INFO] BlockManager.<init> - maxReplication = 512
[2025-03-14 07:57:52.063] [main] [INFO] BlockManager.<init> - minReplication = 1
[2025-03-14 07:57:52.063] [main] [INFO] BlockManager.<init> - maxReplicationStreams = 2
[2025-03-14 07:57:52.063] [main] [INFO] BlockManager.<init> - replicationRecheckInterval = 3000
[2025-03-14 07:57:52.063] [main] [INFO] BlockManager.<init> - encryptDataTransfer = false
[2025-03-14 07:57:52.063] [main] [INFO] BlockManager.<init> - maxNumBlocksToLog = 1000
[2025-03-14 07:57:52.063] [main] [INFO] FSNamesystem.<init> - fsOwner = runner (auth:SIMPLE)
[2025-03-14 07:57:52.063] [main] [INFO] FSNamesystem.<init> - supergroup = supergroup
[2025-03-14 07:57:52.064] [main] [INFO] FSNamesystem.<init> - isPermissionEnabled = true
[2025-03-14 07:57:52.064] [main] [INFO] FSNamesystem.<init> - HA Enabled: false
[2025-03-14 07:57:52.064] [main] [INFO] FSNamesystem.<init> - Append Enabled: true
[2025-03-14 07:57:52.064] [main] [INFO] GSet.computeCapacity - Computing capacity for map INodeMap
[2025-03-14 07:57:52.064] [main] [INFO] GSet.computeCapacity - VM type = 64-bit
[2025-03-14 07:57:52.064] [main] [INFO] GSet.computeCapacity - 1.0% max memory 4.4 GB = 45.5 MB
[2025-03-14 07:57:52.065] [main] [INFO] GSet.computeCapacity - capacity = 2^23 = 8388608 entries
[2025-03-14 07:57:52.066] [main] [INFO] FSDirectory.<init> - ACLs enabled? false
[2025-03-14 07:57:52.066] [main] [INFO] FSDirectory.<init> - XAttrs enabled? true
[2025-03-14 07:57:52.066] [main] [INFO] NameNode.<init> - Caching file names occurring more than 10 times
[2025-03-14 07:57:52.066] [main] [INFO] GSet.computeCapacity - Computing capacity for map cachedBlocks
[2025-03-14 07:57:52.066] [main] [INFO] GSet.computeCapacity - VM type = 64-bit
[2025-03-14 07:57:52.067] [main] [INFO] GSet.computeCapacity - 0.25% max memory 4.4 GB = 11.4 MB
[2025-03-14 07:57:52.067] [main] [INFO] GSet.computeCapacity - capacity = 2^21 = 2097152 entries
[2025-03-14 07:57:52.067] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
[2025-03-14 07:57:52.067] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.min.datanodes = 0
[2025-03-14 07:57:52.068] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.extension = 0
[2025-03-14 07:57:52.068] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.window.num.buckets = 10
[2025-03-14 07:57:52.068] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.num.users = 10
[2025-03-14 07:57:52.068] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
[2025-03-14 07:57:52.068] [main] [INFO] FSNamesystem.initRetryCache - Retry cache on namenode is enabled
[2025-03-14 07:57:52.068] [main] [INFO] FSNamesystem.initRetryCache - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
[2025-03-14 07:57:52.068] [main] [INFO] GSet.computeCapacity - Computing capacity for map NameNodeRetryCache
[2025-03-14 07:57:52.068] [main] [INFO] GSet.computeCapacity - VM type = 64-bit
[2025-03-14 07:57:52.069] [main] [INFO] GSet.computeCapacity - 0.029999999329447746% max memory 4.4 GB = 1.4 MB
[2025-03-14 07:57:52.069] [main] [INFO] GSet.computeCapacity - capacity = 2^17 = 131072 entries
[2025-03-14 07:57:52.070] [main] [INFO] Storage.tryLock - Lock on /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit8840761810259172772/name1/in_use.lock acquired by nodename 28235@action-host
[2025-03-14 07:57:52.071] [main] [INFO] Storage.tryLock - Lock on /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit8840761810259172772/name2/in_use.lock acquired by nodename 28235@action-host
[2025-03-14 07:57:52.072] [main] [INFO] FileJournalManager.recoverUnfinalizedSegments - Recovering unfinalized segments in /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit8840761810259172772/name1/current
[2025-03-14 07:57:52.072] [main] [INFO] FileJournalManager.recoverUnfinalizedSegments - Recovering unfinalized segments in /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit8840761810259172772/name2/current
[2025-03-14 07:57:52.072] [main] [INFO] FSImage.loadFSImage - No edit log streams selected.
[2025-03-14 07:57:52.073] [main] [INFO] FSImage.loadFSImageFile - Planning to load image: FSImageFile(file=/home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit8840761810259172772/name1/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
[2025-03-14 07:57:52.073] [main] [INFO] FSImageFormatPBINode.loadINodeSection - Loading 1 INodes.
[2025-03-14 07:57:52.074] [main] [INFO] FSImageFormatProtobuf.load - Loaded FSImage in 0 seconds.
[2025-03-14 07:57:52.074] [main] [INFO] FSImage.loadFSImage - Loaded image for txid 0 from /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit8840761810259172772/name1/current/fsimage_0000000000000000000
[2025-03-14 07:57:52.074] [main] [INFO] FSNamesystem.loadFSImage - Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
[2025-03-14 07:57:52.075] [main] [INFO] FSEditLog.startLogSegment - Starting log segment at 1
[2025-03-14 07:57:52.082] [main] [INFO] NameCache.initialized - initialized with 0 entries 0 lookups
[2025-03-14 07:57:52.082] [main] [INFO] FSNamesystem.loadFromDisk - Finished loading FSImage in 13 msecs
[2025-03-14 07:57:52.082] [main] [INFO] NameNode.<init> - RPC server is binding to localhost:0
[2025-03-14 07:57:52.083] [main] [INFO] CallQueueManager.<init> - Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
[2025-03-14 07:57:52.083] [Socket Reader #1 for port 46173] [INFO] Server.run - Starting Socket Reader #1 for port 46173
[2025-03-14 07:57:52.086] [main] [INFO] NameNode.initialize - Clients are to use localhost:46173 to access this namenode/service.
[2025-03-14 07:57:52.086] [main] [INFO] FSNamesystem.registerMBean - Registered FSNamesystemState MBean
[2025-03-14 07:57:52.100] [main] [INFO] LeaseManager.getNumUnderConstructionBlocks - Number of blocks under construction: 0
[2025-03-14 07:57:52.100] [main] [INFO] BlockManager.initializeReplQueues - initializing replication queues
[2025-03-14 07:57:52.100] [main] [INFO] StateChange.leave - STATE* Leaving safe mode after 0 secs
[2025-03-14 07:57:52.100] [main] [INFO] StateChange.leave - STATE* Network topology has 0 racks and 0 datanodes
[2025-03-14 07:57:52.100] [main] [INFO] StateChange.leave - STATE* UnderReplicatedBlocks has 0 blocks
[2025-03-14 07:57:52.111] [Replication Queue Initializer] [INFO] BlockManager.processMisReplicatesAsync - Total number of blocks = 0
[2025-03-14 07:57:52.111] [Replication Queue Initializer] [INFO] BlockManager.processMisReplicatesAsync - Number of invalid blocks = 0
[2025-03-14 07:57:52.111] [Replication Queue Initializer] [INFO] BlockManager.processMisReplicatesAsync - Number of under-replicated blocks = 0
[2025-03-14 07:57:52.111] [Replication Queue Initializer] [INFO] BlockManager.processMisReplicatesAsync - Number of over-replicated blocks = 0
[2025-03-14 07:57:52.111] [Replication Queue Initializer] [INFO] BlockManager.processMisReplicatesAsync - Number of blocks being written = 0
[2025-03-14 07:57:52.111] [Replication Queue Initializer] [INFO] StateChange.processMisReplicatesAsync - STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 10 msec
[2025-03-14 07:57:52.113] [IPC Server Responder] [INFO] Server.run - IPC Server Responder: starting
[2025-03-14 07:57:52.118] [IPC Server listener on 46173] [INFO] Server.run - IPC Server listener on 46173: starting
[2025-03-14 07:57:52.120] [main] [INFO] NameNode.startCommonServices - NameNode RPC up at: localhost/127.0.0.1:46173
[2025-03-14 07:57:52.120] [main] [WARN] MetricsLoggerTask.makeMetricsLoggerAsync - Metrics logging will not be async since the logger is not log4j
[2025-03-14 07:57:52.120] [main] [INFO] FSNamesystem.startActiveServices - Starting services required for active state
[2025-03-14 07:57:52.120] [main] [INFO] FSDirectory.updateCountForQuota - Initializing quota with 4 thread(s)
[2025-03-14 07:57:52.126] [main] [INFO] FSDirectory.updateCountForQuota - Quota initialization completed in 5 milliseconds
name space=1
storage space=0
storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0
[2025-03-14 07:57:52.130] [main] [INFO] MiniDFSCluster.startDataNodes - Starting DataNode 0 with dfs.datanode.data.dir: [DISK]file:/home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit8840761810259172772/data/data1,[DISK]file:/home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit8840761810259172772/data/data2
[2025-03-14 07:57:52.131] [CacheReplicationMonitor(773888732)] [INFO] CacheReplicationMonitor.run - Starting CacheReplicationMonitor with interval 30000 milliseconds
[2025-03-14 07:57:52.151] [main] [INFO] MetricsSystemImpl.init - DataNode metrics system started (again)
[2025-03-14 07:57:52.152] [main] [INFO] BlockScanner.<init> - Initialized block scanner with targetBytesPerSec 1048576
[2025-03-14 07:57:52.152] [main] [INFO] DataNode.<init> - Configured hostname is 127.0.0.1
[2025-03-14 07:57:52.152] [main] [INFO] DataNode.startDataNode - Starting DataNode with maxLockedMemory = 0
[2025-03-14 07:57:52.152] [main] [INFO] DataNode.initDataXceiver - Opened streaming server at /127.0.0.1:38429
[2025-03-14 07:57:52.152] [main] [INFO] DataNode.<init> - Balancing bandwith is 10485760 bytes/s
[2025-03-14 07:57:52.153] [main] [INFO] DataNode.<init> - Number threads for balancing is 50
[2025-03-14 07:57:52.155] [main] [INFO] AuthenticationFilter.constructSecretProvider - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
[2025-03-14 07:57:52.155] [main] [WARN] HttpRequestLog.getRequestLog - Jetty request log can only be enabled using Log4j
[2025-03-14 07:57:52.155] [main] [INFO] HttpServer2.addGlobalFilter - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
[2025-03-14 07:57:52.156] [main] [INFO] HttpServer2.addFilter - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
[2025-03-14 07:57:52.156] [main] [INFO] HttpServer2.addFilter - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
[2025-03-14 07:57:52.156] [main] [INFO] HttpServer2.openListeners - Jetty bound to port 43817
[2025-03-14 07:57:52.156] [main] [INFO] log.info - jetty-6.1.26
[2025-03-14 07:57:52.160] [main] [INFO] log.info - Extract jar:file:/home/runner/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.8.5/hadoop-hdfs-2.8.5-tests.jar!/webapps/datanode to /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/Jetty_localhost_43817_datanode____2vlfqe/webapp
[2025-03-14 07:57:52.224] [main] [INFO] log.info - Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43817
[2025-03-14 07:57:52.225] [main] [INFO] DatanodeHttpServer.start - Listening HTTP traffic on /127.0.0.1:33193
[2025-03-14 07:57:52.225] [main] [INFO] DataNode.startDataNode - dnUserName = runner
[2025-03-14 07:57:52.225] [org.apache.hadoop.util.JvmPauseMonitor$Monitor@1aa01988] [INFO] JvmPauseMonitor.run - Starting JVM pause monitor
[2025-03-14 07:57:52.226] [main] [INFO] DataNode.startDataNode - supergroup = supergroup
[2025-03-14 07:57:52.226] [main] [INFO] CallQueueManager.<init> - Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
[2025-03-14 07:57:52.226] [Socket Reader #1 for port 34717] [INFO] Server.run - Starting Socket Reader #1 for port 34717
[2025-03-14 07:57:52.234] [main] [INFO] DataNode.initIpcServer - Opened IPC server at /127.0.0.1:34717
[2025-03-14 07:57:52.237] [main] [INFO] DataNode.refreshNamenodes - Refresh request received for nameservices: null
[2025-03-14 07:57:52.237] [main] [INFO] DataNode.doRefreshNamenodes - Starting BPOfferServices for nameservices: <default>
[2025-03-14 07:57:52.237] [Thread-981] [INFO] DataNode.run - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:46173 starting to offer service
[2025-03-14 07:57:52.237] [main] [WARN] MetricsLoggerTask.makeMetricsLoggerAsync - Metrics logging will not be async since the logger is not log4j
[2025-03-14 07:57:52.239] [IPC Server Responder] [INFO] Server.run - IPC Server Responder: starting
[2025-03-14 07:57:52.239] [IPC Server listener on 34717] [INFO] Server.run - IPC Server listener on 34717: starting
[2025-03-14 07:57:52.252] [Thread-981] [INFO] DataNode.verifyAndSetNamespaceInfo - Acknowledging ACTIVE Namenode during handshake. Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:46173
[2025-03-14 07:57:52.253] [Thread-981] [INFO] Storage.getParallelVolumeLoadThreadsNum - Using 2 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=2, dataDirs=2)
[2025-03-14 07:57:52.255] [Thread-981] [INFO] Storage.tryLock - Lock on /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit8840761810259172772/data/data1/in_use.lock acquired by nodename 28235@action-host
[2025-03-14 07:57:52.255] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[668064145], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:52.255] [Thread-981] [INFO] Storage.loadStorageDirectory - Storage directory /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit8840761810259172772/data/data1 is not formatted for namespace 2066936868. Formatting...
[2025-03-14 07:57:52.255] [Thread-981] [INFO] Storage.createStorageID - Generated new storageID DS-b4155312-6ad0-49a8-8bc3-b4104d180873 for directory /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit8840761810259172772/data/data1
[2025-03-14 07:57:52.257] [main] [INFO] MiniDFSCluster.shouldWait - dnInfo.length != numDataNodes
[2025-03-14 07:57:52.257] [main] [INFO] MiniDFSCluster.waitActive - Waiting for cluster to become active
[2025-03-14 07:57:52.257] [Thread-981] [INFO] Storage.tryLock - Lock on /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit8840761810259172772/data/data2/in_u…moryStore.logInfo - Block broadcast_1 stored as values in memory (estimated size 1488.0 B, free 2.5 GiB)
[2025-03-14 07:57:56.953] [main] [INFO] MemoryStore.logInfo - Block broadcast_1_piece0 stored as bytes in memory (estimated size 252.0 B, free 2.5 GiB)
[2025-03-14 07:57:56.953] [dispatcher-BlockManagerMaster] [INFO] BlockManagerInfo.logInfo - Added broadcast_1_piece0 in memory on fv-az2033-434.wajbf2yebbuenirf1pz52wx3db.cx.internal.cloudapp.net:37829 (size: 252.0 B, free: 2.5 GiB)
[2025-03-14 07:57:56.954] [main] [INFO] SparkContext.logInfo - Created broadcast 1 from broadcast at RssSparkShuffleUtils.java:290
[2025-03-14 07:57:56.954] [main] [INFO] RssShuffleManager.registerShuffle - RegisterShuffle with ShuffleId[0], partitionNum[4], shuffleServerForResult: {0=[ShuffleServerInfo{host[10.1.0.36], grpc port[20035]}], 1=[ShuffleServerInfo{host[10.1.0.36], grpc port[20034]}], 2=[ShuffleServerInfo{host[10.1.0.36], grpc port[20035]}], 3=[ShuffleServerInfo{host[10.1.0.36], grpc port[20034]}]}
[2025-03-14 07:57:56.954] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - Registering RDD 3 (javaRDD at SparkSQLTest.java:53) as input to shuffle 0
[2025-03-14 07:57:56.955] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - Got map stage job 0 (javaRDD at SparkSQLTest.java:53) with 1 output partitions
[2025-03-14 07:57:56.955] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - Final stage: ShuffleMapStage 0 (javaRDD at SparkSQLTest.java:53)
[2025-03-14 07:57:56.955] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - Parents of final stage: List()
[2025-03-14 07:57:56.955] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - Missing parents: List()
[2025-03-14 07:57:56.955] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - Submitting ShuffleMapStage 0 (MapPartitionsRDD[3] at javaRDD at SparkSQLTest.java:53), which has no missing parents
[2025-03-14 07:57:56.959] [dag-scheduler-event-loop] [INFO] MemoryStore.logInfo - Block broadcast_2 stored as values in memory (estimated size 35.0 KiB, free 2.5 GiB)
[2025-03-14 07:57:56.960] [dag-scheduler-event-loop] [INFO] MemoryStore.logInfo - Block broadcast_2_piece0 stored as bytes in memory (estimated size 16.9 KiB, free 2.5 GiB)
[2025-03-14 07:57:56.960] [dispatcher-BlockManagerMaster] [INFO] BlockManagerInfo.logInfo - Added broadcast_2_piece0 in memory on fv-az2033-434.wajbf2yebbuenirf1pz52wx3db.cx.internal.cloudapp.net:37829 (size: 16.9 KiB, free: 2.5 GiB)
[2025-03-14 07:57:56.960] [dag-scheduler-event-loop] [INFO] SparkContext.logInfo - Created broadcast 2 from broadcast at DAGScheduler.scala:1478
[2025-03-14 07:57:56.960] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - Submitting 1 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[3] at javaRDD at SparkSQLTest.java:53) (first 15 tasks are for partitions Vector(0))
[2025-03-14 07:57:56.961] [dag-scheduler-event-loop] [INFO] TaskSchedulerImpl.logInfo - Adding task set 0.0 with 1 tasks resource profile 0
[2025-03-14 07:57:56.962] [dispatcher-event-loop-2] [INFO] TaskSetManager.logInfo - Starting task 0.0 in stage 0.0 (TID 0) (fv-az2033-434.wajbf2yebbuenirf1pz52wx3db.cx.internal.cloudapp.net, executor driver, partition 0, PROCESS_LOCAL, 4923 bytes) taskResourceAssignments Map()
[2025-03-14 07:57:56.962] [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)] [INFO] Executor.logInfo - Running task 0.0 in stage 0.0 (TID 0)
[2025-03-14 07:57:56.974] [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)] [INFO] RssShuffleWriter.<init> - RssShuffle start write taskAttemptId[0] data with RssHandle[appId local-1741939076872_1741939076839, shuffleId 0].
[2025-03-14 07:57:56.977] [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)] [INFO] FileScanRDD.logInfo - Reading File path: file:///home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit8116669796368851734/test.csv, range: 0-8524, partition values: [empty row]
[2025-03-14 07:57:56.984] [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)] [INFO] WriteBufferManager.clear - Flush total buffer for shuffleId[0] with allocated[16777216], dataSize[8480], memoryUsed[1048576], number of blocks[4], flush ratio[1.0]
[2025-03-14 07:57:56.995] [Grpc-3] [WARN] ShuffleTaskManager.requireBuffer - Failed to require buffer, require size: 2184
[2025-03-14 07:57:56.995] [Grpc-4] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=requireBuffer statusCode=SUCCESS from=/10.1.0.36:38092 executionTimeUs=1477 appId=local-1741939076872_1741939076839 shuffleId=0 args{requireSize=1914, partitionIdsSize=2, partitionIds=0, 2} return{requireBufferId=1}
[2025-03-14 07:57:56.995] [Grpc-3] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=requireBuffer statusCode=NO_BUFFER from=/10.1.0.36:34002 executionTimeUs=979 appId=local-1741939076872_1741939076839 shuffleId=0 args{requireSize=2184, partitionIdsSize=2, partitionIds=1, 3} return{requireBufferId=-1}
[2025-03-14 07:57:56.996] [client-data-transfer-1] [WARN] ShuffleServerGrpcClient.requirePreAllocation - ShuffleServer 10.1.0.36:20034 is full and can't send shuffle data successfully due to NO_BUFFER after retry 0 times, cost: 9(ms)
[2025-03-14 07:57:56.996] [client-data-transfer-1] [INFO] RetryUtils.retryWithCondition - Retry due to: org.apache.uniffle.common.exception.RssException. Use DEBUG level to see the full stack: requirePreAllocation failed! size[2184], host[10.1.0.36], port[20034]
[2025-03-14 07:57:56.996] [client-data-transfer-1] [INFO] RetryUtils.retryWithCondition - Will retry 2 more time(s) after waiting 1000 milliseconds.
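The lines above show the client retrying `requirePreAllocation` after a `NO_BUFFER` response, with a fixed wait between attempts. A minimal sketch of a retry-with-condition helper in the spirit of `RetryUtils.retryWithCondition`; the names, signature, and wait policy here are assumptions modeled on the log lines, not the actual Uniffle API:

```java
import java.util.concurrent.Callable;
import java.util.function.Predicate;

// Hypothetical retry helper: retries an operation while a predicate says
// the failure is retryable, up to a fixed number of attempts.
public class RetrySketch {
  public static <T> T retryWithCondition(
      Callable<T> op,
      Predicate<Throwable> retryable,
      int maxRetries,
      long waitMs) throws Exception {
    for (int attempt = 0; ; attempt++) {
      try {
        return op.call();
      } catch (Exception e) {
        if (attempt >= maxRetries || !retryable.test(e)) {
          throw e; // exhausted retries, or a non-retryable error
        }
        System.out.println("Will retry " + (maxRetries - attempt)
            + " more time(s) after waiting " + waitMs + " milliseconds.");
        Thread.sleep(waitMs);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    int[] calls = {0};
    // Simulated requirePreAllocation: fails twice with a retryable
    // error (standing in for NO_BUFFER), then succeeds.
    String result = retryWithCondition(
        () -> {
          if (++calls[0] < 3) throw new IllegalStateException("NO_BUFFER");
          return "SUCCESS";
        },
        e -> e instanceof IllegalStateException,
        2, 10L);
    System.out.println(result + " after " + calls[0] + " calls");
  }
}
```

With `maxRetries = 2` the operation is attempted at most three times, which matches the "Will retry 2 more time(s)" countdown visible in the log.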
[2025-03-14 07:57:56.997] [Grpc-6] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=sendShuffleData statusCode=SUCCESS from=/10.1.0.36:38092 executionTimeUs=403 appId=local-1741939076872_1741939076839 shuffleId=0 args{requireBufferId=1, timestamp=1741939076995, stageAttemptNumber=0, shuffleDataSize=2}
[2025-03-14 07:57:57.154] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[671088740], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:57.206] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[671088740], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:57.256] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[668064145], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:57.329] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[669076427], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:57.329] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[670090685], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:57.342] [DynamicClientConfService-0] [WARN] DynamicClientConfService.refreshClientConf - Error when updating client conf with hdfs://localhost:42871/test/client_conf.
java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:466)
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1628)
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1614)
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:900)
at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:114)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:964)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:961)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:961)
at org.apache.uniffle.coordinator.conf.DynamicClientConfService.refreshClientConf(DynamicClientConfService.java:115)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
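The `java.io.IOException: Filesystem closed` above is the classic symptom of sharing Hadoop's cached `FileSystem` handle: `FileSystem.get(...)` returns a JVM-wide cached instance, so once any component closes it, every other holder fails at `DFSClient.checkOpen`. The toy model below (plain Java, no Hadoop dependency; all class names are hypothetical) illustrates the hazard and the usual escape hatch, `FileSystem.newInstance(...)`, which bypasses the cache; it is a sketch of the failure mode, not the Uniffle fix:

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Toy model of Hadoop's FileSystem cache: get() hands every caller the
// same cached instance, so close() by one caller breaks all of them.
public class FsCacheSketch {
  static class Fs implements Closeable {
    private boolean open = true;
    void listPaths() throws IOException {
      if (!open) throw new IOException("Filesystem closed"); // like DFSClient.checkOpen
    }
    public void close() { open = false; }
  }

  private static final Map<String, Fs> CACHE = new HashMap<>();

  static Fs get(String uri) {          // like FileSystem.get(): cached, shared
    return CACHE.computeIfAbsent(uri, u -> new Fs());
  }

  static Fs newInstance(String uri) {  // like FileSystem.newInstance(): private handle
    return new Fs();
  }

  public static void main(String[] args) {
    Fs a = get("hdfs://localhost:42871");
    Fs b = get("hdfs://localhost:42871"); // same object as a
    a.close();                            // one component shuts down...
    try {
      b.listPaths();                      // ...and every other holder fails
    } catch (IOException e) {
      System.out.println("shared handle: " + e.getMessage());
    }
    Fs own = newInstance("hdfs://localhost:42871");
    try {
      own.listPaths();                    // unaffected by the shared close
      System.out.println("private handle: still open");
    } catch (IOException e) {
      System.out.println("unexpected: " + e.getMessage());
    }
  }
}
```

In a test that tears down a MiniDFSCluster while a background refresher is still polling, this shared-handle pattern produces exactly the periodic WARN seen above.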
[2025-03-14 07:57:57.377] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[668088640], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:57.388] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[669088640], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:57.412] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[670090782], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:57.654] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[671088740], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:57.706] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[671088740], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:57.757] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[668064145], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:57.829] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[669076427], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:57.829] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[670090685], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:57.877] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[668088640], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:57.889] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[669088640], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:57.912] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[670090782], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:57.953] [Grpc-8] [INFO] ShuffleServerGrpcService.appHeartbeat - Get heartbeat from local-1741939076872_1741939076839
[2025-03-14 07:57:57.953] [Grpc-6] [INFO] ShuffleServerGrpcService.appHeartbeat - Get heartbeat from local-1741939076872_1741939076839
[2025-03-14 07:57:57.954] [client-heartbeat-1] [INFO] CoordinatorGrpcRetryableClient.lambda$scheduleAtFixedRateToSendAppHeartBeat$0 - Successfully send heartbeat to Coordinator grpc client ref to 10.1.0.36:19999
[2025-03-14 07:57:57.954] [rss-heartbeat-0] [INFO] RssShuffleManagerBase.lambda$startHeartbeat$11 - Finish send heartbeat to coordinator and servers
[2025-03-14 07:57:57.997] [Grpc-9] [WARN] ShuffleTaskManager.requireBuffer - Failed to require buffer, require size: 2184
[2025-03-14 07:57:57.997] [Grpc-9] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=requireBuffer statusCode=NO_BUFFER from=/10.1.0.36:34002 executionTimeUs=233 appId=local-1741939076872_1741939076839 shuffleId=0 args{requireSize=2184, partitionIdsSize=2, partitionIds=1, 3} return{requireBufferId=-1}
[2025-03-14 07:57:57.997] [client-data-transfer-1] [WARN] ShuffleServerGrpcClient.requirePreAllocation - ShuffleServer 10.1.0.36:20034 is full and can't send shuffle data successfully due to NO_BUFFER after retry 0 times, cost: 1(ms)
[2025-03-14 07:57:57.997] [client-data-transfer-1] [INFO] RetryUtils.retryWithCondition - Retry due to: org.apache.uniffle.common.exception.RssException. Use DEBUG level to see the full stack: requirePreAllocation failed! size[2184], host[10.1.0.36], port[20034]
[2025-03-14 07:57:57.997] [client-data-transfer-1] [INFO] RetryUtils.retryWithCondition - Will retry 1 more time(s) after waiting 1000 milliseconds.
[2025-03-14 07:57:58.155] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[671088740], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:58.206] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[671088740], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:58.257] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[668064145], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:58.329] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[670090685], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:58.329] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[669076427], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:58.342] [DynamicClientConfService-0] [WARN] DynamicClientConfService.refreshClientConf - Error when updating client conf with hdfs://localhost:42871/test/client_conf.
java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:466)
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1628)
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1614)
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:900)
at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:114)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:964)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:961)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:961)
at org.apache.uniffle.coordinator.conf.DynamicClientConfService.refreshClientConf(DynamicClientConfService.java:115)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
[2025-03-14 07:57:58.377] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[668088640], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:58.389] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[669088640], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:58.412] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[670090782], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:58.655] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[671088740], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:58.707] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[671088740], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:58.757] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[668064145], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:58.830] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[670090685], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:58.830] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[669076427], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:58.877] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[668088640], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:58.889] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[669088640], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:58.913] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[670090782], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:57:58.998] [Grpc-2] [WARN] ShuffleTaskManager.requireBuffer - Failed to require buffer, require size: 2184
[2025-03-14 07:57:58.998] [Grpc-2] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=requireBuffer statusCode=NO_BUFFER from=/10.1.0.36:34002 executionTimeUs=228 appId=local-1741939076872_1741939076839 shuffleId=0 args{requireSize=2184, partitionIdsSize=2, partitionIds=1, 3} return{requireBufferId=-1}
[2025-03-14 07:57:58.998] [client-data-transfer-1] [WARN] ShuffleServerGrpcClient.requirePreAllocation - ShuffleServer 10.1.0.36:20034 is full and can't send shuffle data successfully due to NO_BUFFER after retry 0 times, cost: 1(ms)
[2025-03-14 07:57:58.998] [client-data-transfer-1] [WARN] ShuffleServerGrpcClient.sendShuffleData - Failed to send shuffle data due to
org.apache.uniffle.common.exception.RssException: requirePreAllocation failed! size[2184], host[10.1.0.36], port[20034]
at org.apache.uniffle.client.impl.grpc.ShuffleServerGrpcClient.lambda$sendShuffleData$0(ShuffleServerGrpcClient.java:599)
at org.apache.uniffle.common.util.RetryUtils.retryWithCondition(RetryUtils.java:81)
at org.apache.uniffle.client.impl.grpc.ShuffleServerGrpcClient.sendShuffleData(ShuffleServerGrpcClient.java:583)
at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.lambda$sendShuffleDataAsync$1(ShuffleWriteClientImpl.java:206)
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
[2025-03-14 07:57:58.999] [client-data-transfer-1] [WARN] ShuffleWriteClientImpl.lambda$sendShuffleDataAsync$1 - ShuffleWriteClientImpl sendShuffleData with 2 blocks to 10.1.0.36-20034 cost: 2013(ms), it failed with statusCode[NO_BUFFER]
[2025-03-14 07:57:58.999] [org.apache.spark.shuffle.writer.DataPusher-0] [ERROR] ShuffleWriteClientImpl.sendShuffleDataAsync - Some shuffle data can't be sent to shuffle-server, is fast fail: true, cancelled task size: 2
[2025-03-14 07:57:59.000] [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)] [INFO] RssShuffleWriter.doReassignOnBlockSendFailure - Initiate reassignOnBlockSendFailure. failure partition servers: {1=[ReceivingFailureServer{serverId='10.1.0.36-20034', statusCode=NO_BUFFER}], 3=[ReceivingFailureServer{serverId='10.1.0.36-20034', statusCode=NO_BUFFER}]}
[2025-03-14 07:57:59.001] [Grpc-3] [INFO] ShuffleManagerGrpcService.reassignOnBlockSendFailure - Accepted reassign request on block sent failure for shuffleId: 0, stageId: 0, stageAttemptNumber: 0 from taskAttemptId: 0 on executorId: driver while partition split:false
[2025-03-14 07:57:59.002] [Grpc-2] [INFO] CoordinatorGrpcService.getShuffleAssignments - Request of getShuffleAssignments for appId[local-1741939076872_1741939076839], shuffleId[0], partitionNum[1], partitionNumPerRange[1], replica[1], requiredTags[[ss_v5, GRPC]], requiredShuffleServerNumber[1], faultyServerIds[1], stageId[0], stageAttemptNumber[0], isReassign[true]
[2025-03-14 07:57:59.002] [Grpc-2] [INFO] CoordinatorGrpcService.logAssignmentResult - Shuffle Servers of assignment for appId[local-1741939076872_1741939076839], shuffleId[0] are [10.1.0.36-20035]
[2025-03-14 07:57:59.002] [Grpc-2] [INFO] COORDINATOR_RPC_AUDIT_LOG.close - cmd=getShuffleAssignments statusCode=SUCCESS from=/10.1.0.36:44194 executionTimeUs=819 appId=local-1741939076872_1741939076839 args{shuffleId=0, partitionNum=1, partitionNumPerRange=1, replica=1, requiredTags=[ss_v5, GRPC], requiredShuffleServerNumber=1, faultyServerIds=[10.1.0.36-20034], stageId=0, stageAttemptNumber=0, isReassign=true}
[2025-03-14 07:57:59.003] [Grpc-3] [INFO] CoordinatorGrpcRetryableClient.lambda$getShuffleAssignments$4 - Success to get shuffle server assignment from Coordinator grpc client ref to 10.1.0.36:19999
[2025-03-14 07:57:59.003] [Grpc-3] [INFO] RssShuffleManagerBase.requestShuffleAssignment - Finished reassign
[2025-03-14 07:57:59.003] [Grpc-3] [INFO] RssShuffleManagerBase.registerShuffleServers - Start to register shuffleId[0]
[2025-03-14 07:57:59.003] [Grpc-1] [INFO] ShuffleServerGrpcService.registerShuffle - Get register request for appId[local-1741939076872_1741939076839], shuffleId[0], remoteStorage[] with 1 partition ranges. User: runner
[2025-03-14 07:57:59.003] [Grpc-1] [INFO] ShuffleTaskInfo.setProperties - local-1741939076872_1741939076839 set properties to {spark.rss.client.blockId.partitionIdBits=20, spark.rss.client.blockId.sequenceNoBits=21, spark.rss.client.reassign.enabled=true, spark.rss.writer.serializer.buffer.size=128k, spark.rss.blockId.maxPartitions=1048576, spark.rss.client.type=GRPC, spark.rss.client.send.check.interval.ms=1000, spark.rss.client.read.buffer.size=1m, spark.rss.client.assignment.shuffle.nodes.max=2, spark.rss.shuffle.manager.grpc.port=55435, spark.rss.writer.buffer.spill.size=32m, spark.rss.client.send.check.timeout.ms=30000, spark.rss.writer.buffer.size=4m, spark.rss.client.blockId.taskAttemptIdBits=22, spark.rss.test.mode.enable=true, spark.rss.client.retry.interval.max=1000, spark.rss.index.read.limit=100, spark.rss.storage.type=MEMORY_LOCALFILE, spark.rss.writer.buffer.segment.size=256k, spark.rss.client.retry.max=2, spark.rss.coordinator.quorum=10.1.0.36:19999, spark.rss.enabled=true, spark.rss.heartbeat.interval=2000}
[2025-03-14 07:57:59.004] [Grpc-1] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=registerShuffle statusCode=SUCCESS from=/10.1.0.36:38092 executionTimeUs=341 appId=local-1741939076872_1741939076839 shuffleId=0 args{remoteStoragePath=, user=runner, stageAttemptNumber=0}
[2025-03-14 07:57:59.004] [Grpc-3] [INFO] RssShuffleManagerBase.registerShuffleServers - Finish register shuffleId[0] with 1 ms
[2025-03-14 07:57:59.004] [Grpc-3] [INFO] RssShuffleManagerBase.reassignOnBlockSendFailure - Finished reassignOnBlockSendFailure request and cost 3(ms). Reassign result: {10.1.0.36-20034={1=[10.1.0.36-20035], 3=[10.1.0.36-20035]}}
[2025-03-14 07:57:59.004] [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)] [INFO] RssShuffleWriter.doReassignOnBlockSendFailure - Success to reassign. The latest available assignment is {0=[ShuffleServerInfo{host[10.1.0.36], grpc port[20035]}], 1=[ShuffleServerInfo{host[10.1.0.36], grpc port[20035]}], 2=[ShuffleServerInfo{host[10.1.0.36], grpc port[20035]}], 3=[ShuffleServerInfo{host[10.1.0.36], grpc port[20035]}]}
[2025-03-14 07:57:59.005] [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)] [INFO] RssShuffleWriter.reassignAndResendBlocks - Failed blocks have been resent to data pusher queue since reassignment has been finished successfully
[2025-03-14 07:57:59.005] [Grpc-4] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=requireBuffer statusCode=SUCCESS from=/10.1.0.36:38092 executionTimeUs=42 appId=local-1741939076872_1741939076839 shuffleId=0 args{requireSize=2184, partitionIdsSize=2, partitionIds=1, 3} return{requireBufferId=2}
[2025-03-14 07:57:59.006] [Grpc-6] [ERROR] ShuffleServerGrpcService.sendShuffleData - Error happened when shuffleEngine.write for appId[local-1741939076872_1741939076839], shuffleId[0], partitionId[3], statusCode=NO_REGISTER
[2025-03-14 07:57:59.006] [Grpc-6] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=sendShuffleData statusCode=NO_REGISTER from=/10.1.0.36:38092 executionTimeUs=149 appId=local-1741939076872_1741939076839 shuffleId=0 args{requireBufferId=2, timestamp=1741939079005, stageAttemptNumber=0, shuffleDataSize=2}
[2025-03-14 07:57:59.006] [client-data-transfer-2] [WARN] ShuffleServerGrpcClient.sendShuffleData - Failed to send shuffle data due to
org.apache.uniffle.common.exception.NotRetryException: Can't send shuffle data with 2 blocks to 10.1.0.36:20035, statusCode=NO_REGISTER, errorMsg:Error happened when shuffleEngine.write for appId[local-1741939076872_1741939076839], shuffleId[0], partitionId[3], statusCode=NO_REGISTER
at org.apache.uniffle.client.impl.grpc.ShuffleServerGrpcClient.lambda$sendShuffleData$0(ShuffleServerGrpcClient.java:640)
at org.apache.uniffle.common.util.RetryUtils.retryWithCondition(RetryUtils.java:81)
at org.apache.uniffle.client.impl.grpc.ShuffleServerGrpcClient.sendShuffleData(ShuffleServerGrpcClient.java:583)
at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.lambda$sendShuffleDataAsync$1(ShuffleWriteClientImpl.java:206)
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
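The NO_REGISTER failure above follows directly from the reassignment sequence: partitions 1 and 3 were moved to server 20035, but the register request logged earlier carried only "1 partition ranges", so a write to a partition outside that range finds no buffer on the server. The toy server below (all names hypothetical, not the Uniffle implementation) reproduces that shape, a plausible reading of the log rather than a confirmed root cause:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model: a shuffle server only accepts writes for partitions that
// fall inside a registered partition range; anything else is NO_REGISTER.
public class ReassignSketch {
  static class ToyShuffleServer {
    // registered partition ranges per shuffleId, as [start, end] pairs
    private final Map<Integer, List<int[]>> ranges = new HashMap<>();

    void registerShuffle(int shuffleId, int start, int end) {
      ranges.computeIfAbsent(shuffleId, k -> new ArrayList<>())
            .add(new int[] {start, end});
    }

    String sendShuffleData(int shuffleId, int partitionId) {
      for (int[] r : ranges.getOrDefault(shuffleId, List.of())) {
        if (partitionId >= r[0] && partitionId <= r[1]) return "SUCCESS";
      }
      return "NO_REGISTER"; // no buffer registered for this partition
    }
  }

  public static void main(String[] args) {
    ToyShuffleServer newServer = new ToyShuffleServer();
    // Reassignment moved partitions 1 and 3 to this server, but the
    // register request covered only a single partition range.
    newServer.registerShuffle(0, 1, 1);
    System.out.println("partition 1 -> " + newServer.sendShuffleData(0, 1));
    System.out.println("partition 3 -> " + newServer.sendShuffleData(0, 3));
  }
}
```

The second println mirrors the server-side ERROR for partitionId[3], and because NO_REGISTER surfaces as a `NotRetryException` on the client, the write fast-fails instead of retrying.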
[2025-03-14 07:57:59.006] [client-data-transfer-2] [WARN] ShuffleWriteClientImpl.lambda$sendShuffleDataAsync$1 - ShuffleWriteClientImpl sendShuffleData with 2 blocks to 10.1.0.36-20035 cost: 1(ms), it failed with statusCode[NO_REGISTER]
[2025-03-14 07:57:59.006] [org.apache.spark.shuffle.writer.DataPusher-1] [ERROR] ShuffleWriteClientImpl.sendShuffleDataAsync - Some shuffle data can't be sent to shuffle-server, is fast fail: true, cancelled task size: 1
[2025-03-14 07:57:59.007] [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)] [ERROR] RssShuffleWriter.collectFailedBlocksToResend - Partial blocks for taskId: [0_0] retry exceeding the max retry times: [1]. Fast fail! faulty server list: [ShuffleServerInfo{host[10.1.0.36], grpc port[20035]}]
[2025-03-14 07:57:59.008] [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)] [INFO] RssShuffleManagerBase.markFailedTask - Mark the task: 0_0 failed.
[2025-03-14 07:57:59.008] [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)] [ERROR] Executor.logError - Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.uniffle.common.exception.RssSendFailedException: Errors on resending the blocks data to the remote shuffle-server.
at org.apache.spark.shuffle.writer.RssShuffleWriter.collectFailedBlocksToResend(RssShuffleWriter.java:622)
at org.apache.spark.shuffle.writer.RssShuffleWriter.checkDataIfAnyFailure(RssShuffleWriter.java:524)
at org.apache.spark.shuffle.writer.RssShuffleWriter.checkBlockSendResult(RssShuffleWriter.java:486)
at org.apache.spark.shuffle.writer.RssShuffleWriter.writeImpl(RssShuffleWriter.java:341)
at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:292)
at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
[2025-03-14 07:57:59.009] [task-result-getter-0] [WARN] TaskSetManager.logWarning - Lost task 0.0 in stage 0.0 (TID 0) (fv-az2033-434.wajbf2yebbuenirf1pz52wx3db.cx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssSendFailedException: Errors on resending the blocks data to the remote shuffle-server.
at org.apache.spark.shuffle.writer.RssShuffleWriter.collectFailedBlocksToResend(RssShuffleWriter.java:622)
at org.apache.spark.shuffle.writer.RssShuffleWriter.checkDataIfAnyFailure(RssShuffleWriter.java:524)
at org.apache.spark.shuffle.writer.RssShuffleWriter.checkBlockSendResult(RssShuffleWriter.java:486)
at org.apache.spark.shuffle.writer.RssShuffleWriter.writeImpl(RssShuffleWriter.java:341)
at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:292)
at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
[2025-03-14 07:57:59.009] [task-result-getter-0] [ERROR] TaskSetManager.logError - Task 0 in stage 0.0 failed 1 times; aborting job
[2025-03-14 07:57:59.009] [task-result-getter-0] [INFO] TaskSchedulerImpl.logInfo - Removed TaskSet 0.0, whose tasks have all completed, from pool
[2025-03-14 07:57:59.010] [dag-scheduler-event-loop] [INFO] TaskSchedulerImpl.logInfo - Cancelling stage 0
[2025-03-14 07:57:59.010] [dag-scheduler-event-loop] [INFO] TaskSchedulerImpl.logInfo - Killing all running tasks in stage 0: Stage cancelled
[2025-03-14 07:57:59.010] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - ShuffleMapStage 0 (javaRDD at SparkSQLTest.java:53) failed in 2.055 s due to Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0) (fv-az2033-434.wajbf2yebbuenirf1pz52wx3db.cx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssSendFailedException: Errors on resending the blocks data to the remote shuffle-server.
at org.apache.spark.shuffle.writer.RssShuffleWriter.collectFailedBlocksToResend(RssShuffleWriter.java:622)
at org.apache.spark.shuffle.writer.RssShuffleWriter.checkDataIfAnyFailure(RssShuffleWriter.java:524)
at org.apache.spark.shuffle.writer.RssShuffleWriter.checkBlockSendResult(RssShuffleWriter.java:486)
at org.apache.spark.shuffle.writer.RssShuffleWriter.writeImpl(RssShuffleWriter.java:341)
at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:292)
at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Driver stacktrace:
Check failure on line 0 in org.apache.uniffle.test.PartitionBlockDataReassignMultiTimesTest
github-actions / Test Results
All 8 runs with error: resultCompareTest (org.apache.uniffle.test.PartitionBlockDataReassignMultiTimesTest)
artifacts/integration-reports-spark3.0/integration-test/spark3/target/surefire-reports/TEST-org.apache.uniffle.test.PartitionBlockDataReassignMultiTimesTest.xml [took 8s]
artifacts/integration-reports-spark3.2.0/integration-test/spark3/target/surefire-reports/TEST-org.apache.uniffle.test.PartitionBlockDataReassignMultiTimesTest.xml [took 6s]
artifacts/integration-reports-spark3.2/integration-test/spark3/target/surefire-reports/TEST-org.apache.uniffle.test.PartitionBlockDataReassignMultiTimesTest.xml [took 6s]
artifacts/integration-reports-spark3.3/integration-test/spark3/target/surefire-reports/TEST-org.apache.uniffle.test.PartitionBlockDataReassignMultiTimesTest.xml [took 6s]
artifacts/integration-reports-spark3.4/integration-test/spark3/target/surefire-reports/TEST-org.apache.uniffle.test.PartitionBlockDataReassignMultiTimesTest.xml [took 6s]
artifacts/integration-reports-spark3.5-scala2.13/integration-test/spark3/target/surefire-reports/TEST-org.apache.uniffle.test.PartitionBlockDataReassignMultiTimesTest.xml [took 6s]
artifacts/integration-reports-spark3.5/integration-test/spark3/target/surefire-reports/TEST-org.apache.uniffle.test.PartitionBlockDataReassignMultiTimesTest.xml [took 6s]
artifacts/integration-reports-spark3/integration-test/spark3/target/surefire-reports/TEST-org.apache.uniffle.test.PartitionBlockDataReassignMultiTimesTest.xml [took 8s]
Raw output
Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0) (fv-az889-710 executor driver): org.apache.uniffle.common.exception.RssSendFailedException: Errors on resending the blocks data to the remote shuffle-server.
at org.apache.spark.shuffle.writer.RssShuffleWriter.collectFailedBlocksToResend(RssShuffleWriter.java:622)
at org.apache.spark.shuffle.writer.RssShuffleWriter.checkDataIfAnyFailure(RssShuffleWriter.java:524)
at org.apache.spark.shuffle.writer.RssShuffleWriter.checkBlockSendResult(RssShuffleWriter.java:486)
at org.apache.spark.shuffle.writer.RssShuffleWriter.writeImpl(RssShuffleWriter.java:341)
at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:292)
at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Driver stacktrace:
org.apache.spark.SparkException:
Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0) (fv-az889-710 executor driver): org.apache.uniffle.common.exception.RssSendFailedException: Errors on resending the blocks data to the remote shuffle-server.
at org.apache.spark.shuffle.writer.RssShuffleWriter.collectFailedBlocksToResend(RssShuffleWriter.java:622)
at org.apache.spark.shuffle.writer.RssShuffleWriter.checkDataIfAnyFailure(RssShuffleWriter.java:524)
at org.apache.spark.shuffle.writer.RssShuffleWriter.checkBlockSendResult(RssShuffleWriter.java:486)
at org.apache.spark.shuffle.writer.RssShuffleWriter.writeImpl(RssShuffleWriter.java:341)
at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:292)
at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2403)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2352)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2351)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2351)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1109)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1109)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1109)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2591)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2533)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2522)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
Caused by: org.apache.uniffle.common.exception.RssSendFailedException: Errors on resending the blocks data to the remote shuffle-server.
at org.apache.spark.shuffle.writer.RssShuffleWriter.collectFailedBlocksToResend(RssShuffleWriter.java:622)
at org.apache.spark.shuffle.writer.RssShuffleWriter.checkDataIfAnyFailure(RssShuffleWriter.java:524)
at org.apache.spark.shuffle.writer.RssShuffleWriter.checkBlockSendResult(RssShuffleWriter.java:486)
at org.apache.spark.shuffle.writer.RssShuffleWriter.writeImpl(RssShuffleWriter.java:341)
at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:292)
at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
[2025-03-14 07:56:24.281] [main] [INFO] MiniDFSCluster.<init> - starting cluster: numNameNodes=1, numDataNodes=1
Formatting using clusterid: testClusterID
[2025-03-14 07:56:24.282] [main] [INFO] FSEditLog.newInstance - Edit logging is async:true
[2025-03-14 07:56:24.282] [main] [INFO] FSNamesystem.<init> - KeyProvider: null
[2025-03-14 07:56:24.282] [main] [INFO] FSNamesystem.<init> - fsLock is fair: true
[2025-03-14 07:56:24.282] [main] [INFO] FSNamesystem.<init> - Detailed lock hold time metrics enabled: false
[2025-03-14 07:56:24.282] [main] [INFO] DatanodeManager.<init> - dfs.block.invalidate.limit=1000
[2025-03-14 07:56:24.282] [main] [INFO] DatanodeManager.<init> - dfs.namenode.datanode.registration.ip-hostname-check=true
[2025-03-14 07:56:24.283] [main] [INFO] BlockManager.printBlockDeletionTime - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
[2025-03-14 07:56:24.283] [main] [INFO] BlockManager.printBlockDeletionTime - The block deletion will start around 2025 Mar 14 07:56:24
[2025-03-14 07:56:24.283] [main] [INFO] GSet.computeCapacity - Computing capacity for map BlocksMap
[2025-03-14 07:56:24.283] [main] [INFO] GSet.computeCapacity - VM type = 64-bit
[2025-03-14 07:56:24.283] [main] [INFO] GSet.computeCapacity - 2.0% max memory 4.4 GB = 91.0 MB
[2025-03-14 07:56:24.283] [main] [INFO] GSet.computeCapacity - capacity = 2^24 = 16777216 entries
[2025-03-14 07:56:24.285] [main] [INFO] BlockManager.createBlockTokenSecretManager - dfs.block.access.token.enable=false
[2025-03-14 07:56:24.285] [main] [INFO] BlockManager.<init> - defaultReplication = 1
[2025-03-14 07:56:24.286] [main] [INFO] BlockManager.<init> - maxReplication = 512
[2025-03-14 07:56:24.286] [main] [INFO] BlockManager.<init> - minReplication = 1
[2025-03-14 07:56:24.286] [main] [INFO] BlockManager.<init> - maxReplicationStreams = 2
[2025-03-14 07:56:24.286] [main] [INFO] BlockManager.<init> - replicationRecheckInterval = 3000
[2025-03-14 07:56:24.286] [main] [INFO] BlockManager.<init> - encryptDataTransfer = false
[2025-03-14 07:56:24.286] [main] [INFO] BlockManager.<init> - maxNumBlocksToLog = 1000
[2025-03-14 07:56:24.286] [main] [INFO] FSNamesystem.<init> - fsOwner = runner (auth:SIMPLE)
[2025-03-14 07:56:24.286] [main] [INFO] FSNamesystem.<init> - supergroup = supergroup
[2025-03-14 07:56:24.286] [main] [INFO] FSNamesystem.<init> - isPermissionEnabled = true
[2025-03-14 07:56:24.286] [main] [INFO] FSNamesystem.<init> - HA Enabled: false
[2025-03-14 07:56:24.286] [main] [INFO] FSNamesystem.<init> - Append Enabled: true
[2025-03-14 07:56:24.287] [main] [INFO] GSet.computeCapacity - Computing capacity for map INodeMap
[2025-03-14 07:56:24.287] [main] [INFO] GSet.computeCapacity - VM type = 64-bit
[2025-03-14 07:56:24.287] [main] [INFO] GSet.computeCapacity - 1.0% max memory 4.4 GB = 45.5 MB
[2025-03-14 07:56:24.287] [main] [INFO] GSet.computeCapacity - capacity = 2^23 = 8388608 entries
[2025-03-14 07:56:24.289] [main] [INFO] FSDirectory.<init> - ACLs enabled? false
[2025-03-14 07:56:24.289] [main] [INFO] FSDirectory.<init> - XAttrs enabled? true
[2025-03-14 07:56:24.289] [main] [INFO] NameNode.<init> - Caching file names occurring more than 10 times
[2025-03-14 07:56:24.289] [main] [INFO] GSet.computeCapacity - Computing capacity for map cachedBlocks
[2025-03-14 07:56:24.290] [main] [INFO] GSet.computeCapacity - VM type = 64-bit
[2025-03-14 07:56:24.290] [main] [INFO] GSet.computeCapacity - 0.25% max memory 4.4 GB = 11.4 MB
[2025-03-14 07:56:24.290] [main] [INFO] GSet.computeCapacity - capacity = 2^21 = 2097152 entries
[2025-03-14 07:56:24.290] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
[2025-03-14 07:56:24.290] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.min.datanodes = 0
[2025-03-14 07:56:24.290] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.extension = 0
[2025-03-14 07:56:24.291] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.window.num.buckets = 10
[2025-03-14 07:56:24.291] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.num.users = 10
[2025-03-14 07:56:24.291] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
[2025-03-14 07:56:24.291] [main] [INFO] FSNamesystem.initRetryCache - Retry cache on namenode is enabled
[2025-03-14 07:56:24.291] [main] [INFO] FSNamesystem.initRetryCache - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
[2025-03-14 07:56:24.291] [main] [INFO] GSet.computeCapacity - Computing capacity for map NameNodeRetryCache
[2025-03-14 07:56:24.291] [main] [INFO] GSet.computeCapacity - VM type = 64-bit
[2025-03-14 07:56:24.291] [main] [INFO] GSet.computeCapacity - 0.029999999329447746% max memory 4.4 GB = 1.4 MB
[2025-03-14 07:56:24.291] [main] [INFO] GSet.computeCapacity - capacity = 2^17 = 131072 entries
[2025-03-14 07:56:24.292] [main] [INFO] FSImage.format - Allocated new BlockPoolId: BP-448831992-127.0.0.1-1741938984292
[2025-03-14 07:56:24.294] [main] [INFO] Storage.format - Storage directory /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit3290718612133137915/name1 has been successfully formatted.
[2025-03-14 07:56:24.296] [main] [INFO] Storage.format - Storage directory /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit3290718612133137915/name2 has been successfully formatted.
[2025-03-14 07:56:24.296] [FSImageSaver for /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit3290718612133137915/name1 of type IMAGE_AND_EDITS] [INFO] FSImageFormatProtobuf.save - Saving image file /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit3290718612133137915/name1/current/fsimage.ckpt_0000000000000000000 using no compression
[2025-03-14 07:56:24.296] [FSImageSaver for /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit3290718612133137915/name2 of type IMAGE_AND_EDITS] [INFO] FSImageFormatProtobuf.save - Saving image file /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit3290718612133137915/name2/current/fsimage.ckpt_0000000000000000000 using no compression
[2025-03-14 07:56:24.332] [FSImageSaver for /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit3290718612133137915/name2 of type IMAGE_AND_EDITS] [INFO] FSImageFormatProtobuf.save - Image file /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit3290718612133137915/name2/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds.
[2025-03-14 07:56:24.333] [FSImageSaver for /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit3290718612133137915/name1 of type IMAGE_AND_EDITS] [INFO] FSImageFormatProtobuf.save - Image file /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit3290718612133137915/name1/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds.
[2025-03-14 07:56:24.335] [main] [INFO] NNStorageRetentionManager.getImageTxIdToRetain - Going to retain 1 images with txid >= 0
[2025-03-14 07:56:24.336] [main] [INFO] NameNode.createNameNode - createNameNode []
[2025-03-14 07:56:24.337] [main] [WARN] MetricsConfig.loadFirst - Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
[2025-03-14 07:56:24.338] [main] [INFO] MetricsSystemImpl.startTimer - Scheduled Metric snapshot period at 10 second(s).
[2025-03-14 07:56:24.338] [main] [INFO] MetricsSystemImpl.start - NameNode metrics system started
[2025-03-14 07:56:24.338] [main] [INFO] NameNode.setClientNamenodeAddress - fs.defaultFS is hdfs://127.0.0.1:0
[2025-03-14 07:56:24.341] [org.apache.hadoop.util.JvmPauseMonitor$Monitor@387e956e] [INFO] JvmPauseMonitor.run - Starting JVM pause monitor
[2025-03-14 07:56:24.341] [main] [INFO] DFSUtil.httpServerTemplateForNNAndJN - Starting Web-server for hdfs at: http://localhost:0
[2025-03-14 07:56:24.343] [main] [INFO] AuthenticationFilter.constructSecretProvider - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
[2025-03-14 07:56:24.343] [main] [WARN] HttpRequestLog.getRequestLog - Jetty request log can only be enabled using Log4j
[2025-03-14 07:56:24.343] [main] [INFO] HttpServer2.addGlobalFilter - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
[2025-03-14 07:56:24.344] [main] [INFO] HttpServer2.addFilter - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
[2025-03-14 07:56:24.344] [main] [INFO] HttpServer2.addFilter - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
[2025-03-14 07:56:24.345] [main] [INFO] HttpServer2.initWebHdfs - Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
[2025-03-14 07:56:24.345] [main] [INFO] HttpServer2.addJerseyResourcePackage - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
[2025-03-14 07:56:24.345] [main] [INFO] HttpServer2.openListeners - Jetty bound to port 38613
[2025-03-14 07:56:24.346] [main] [INFO] log.info - jetty-6.1.26
[2025-03-14 07:56:24.349] [main] [INFO] log.info - Extract jar:file:/home/runner/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.8.5/hadoop-hdfs-2.8.5-tests.jar!/webapps/hdfs to /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/Jetty_localhost_38613_hdfs____8cbbi5/webapp
[2025-03-14 07:56:24.415] [main] [INFO] log.info - Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38613
[2025-03-14 07:56:24.416] [main] [INFO] FSEditLog.newInstance - Edit logging is async:true
[2025-03-14 07:56:24.416] [main] [INFO] FSNamesystem.<init> - KeyProvider: null
[2025-03-14 07:56:24.416] [main] [INFO] FSNamesystem.<init> - fsLock is fair: true
[2025-03-14 07:56:24.416] [main] [INFO] FSNamesystem.<init> - Detailed lock hold time metrics enabled: false
[2025-03-14 07:56:24.417] [main] [INFO] DatanodeManager.<init> - dfs.block.invalidate.limit=1000
[2025-03-14 07:56:24.417] [main] [INFO] DatanodeManager.<init> - dfs.namenode.datanode.registration.ip-hostname-check=true
[2025-03-14 07:56:24.417] [main] [INFO] BlockManager.printBlockDeletionTime - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
[2025-03-14 07:56:24.417] [main] [INFO] BlockManager.printBlockDeletionTime - The block deletion will start around 2025 Mar 14 07:56:24
[2025-03-14 07:56:24.417] [main] [INFO] GSet.computeCapacity - Computing capacity for map BlocksMap
[2025-03-14 07:56:24.417] [main] [INFO] GSet.computeCapacity - VM type = 64-bit
[2025-03-14 07:56:24.417] [main] [INFO] GSet.computeCapacity - 2.0% max memory 4.4 GB = 91.0 MB
[2025-03-14 07:56:24.417] [main] [INFO] GSet.computeCapacity - capacity = 2^24 = 16777216 entries
[2025-03-14 07:56:24.419] [main] [INFO] BlockManager.createBlockTokenSecretManager - dfs.block.access.token.enable=false
[2025-03-14 07:56:24.419] [main] [INFO] BlockManager.<init> - defaultReplication = 1
[2025-03-14 07:56:24.419] [main] [INFO] BlockManager.<init> - maxReplication = 512
[2025-03-14 07:56:24.419] [main] [INFO] BlockManager.<init> - minReplication = 1
[2025-03-14 07:56:24.419] [main] [INFO] BlockManager.<init> - maxReplicationStreams = 2
[2025-03-14 07:56:24.419] [main] [INFO] BlockManager.<init> - replicationRecheckInterval = 3000
[2025-03-14 07:56:24.419] [main] [INFO] BlockManager.<init> - encryptDataTransfer = false
[2025-03-14 07:56:24.419] [main] [INFO] BlockManager.<init> - maxNumBlocksToLog = 1000
[2025-03-14 07:56:24.420] [main] [INFO] FSNamesystem.<init> - fsOwner = runner (auth:SIMPLE)
[2025-03-14 07:56:24.420] [main] [INFO] FSNamesystem.<init> - supergroup = supergroup
[2025-03-14 07:56:24.420] [main] [INFO] FSNamesystem.<init> - isPermissionEnabled = true
[2025-03-14 07:56:24.420] [main] [INFO] FSNamesystem.<init> - HA Enabled: false
[2025-03-14 07:56:24.420] [main] [INFO] FSNamesystem.<init> - Append Enabled: true
[2025-03-14 07:56:24.420] [main] [INFO] GSet.computeCapacity - Computing capacity for map INodeMap
[2025-03-14 07:56:24.420] [main] [INFO] GSet.computeCapacity - VM type = 64-bit
[2025-03-14 07:56:24.420] [main] [INFO] GSet.computeCapacity - 1.0% max memory 4.4 GB = 45.5 MB
[2025-03-14 07:56:24.420] [main] [INFO] GSet.computeCapacity - capacity = 2^23 = 8388608 entries
[2025-03-14 07:56:24.421] [main] [INFO] FSDirectory.<init> - ACLs enabled? false
[2025-03-14 07:56:24.421] [main] [INFO] FSDirectory.<init> - XAttrs enabled? true
[2025-03-14 07:56:24.421] [main] [INFO] NameNode.<init> - Caching file names occurring more than 10 times
[2025-03-14 07:56:24.421] [main] [INFO] GSet.computeCapacity - Computing capacity for map cachedBlocks
[2025-03-14 07:56:24.421] [main] [INFO] GSet.computeCapacity - VM type = 64-bit
[2025-03-14 07:56:24.422] [main] [INFO] GSet.computeCapacity - 0.25% max memory 4.4 GB = 11.4 MB
[2025-03-14 07:56:24.422] [main] [INFO] GSet.computeCapacity - capacity = 2^21 = 2097152 entries
[2025-03-14 07:56:24.422] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
[2025-03-14 07:56:24.422] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.min.datanodes = 0
[2025-03-14 07:56:24.422] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.extension = 0
[2025-03-14 07:56:24.422] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.window.num.buckets = 10
[2025-03-14 07:56:24.422] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.num.users = 10
[2025-03-14 07:56:24.422] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
[2025-03-14 07:56:24.423] [main] [INFO] FSNamesystem.initRetryCache - Retry cache on namenode is enabled
[2025-03-14 07:56:24.423] [main] [INFO] FSNamesystem.initRetryCache - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
[2025-03-14 07:56:24.423] [main] [INFO] GSet.computeCapacity - Computing capacity for map NameNodeRetryCache
[2025-03-14 07:56:24.423] [main] [INFO] GSet.computeCapacity - VM type = 64-bit
[2025-03-14 07:56:24.423] [main] [INFO] GSet.computeCapacity - 0.029999999329447746% max memory 4.4 GB = 1.4 MB
[2025-03-14 07:56:24.423] [main] [INFO] GSet.computeCapacity - capacity = 2^17 = 131072 entries
[2025-03-14 07:56:24.425] [main] [INFO] Storage.tryLock - Lock on /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit3290718612133137915/name1/in_use.lock acquired by nodename 28234@action-host
[2025-03-14 07:56:24.426] [main] [INFO] Storage.tryLock - Lock on /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit3290718612133137915/name2/in_use.lock acquired by nodename 28234@action-host
[2025-03-14 07:56:24.426] [main] [INFO] FileJournalManager.recoverUnfinalizedSegments - Recovering unfinalized segments in /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit3290718612133137915/name1/current
[2025-03-14 07:56:24.426] [main] [INFO] FileJournalManager.recoverUnfinalizedSegments - Recovering unfinalized segments in /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit3290718612133137915/name2/current
[2025-03-14 07:56:24.427] [main] [INFO] FSImage.loadFSImage - No edit log streams selected.
[2025-03-14 07:56:24.427] [main] [INFO] FSImage.loadFSImageFile - Planning to load image: FSImageFile(file=/home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit3290718612133137915/name1/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
[2025-03-14 07:56:24.427] [main] [INFO] FSImageFormatPBINode.loadINodeSection - Loading 1 INodes.
[2025-03-14 07:56:24.428] [main] [INFO] FSImageFormatProtobuf.load - Loaded FSImage in 0 seconds.
[2025-03-14 07:56:24.428] [main] [INFO] FSImage.loadFSImage - Loaded image for txid 0 from /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit3290718612133137915/name1/current/fsimage_0000000000000000000
[2025-03-14 07:56:24.428] [main] [INFO] FSNamesystem.loadFSImage - Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
[2025-03-14 07:56:24.428] [main] [INFO] FSEditLog.startLogSegment - Starting log segment at 1
[2025-03-14 07:56:24.435] [main] [INFO] NameCache.initialized - initialized with 0 entries 0 lookups
[2025-03-14 07:56:24.435] [main] [INFO] FSNamesystem.loadFromDisk - Finished loading FSImage in 12 msecs
[2025-03-14 07:56:24.435] [main] [INFO] NameNode.<init> - RPC server is binding to localhost:0
[2025-03-14 07:56:24.436] [main] [INFO] CallQueueManager.<init> - Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
[2025-03-14 07:56:24.436] [Socket Reader #1 for port 33683] [INFO] Server.run - Starting Socket Reader #1 for port 33683
[2025-03-14 07:56:24.442] [main] [INFO] NameNode.initialize - Clients are to use localhost:33683 to access this namenode/service.
[2025-03-14 07:56:24.442] [main] [INFO] FSNamesystem.registerMBean - Registered FSNamesystemState MBean
[2025-03-14 07:56:24.455] [main] [INFO] LeaseManager.getNumUnderConstructionBlocks - Number of blocks under construction: 0
[2025-03-14 07:56:24.455] [main] [INFO] BlockManager.initializeReplQueues - initializing replication queues
[2025-03-14 07:56:24.455] [main] [INFO] StateChange.leave - STATE* Leaving safe mode after 0 secs
[2025-03-14 07:56:24.455] [main] [INFO] StateChange.leave - STATE* Network topology has 0 racks and 0 datanodes
[2025-03-14 07:56:24.455] [main] [INFO] StateChange.leave - STATE* UnderReplicatedBlocks has 0 blocks
[2025-03-14 07:56:24.465] [Replication Queue Initializer] [INFO] BlockManager.processMisReplicatesAsync - Total number of blocks = 0
[2025-03-14 07:56:24.465] [Replication Queue Initializer] [INFO] BlockManager.processMisReplicatesAsync - Number of invalid blocks = 0
[2025-03-14 07:56:24.465] [Replication Queue Initializer] [INFO] BlockManager.processMisReplicatesAsync - Number of under-replicated blocks = 0
[2025-03-14 07:56:24.465] [Replication Queue Initializer] [INFO] BlockManager.processMisReplicatesAsync - Number of over-replicated blocks = 0
[2025-03-14 07:56:24.465] [Replication Queue Initializer] [INFO] BlockManager.processMisReplicatesAsync - Number of blocks being written = 0
[2025-03-14 07:56:24.465] [Replication Queue Initializer] [INFO] StateChange.processMisReplicatesAsync - STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 10 msec
[2025-03-14 07:56:24.467] [IPC Server Responder] [INFO] Server.run - IPC Server Responder: starting
[2025-03-14 07:56:24.468] [IPC Server listener on 33683] [INFO] Server.run - IPC Server listener on 33683: starting
[2025-03-14 07:56:24.472] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[668063806], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:24.472] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[670090651], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:24.473] [main] [INFO] NameNode.startCommonServices - NameNode RPC up at: localhost/127.0.0.1:33683
[2025-03-14 07:56:24.473] [main] [WARN] MetricsLoggerTask.makeMetricsLoggerAsync - Metrics logging will not be async since the logger is not log4j
[2025-03-14 07:56:24.474] [main] [INFO] FSNamesystem.startActiveServices - Starting services required for active state
[2025-03-14 07:56:24.474] [main] [INFO] FSDirectory.updateCountForQuota - Initializing quota with 4 thread(s)
[2025-03-14 07:56:24.476] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[669076040], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:24.476] [main] [INFO] FSDirectory.updateCountForQuota - Quota initialization completed in 1 milliseconds
name space=1
storage space=0
storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0
[2025-03-14 07:56:24.484] [CacheReplicationMonitor(1046053200)] [INFO] CacheReplicationMonitor.run - Starting CacheReplicationMonitor with interval 30000 milliseconds
[2025-03-14 07:56:24.485] [main] [INFO] MiniDFSCluster.startDataNodes - Starting DataNode 0 with dfs.datanode.data.dir: [DISK]file:/home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit3290718612133137915/data/data1,[DISK]file:/home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit3290718612133137915/data/data2
[2025-03-14 07:56:24.505] [main] [INFO] MetricsSystemImpl.init - DataNode metrics system started (again)
[2025-03-14 07:56:24.505] [main] [INFO] BlockScanner.<init> - Initialized block scanner with targetBytesPerSec 1048576
[2025-03-14 07:56:24.505] [main] [INFO] DataNode.<init> - Configured hostname is 127.0.0.1
[2025-03-14 07:56:24.506] [main] [INFO] DataNode.startDataNode - Starting DataNode with maxLockedMemory = 0
[2025-03-14 07:56:24.506] [main] [INFO] DataNode.initDataXceiver - Opened streaming server at /127.0.0.1:35285
[2025-03-14 07:56:24.506] [main] [INFO] DataNode.<init> - Balancing bandwith is 10485760 bytes/s
[2025-03-14 07:56:24.506] [main] [INFO] DataNode.<init> - Number threads for balancing is 50
[2025-03-14 07:56:24.508] [main] [INFO] AuthenticationFilter.constructSecretProvider - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
[2025-03-14 07:56:24.508] [main] [WARN] HttpRequestLog.getRequestLog - Jetty request log can only be enabled using Log4j
[2025-03-14 07:56:24.509] [main] [INFO] HttpServer2.addGlobalFilter - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
[2025-03-14 07:56:24.509] [main] [INFO] HttpServer2.addFilter - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
[2025-03-14 07:56:24.509] [main] [INFO] HttpServer2.addFilter - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
[2025-03-14 07:56:24.509] [main] [INFO] HttpServer2.openListeners - Jetty bound to port 35023
[2025-03-14 07:56:24.510] [main] [INFO] log.info - jetty-6.1.26
[2025-03-14 07:56:24.513] [main] [INFO] log.info - Extract jar:file:/home/runner/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.8.5/hadoop-hdfs-2.8.5-tests.jar!/webapps/datanode to /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/Jetty_localhost_35023_datanode____.bpfe8a/webapp
[2025-03-14 07:56:24.577] [main] [INFO] log.info - Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35023
[2025-03-14 07:56:24.578] [main] [INFO] DatanodeHttpServer.start - Listening HTTP traffic on /127.0.0.1:39397
[2025-03-14 07:56:24.578] [org.apache.hadoop.util.JvmPauseMonitor$Monitor@1148127b] [INFO] JvmPauseMonitor.run - Starting JVM pause monitor
[2025-03-14 07:56:24.578] [main] [INFO] DataNode.startDataNode - dnUserName = runner
[2025-03-14 07:56:24.578] [main] [INFO] DataNode.startDataNode - supergroup = supergroup
[2025-03-14 07:56:24.579] [main] [INFO] CallQueueManager.<init> - Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
[2025-03-14 07:56:24.579] [Socket Reader #1 for port 34319] [INFO] Server.run - Starting Socket Reader #1 for port 34319
[2025-03-14 07:56:24.582] [main] [INFO] DataNode.initIpcServer - Opened IPC server at /127.0.0.1:34319
[2025-03-14 07:56:24.584] [main] [INFO] DataNode.refreshNamenodes - Refresh request received for nameservices: null
[2025-03-14 07:56:24.584] [main] [INFO] DataNode.doRefreshNamenodes - Starting BPOfferServices for nameservices: <default>
[2025-03-14 07:56:24.584] [Thread-573] [INFO] DataNode.run - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:33683 starting to offer service
[2025-03-14 07:56:24.584] [main] [WARN] MetricsLoggerTask.makeMetricsLoggerAsync - Metrics logging will not be async since the logger is not log4j
[2025-03-14 07:56:24.585] [IPC Server Responder] [INFO] Server.run - IPC Server Responder: starting
[2025-03-14 07:56:24.589] [IPC Server listener on 34319] [INFO] Server.run - IPC Server listener on 34319: starting
[2025-03-14 07:56:24.607] [Thread-573] [INFO] DataNode.verifyAndSetNamespaceInfo - Acknowledging ACTIVE Namenode during handshake. Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:33683
[2025-03-14 07:56:24.608] [Thread-573] [INFO] Storage.getParallelVolumeLoadThreadsNum - Using 2 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=2, dataDirs=2)
[2025-03-14 07:56:24.609] [Thread-573] [INFO] Storage.tryLock - Lock on /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit3290718612133137915/data/data1/in_use.lock acquired by nodename 28234@action-host
[2025-03-14 07:56:24.609] [Thread-573] [INFO] Storage.loadStorageDirectory - Storage directory /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit3290718612133137915/data/data1 is not formatted for namespace 1184470502. Formatting...
[2025-03-14 07:56:24.610] [Thread-573] [INFO] Storage.createStorageID - Generated new storageID DS-2f941fdf-135a-49ce-b296-f381184eb89d for directory /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit3290718612133137915/data/data1
[2025-03-14 07:56:24.615] [Thread-573] [INFO] Storage.tryLock - Lock on /home/runner/work/uniffle/uniffle/integration-test/spark3/target/tmp/junit3290718612133137915/data/data2/in_use.lock acquired by nodename 28234@action-host
[2025-03-14 07:56:24.615] [Thread-573] [INFO] Storage.loadStorageDirector…fle data can't be sent to shuffle-server, is fast fail: true, cancelled task size: 1
[2025-03-14 07:56:31.446] [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)] [INFO] RssShuffleWriter.doReassignOnBlockSendFailure - Initiate reassignOnBlockSendFailure. failure partition servers: {0=[ReceivingFailureServer{serverId='10.1.0.10-20019', statusCode=NO_BUFFER}], 1=[ReceivingFailureServer{serverId='10.1.0.10-20019', statusCode=NO_BUFFER}], 2=[ReceivingFailureServer{serverId='10.1.0.10-20019', statusCode=NO_BUFFER}], 3=[ReceivingFailureServer{serverId='10.1.0.10-20019', statusCode=NO_BUFFER}]}
[2025-03-14 07:56:31.447] [Grpc-4] [INFO] ShuffleManagerGrpcService.reassignOnBlockSendFailure - Accepted reassign request on block sent failure for shuffleId: 0, stageId: 0, stageAttemptNumber: 0 from taskAttemptId: 0 on executorId: driver while partition split:false
[2025-03-14 07:56:31.448] [Grpc-2] [INFO] CoordinatorGrpcService.getShuffleAssignments - Request of getShuffleAssignments for appId[local-1741938989329_1741938989296], shuffleId[0], partitionNum[1], partitionNumPerRange[1], replica[1], requiredTags[[ss_v5, GRPC]], requiredShuffleServerNumber[1], faultyServerIds[1], stageId[0], stageAttemptNumber[0], isReassign[true]
[2025-03-14 07:56:31.449] [Grpc-2] [INFO] CoordinatorGrpcService.logAssignmentResult - Shuffle Servers of assignment for appId[local-1741938989329_1741938989296], shuffleId[0] are [10.1.0.10-20020]
[2025-03-14 07:56:31.449] [Grpc-2] [INFO] COORDINATOR_RPC_AUDIT_LOG.close - cmd=getShuffleAssignments statusCode=SUCCESS from=/10.1.0.10:57202 executionTimeUs=361 appId=local-1741938989329_1741938989296 args{shuffleId=0, partitionNum=1, partitionNumPerRange=1, replica=1, requiredTags=[ss_v5, GRPC], requiredShuffleServerNumber=1, faultyServerIds=[10.1.0.10-20019], stageId=0, stageAttemptNumber=0, isReassign=true}
[2025-03-14 07:56:31.449] [Grpc-4] [INFO] CoordinatorGrpcRetryableClient.lambda$getShuffleAssignments$4 - Success to get shuffle server assignment from Coordinator grpc client ref to 10.1.0.10:19999
[2025-03-14 07:56:31.449] [Grpc-4] [INFO] RssShuffleManagerBase.requestShuffleAssignment - Finished reassign
[2025-03-14 07:56:31.449] [Grpc-4] [INFO] RssShuffleManagerBase.registerShuffleServers - Start to register shuffleId[0]
[2025-03-14 07:56:31.454] [Grpc-1] [INFO] ShuffleServerGrpcService.registerShuffle - Get register request for appId[local-1741938989329_1741938989296], shuffleId[0], remoteStorage[] with 1 partition ranges. User: runner
[2025-03-14 07:56:31.454] [Grpc-1] [INFO] ShuffleTaskInfo.setProperties - local-1741938989329_1741938989296 set properties to {spark.rss.client.blockId.partitionIdBits=20, spark.rss.client.blockId.sequenceNoBits=21, spark.rss.client.reassign.enabled=true, spark.rss.writer.serializer.buffer.size=128k, spark.rss.blockId.maxPartitions=1048576, spark.rss.client.type=GRPC, spark.rss.client.send.check.interval.ms=1000, spark.rss.client.read.buffer.size=1m, spark.rss.client.assignment.shuffle.nodes.max=1, spark.rss.shuffle.manager.grpc.port=54252, spark.rss.writer.buffer.spill.size=32m, spark.rss.client.send.check.timeout.ms=30000, spark.rss.writer.buffer.size=4m, spark.rss.client.blockId.taskAttemptIdBits=22, spark.rss.client.reassign.blockRetryMaxTimes=10, spark.rss.test.mode.enable=true, spark.rss.client.retry.interval.max=1000, spark.rss.index.read.limit=100, spark.rss.storage.type=MEMORY_LOCALFILE, spark.rss.writer.buffer.segment.size=256k, spark.rss.client.retry.max=2, spark.rss.coordinator.quorum=10.1.0.10:19999, spark.rss.enabled=true, spark.rss.heartbeat.interval=2000}
[2025-03-14 07:56:31.454] [Grpc-1] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=registerShuffle statusCode=SUCCESS from=/10.1.0.10:51138 executionTimeUs=446 appId=local-1741938989329_1741938989296 shuffleId=0 args{remoteStoragePath=, user=runner, stageAttemptNumber=0}
[2025-03-14 07:56:31.455] [Grpc-4] [INFO] RssShuffleManagerBase.registerShuffleServers - Finish register shuffleId[0] with 6 ms
[2025-03-14 07:56:31.455] [Grpc-4] [INFO] RssShuffleManagerBase.reassignOnBlockSendFailure - Finished reassignOnBlockSendFailure request and cost 7(ms). Reassign result: {10.1.0.10-20019={0=[10.1.0.10-20020], 1=[10.1.0.10-20020], 2=[10.1.0.10-20020], 3=[10.1.0.10-20020]}}
[2025-03-14 07:56:31.455] [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)] [INFO] RssShuffleWriter.doReassignOnBlockSendFailure - Success to reassign. The latest available assignment is {0=[ShuffleServerInfo{host[10.1.0.10], grpc port[20020]}], 1=[ShuffleServerInfo{host[10.1.0.10], grpc port[20020]}], 2=[ShuffleServerInfo{host[10.1.0.10], grpc port[20020]}], 3=[ShuffleServerInfo{host[10.1.0.10], grpc port[20020]}]}
[2025-03-14 07:56:31.456] [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)] [INFO] RssShuffleWriter.reassignAndResendBlocks - Failed blocks have been resent to data pusher queue since reassignment has been finished successfully
[2025-03-14 07:56:31.456] [Grpc-4] [INFO] MockedShuffleServerGrpcService.requireBuffer - Make require buffer mocked failed.
[2025-03-14 07:56:31.457] [client-data-transfer-1] [WARN] ShuffleServerGrpcClient.requirePreAllocation - ShuffleServer 10.1.0.10:20020 is full and can't send shuffle data successfully due to NO_BUFFER after retry 0 times, cost: 1(ms)
[2025-03-14 07:56:31.457] [client-data-transfer-1] [INFO] RetryUtils.retryWithCondition - Retry due to: org.apache.uniffle.common.exception.RssException. Use DEBUG level to see the full stack: requirePreAllocation failed! size[4048], host[10.1.0.10], port[20020]
[2025-03-14 07:56:31.457] [client-data-transfer-1] [INFO] RetryUtils.retryWithCondition - Will retry 2 more time(s) after waiting 1000 milliseconds.
[2025-03-14 07:56:31.475] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[668063806], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:31.475] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[670090651], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:31.478] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[669076040], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:31.605] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[668088640], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:31.625] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[669088640], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:31.642] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[670088640], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:31.822] [ApplicationManager-0] [INFO] ApplicationManager.statusCheck - Start to check status for 1 applications.
[2025-03-14 07:56:31.862] [DynamicClientConfService-0] [WARN] DynamicClientConfService.refreshClientConf - Error when update client conf with hdfs://localhost:38881/test/client_conf.
java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:466)
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1628)
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1614)
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:900)
at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:114)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:964)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:961)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:961)
at org.apache.uniffle.coordinator.conf.DynamicClientConfService.refreshClientConf(DynamicClientConfService.java:115)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
[2025-03-14 07:56:31.975] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[668063806], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:31.975] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[670090651], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:31.978] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[669076040], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:32.106] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[668088640], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:32.125] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[669088640], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:32.142] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[670088640], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:32.416] [Grpc-4] [INFO] ShuffleServerGrpcService.appHeartbeat - Get heartbeat from local-1741938989329_1741938989296
[2025-03-14 07:56:32.416] [Grpc-7] [INFO] ShuffleServerGrpcService.appHeartbeat - Get heartbeat from local-1741938989329_1741938989296
[2025-03-14 07:56:32.417] [client-heartbeat-2] [INFO] CoordinatorGrpcRetryableClient.lambda$scheduleAtFixedRateToSendAppHeartBeat$0 - Successfully send heartbeat to Coordinator grpc client ref to 10.1.0.10:19999
[2025-03-14 07:56:32.417] [rss-heartbeat-0] [INFO] RssShuffleManagerBase.lambda$startHeartbeat$11 - Finish send heartbeat to coordinator and servers
[2025-03-14 07:56:32.458] [Grpc-0] [INFO] MockedShuffleServerGrpcService.requireBuffer - Make require buffer mocked failed.
[2025-03-14 07:56:32.458] [client-data-transfer-1] [WARN] ShuffleServerGrpcClient.requirePreAllocation - ShuffleServer 10.1.0.10:20020 is full and can't send shuffle data successfully due to NO_BUFFER after retry 0 times, cost: 1(ms)
[2025-03-14 07:56:32.458] [client-data-transfer-1] [INFO] RetryUtils.retryWithCondition - Retry due to: org.apache.uniffle.common.exception.RssException. Use DEBUG level to see the full stack: requirePreAllocation failed! size[4048], host[10.1.0.10], port[20020]
[2025-03-14 07:56:32.458] [client-data-transfer-1] [INFO] RetryUtils.retryWithCondition - Will retry 1 more time(s) after waiting 1000 milliseconds.
[2025-03-14 07:56:32.475] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[670090651], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:32.475] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[668063806], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:32.478] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[669076040], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:32.606] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[668088640], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:32.625] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[669088640], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:32.642] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[670088640], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:32.862] [DynamicClientConfService-0] [WARN] DynamicClientConfService.refreshClientConf - Error when update client conf with hdfs://localhost:38881/test/client_conf.
java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:466)
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1628)
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1614)
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:900)
at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:114)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:964)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:961)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:961)
at org.apache.uniffle.coordinator.conf.DynamicClientConfService.refreshClientConf(DynamicClientConfService.java:115)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
[2025-03-14 07:56:32.976] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[670090651], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:32.976] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[668063806], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:32.978] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[669076040], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:33.106] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[668088640], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:33.125] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[669088640], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:33.142] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[670088640], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:33.459] [Grpc-3] [INFO] MockedShuffleServerGrpcService.requireBuffer - Make require buffer mocked failed.
[2025-03-14 07:56:33.459] [client-data-transfer-1] [WARN] ShuffleServerGrpcClient.requirePreAllocation - ShuffleServer 10.1.0.10:20020 is full and can't send shuffle data successfully due to NO_BUFFER after retry 0 times, cost: 1(ms)
[2025-03-14 07:56:33.459] [client-data-transfer-1] [WARN] ShuffleServerGrpcClient.sendShuffleData - Failed to send shuffle data due to
org.apache.uniffle.common.exception.RssException: requirePreAllocation failed! size[4048], host[10.1.0.10], port[20020]
at org.apache.uniffle.client.impl.grpc.ShuffleServerGrpcClient.lambda$sendShuffleData$0(ShuffleServerGrpcClient.java:599)
at org.apache.uniffle.common.util.RetryUtils.retryWithCondition(RetryUtils.java:81)
at org.apache.uniffle.client.impl.grpc.ShuffleServerGrpcClient.sendShuffleData(ShuffleServerGrpcClient.java:583)
at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.lambda$sendShuffleDataAsync$1(ShuffleWriteClientImpl.java:206)
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
[2025-03-14 07:56:33.460] [client-data-transfer-1] [WARN] ShuffleWriteClientImpl.lambda$sendShuffleDataAsync$1 - ShuffleWriteClientImpl sendShuffleData with 4 blocks to 10.1.0.10-20020 cost: 2004(ms), it failed with statusCode[NO_BUFFER]
[2025-03-14 07:56:33.460] [org.apache.spark.shuffle.writer.DataPusher-1] [ERROR] ShuffleWriteClientImpl.sendShuffleDataAsync - Some shuffle data can't be sent to shuffle-server, is fast fail: true, cancelled task size: 1
[2025-03-14 07:56:33.462] [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)] [INFO] RssShuffleWriter.doReassignOnBlockSendFailure - Initiate reassignOnBlockSendFailure. failure partition servers: {0=[ReceivingFailureServer{serverId='10.1.0.10-20020', statusCode=NO_BUFFER}], 1=[ReceivingFailureServer{serverId='10.1.0.10-20020', statusCode=NO_BUFFER}], 2=[ReceivingFailureServer{serverId='10.1.0.10-20020', statusCode=NO_BUFFER}], 3=[ReceivingFailureServer{serverId='10.1.0.10-20020', statusCode=NO_BUFFER}]}
[2025-03-14 07:56:33.463] [Grpc-7] [INFO] ShuffleManagerGrpcService.reassignOnBlockSendFailure - Accepted reassign request on block sent failure for shuffleId: 0, stageId: 0, stageAttemptNumber: 0 from taskAttemptId: 0 on executorId: driver while partition split:false
[2025-03-14 07:56:33.463] [Grpc-2] [INFO] CoordinatorGrpcService.getShuffleAssignments - Request of getShuffleAssignments for appId[local-1741938989329_1741938989296], shuffleId[0], partitionNum[1], partitionNumPerRange[1], replica[1], requiredTags[[ss_v5, GRPC]], requiredShuffleServerNumber[1], faultyServerIds[2], stageId[0], stageAttemptNumber[0], isReassign[true]
[2025-03-14 07:56:33.464] [Grpc-2] [INFO] CoordinatorGrpcService.logAssignmentResult - Shuffle Servers of assignment for appId[local-1741938989329_1741938989296], shuffleId[0] are [10.1.0.10-20021]
[2025-03-14 07:56:33.464] [Grpc-2] [INFO] COORDINATOR_RPC_AUDIT_LOG.close - cmd=getShuffleAssignments statusCode=SUCCESS from=/10.1.0.10:57202 executionTimeUs=299 appId=local-1741938989329_1741938989296 args{shuffleId=0, partitionNum=1, partitionNumPerRange=1, replica=1, requiredTags=[ss_v5, GRPC], requiredShuffleServerNumber=1, faultyServerIds=[10.1.0.10-20019, 10.1.0.10-20020], stageId=0, stageAttemptNumber=0, isReassign=true}
[2025-03-14 07:56:33.464] [Grpc-7] [INFO] CoordinatorGrpcRetryableClient.lambda$getShuffleAssignments$4 - Success to get shuffle server assignment from Coordinator grpc client ref to 10.1.0.10:19999
[2025-03-14 07:56:33.464] [Grpc-7] [INFO] RssShuffleManagerBase.requestShuffleAssignment - Finished reassign
[2025-03-14 07:56:33.464] [Grpc-7] [INFO] RssShuffleManagerBase.registerShuffleServers - Start to register shuffleId[0]
[2025-03-14 07:56:33.467] [Grpc-1] [INFO] ShuffleServerGrpcService.registerShuffle - Get register request for appId[local-1741938989329_1741938989296], shuffleId[0], remoteStorage[] with 1 partition ranges. User: runner
[2025-03-14 07:56:33.467] [Grpc-1] [INFO] ShuffleTaskInfo.setProperties - local-1741938989329_1741938989296 set properties to {spark.rss.client.blockId.partitionIdBits=20, spark.rss.client.blockId.sequenceNoBits=21, spark.rss.client.reassign.enabled=true, spark.rss.writer.serializer.buffer.size=128k, spark.rss.blockId.maxPartitions=1048576, spark.rss.client.type=GRPC, spark.rss.client.send.check.interval.ms=1000, spark.rss.client.read.buffer.size=1m, spark.rss.client.assignment.shuffle.nodes.max=1, spark.rss.shuffle.manager.grpc.port=54252, spark.rss.writer.buffer.spill.size=32m, spark.rss.client.send.check.timeout.ms=30000, spark.rss.writer.buffer.size=4m, spark.rss.client.blockId.taskAttemptIdBits=22, spark.rss.client.reassign.blockRetryMaxTimes=10, spark.rss.test.mode.enable=true, spark.rss.client.retry.interval.max=1000, spark.rss.index.read.limit=100, spark.rss.storage.type=MEMORY_LOCALFILE, spark.rss.writer.buffer.segment.size=256k, spark.rss.client.retry.max=2, spark.rss.coordinator.quorum=10.1.0.10:19999, spark.rss.enabled=true, spark.rss.heartbeat.interval=2000}
[2025-03-14 07:56:33.468] [Grpc-1] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=registerShuffle statusCode=SUCCESS from=/10.1.0.10:51104 executionTimeUs=502 appId=local-1741938989329_1741938989296 shuffleId=0 args{remoteStoragePath=, user=runner, stageAttemptNumber=0}
[2025-03-14 07:56:33.470] [Grpc-7] [INFO] RssShuffleManagerBase.registerShuffleServers - Finish register shuffleId[0] with 6 ms
[2025-03-14 07:56:33.470] [Grpc-7] [INFO] RssShuffleManagerBase.reassignOnBlockSendFailure - Finished reassignOnBlockSendFailure request and cost 7(ms). Reassign result: {10.1.0.10-20020={0=[10.1.0.10-20021], 1=[10.1.0.10-20021], 2=[10.1.0.10-20021], 3=[10.1.0.10-20021]}}
[2025-03-14 07:56:33.471] [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)] [INFO] RssShuffleWriter.doReassignOnBlockSendFailure - Success to reassign. The latest available assignment is {0=[ShuffleServerInfo{host[10.1.0.10], grpc port[20021]}], 1=[ShuffleServerInfo{host[10.1.0.10], grpc port[20021]}], 2=[ShuffleServerInfo{host[10.1.0.10], grpc port[20021]}], 3=[ShuffleServerInfo{host[10.1.0.10], grpc port[20021]}]}
[2025-03-14 07:56:33.471] [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)] [INFO] RssShuffleWriter.reassignAndResendBlocks - Failed blocks have been resent to data pusher queue since reassignment has been finished successfully
[2025-03-14 07:56:33.473] [Grpc-4] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=requireBuffer statusCode=SUCCESS from=/10.1.0.10:51104 executionTimeUs=496 appId=local-1741938989329_1741938989296 shuffleId=0 args{requireSize=4048, partitionIdsSize=4, partitionIds=[0~3]} return{requireBufferId=1}
[2025-03-14 07:56:33.474] [Grpc-6] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[670092688], preAllocatedSize[4048], inFlushSize[0]
[2025-03-14 07:56:33.475] [Grpc-6] [INFO] ShuffleBufferManager.pickFlushedShuffle - Pick application_shuffleId[local-1741938989329_1741938989296/0] with 957 bytes
[2025-03-14 07:56:33.475] [Grpc-6] [ERROR] ShuffleServerGrpcService.sendShuffleData - Error happened when shuffleEngine.write for appId[local-1741938989329_1741938989296], shuffleId[0], partitionId[1], statusCode=NO_REGISTER
[2025-03-14 07:56:33.476] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[670090651], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:33.477] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[668063806], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:33.477] [Grpc-6] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=sendShuffleData statusCode=NO_REGISTER from=/10.1.0.10:51104 executionTimeUs=3265 appId=local-1741938989329_1741938989296 shuffleId=0 args{requireBufferId=1, timestamp=1741938993473, stageAttemptNumber=0, shuffleDataSize=4}
[2025-03-14 07:56:33.477] [client-data-transfer-2] [WARN] ShuffleServerGrpcClient.sendShuffleData - Failed to send shuffle data due to
org.apache.uniffle.common.exception.NotRetryException: Can't send shuffle data with 4 blocks to 10.1.0.10:20021, statusCode=NO_REGISTER, errorMsg:Error happened when shuffleEngine.write for appId[local-1741938989329_1741938989296], shuffleId[0], partitionId[1], statusCode=NO_REGISTER
at org.apache.uniffle.client.impl.grpc.ShuffleServerGrpcClient.lambda$sendShuffleData$0(ShuffleServerGrpcClient.java:640)
at org.apache.uniffle.common.util.RetryUtils.retryWithCondition(RetryUtils.java:81)
at org.apache.uniffle.client.impl.grpc.ShuffleServerGrpcClient.sendShuffleData(ShuffleServerGrpcClient.java:583)
at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.lambda$sendShuffleDataAsync$1(ShuffleWriteClientImpl.java:206)
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
[2025-03-14 07:56:33.478] [LocalFileFlushEventThreadPool-0] [INFO] LocalStorageMeta.createMetadataIfNotExist - Create metadata of shuffle local-1741938989329_1741938989296/0.
[2025-03-14 07:56:33.478] [client-data-transfer-2] [WARN] ShuffleWriteClientImpl.lambda$sendShuffleDataAsync$1 - ShuffleWriteClientImpl sendShuffleData with 4 blocks to 10.1.0.10-20021 cost: 6(ms), it failed with statusCode[NO_REGISTER]
[2025-03-14 07:56:33.478] [org.apache.spark.shuffle.writer.DataPusher-2] [ERROR] ShuffleWriteClientImpl.sendShuffleDataAsync - Some shuffle data can't be sent to shuffle-server, is fast fail: true, cancelled task size: 1
[2025-03-14 07:56:33.478] [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)] [ERROR] RssShuffleWriter.collectFailedBlocksToResend - Partial blocks for taskId: [0_0] failed on the illegal status code: [NO_REGISTER] without resend on server: ShuffleServerInfo{host[10.1.0.10], grpc port[20021]}
[2025-03-14 07:56:33.478] [triggerShuffleBufferManagerFlush-0] [INFO] ShuffleBufferManager.flushIfNecessary - Start to flush with usedMemory[669076040], preAllocatedSize[0], inFlushSize[0]
[2025-03-14 07:56:33.479] [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)] [ERROR] RssShuffleWriter.collectFailedBlocksToResend - Partial blocks for taskId: [0_0] failed on the illegal status code: [NO_REGISTER] without resend on server: ShuffleServerInfo{host[10.1.0.10], grpc port[20021]}
[2025-03-14 07:56:33.479] [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)] [ERROR] RssShuffleWriter.collectFailedBlocksToResend - Partial blocks for taskId: [0_0] failed on the illegal status code: [NO_REGISTER] without resend on server: ShuffleServerInfo{host[10.1.0.10], grpc port[20021]}
[2025-03-14 07:56:33.479] [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)] [ERROR] RssShuffleWriter.collectFailedBlocksToResend - Partial blocks for taskId: [0_0] failed on the illegal status code: [NO_REGISTER] without resend on server: ShuffleServerInfo{host[10.1.0.10], grpc port[20021]}
[2025-03-14 07:56:33.480] [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)] [INFO] RssShuffleManagerBase.markFailedTask - Mark the task: 0_0 failed.
[2025-03-14 07:56:33.482] [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)] [ERROR] Executor.logError - Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.uniffle.common.exception.RssSendFailedException: Errors on resending the blocks data to the remote shuffle-server.
at org.apache.spark.shuffle.writer.RssShuffleWriter.collectFailedBlocksToResend(RssShuffleWriter.java:622)
at org.apache.spark.shuffle.writer.RssShuffleWriter.checkDataIfAnyFailure(RssShuffleWriter.java:524)
at org.apache.spark.shuffle.writer.RssShuffleWriter.checkBlockSendResult(RssShuffleWriter.java:486)
at org.apache.spark.shuffle.writer.RssShuffleWriter.writeImpl(RssShuffleWriter.java:341)
at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:292)
at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
[2025-03-14 07:56:33.489] [task-result-getter-0] [WARN] TaskSetManager.logWarning - Lost task 0.0 in stage 0.0 (TID 0) (fv-az889-710 executor driver): org.apache.uniffle.common.exception.RssSendFailedException: Errors on resending the blocks data to the remote shuffle-server.
at org.apache.spark.shuffle.writer.RssShuffleWriter.collectFailedBlocksToResend(RssShuffleWriter.java:622)
at org.apache.spark.shuffle.writer.RssShuffleWriter.checkDataIfAnyFailure(RssShuffleWriter.java:524)
at org.apache.spark.shuffle.writer.RssShuffleWriter.checkBlockSendResult(RssShuffleWriter.java:486)
at org.apache.spark.shuffle.writer.RssShuffleWriter.writeImpl(RssShuffleWriter.java:341)
at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:292)
at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
[2025-03-14 07:56:33.490] [task-result-getter-0] [ERROR] TaskSetManager.logError - Task 0 in stage 0.0 failed 1 times; aborting job
[2025-03-14 07:56:33.490] [task-result-getter-0] [INFO] TaskSchedulerImpl.logInfo - Removed TaskSet 0.0, whose tasks have all completed, from pool
[2025-03-14 07:56:33.491] [dag-scheduler-event-loop] [INFO] TaskSchedulerImpl.logInfo - Cancelling stage 0
[2025-03-14 07:56:33.492] [dag-scheduler-event-loop] [INFO] TaskSchedulerImpl.logInfo - Killing all running tasks in stage 0: Stage cancelled
[2025-03-14 07:56:33.492] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - ShuffleMapStage 0 (javaRDD at SparkSQLTest.java:53) failed in 4.074 s due to Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0) (fv-az889-710 executor driver): org.apache.uniffle.common.exception.RssSendFailedException: Errors on resending the blocks data to the remote shuffle-server.
at org.apache.spark.shuffle.writer.RssShuffleWriter.collectFailedBlocksToResend(RssShuffleWriter.java:622)
at org.apache.spark.shuffle.writer.RssShuffleWriter.checkDataIfAnyFailure(RssShuffleWriter.java:524)
at org.apache.spark.shuffle.writer.RssShuffleWriter.checkBlockSendResult(RssShuffleWriter.java:486)
at org.apache.spark.shuffle.writer.RssShuffleWriter.writeImpl(RssShuffleWriter.java:341)
at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:292)
at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Driver stacktrace:
Check notice on line 0 in .github
github-actions / Test Results
2 skipped tests found
There are 2 skipped tests, see "Raw output" for the full list of skipped tests.
Raw output
org.apache.uniffle.test.AccessClusterTest ‑ org.apache.uniffle.test.AccessClusterTest
org.apache.uniffle.test.ShuffleServerGrpcTest ‑ rpcMetricsTest
Check notice on line 0 in .github
github-actions / Test Results
1175 tests found (test 1 to 724)
There are 1175 tests, see "Raw output" for the list of tests 1 to 724.
Raw output
org.apache.hadoop.mapred.SortWriteBufferManagerTest ‑ testCombineBuffer
org.apache.hadoop.mapred.SortWriteBufferManagerTest ‑ testCommitBlocksWhenMemoryShuffleDisabled
org.apache.hadoop.mapred.SortWriteBufferManagerTest ‑ testOnePartition
org.apache.hadoop.mapred.SortWriteBufferManagerTest ‑ testWriteException
org.apache.hadoop.mapred.SortWriteBufferManagerTest ‑ testWriteNormal
org.apache.hadoop.mapred.SortWriteBufferManagerTest ‑ testWriteNormalWithRemoteMerge
org.apache.hadoop.mapred.SortWriteBufferManagerTest ‑ testWriteNormalWithRemoteMergeAndCombine
org.apache.hadoop.mapred.SortWriteBufferTest ‑ testReadWrite
org.apache.hadoop.mapred.SortWriteBufferTest ‑ testSortBufferIterator
org.apache.hadoop.mapreduce.RssMRUtilsTest ‑ applyDynamicClientConfTest
org.apache.hadoop.mapreduce.RssMRUtilsTest ‑ baskAttemptIdTest
org.apache.hadoop.mapreduce.RssMRUtilsTest ‑ blockConvertTest
org.apache.hadoop.mapreduce.RssMRUtilsTest ‑ partitionIdConvertBlockTest
org.apache.hadoop.mapreduce.RssMRUtilsTest ‑ testEstimateTaskConcurrency
org.apache.hadoop.mapreduce.RssMRUtilsTest ‑ testGetRequiredShuffleServerNumber
org.apache.hadoop.mapreduce.RssMRUtilsTest ‑ testValidateRssClientConf
org.apache.hadoop.mapreduce.task.reduce.EventFetcherTest ‑ extraEventFetch
org.apache.hadoop.mapreduce.task.reduce.EventFetcherTest ‑ missingEventFetch
org.apache.hadoop.mapreduce.task.reduce.EventFetcherTest ‑ multiPassEventFetch
org.apache.hadoop.mapreduce.task.reduce.EventFetcherTest ‑ obsoletedAndTipFailedEventFetch
org.apache.hadoop.mapreduce.task.reduce.EventFetcherTest ‑ singlePassEventFetch
org.apache.hadoop.mapreduce.task.reduce.EventFetcherTest ‑ singlePassWithRepeatedSuccessEventFetch
org.apache.hadoop.mapreduce.task.reduce.FetcherTest ‑ testCodecIsDuplicated
org.apache.hadoop.mapreduce.task.reduce.FetcherTest ‑ writeAndReadDataMergeFailsTestWithRss
org.apache.hadoop.mapreduce.task.reduce.FetcherTest ‑ writeAndReadDataTestWithRss
org.apache.hadoop.mapreduce.task.reduce.FetcherTest ‑ writeAndReadDataTestWithoutRss
org.apache.hadoop.mapreduce.task.reduce.RMRssShuffleTest ‑ testReadShuffleWithCombine
org.apache.hadoop.mapreduce.task.reduce.RMRssShuffleTest ‑ testReadShuffleWithoutCombine
org.apache.hadoop.mapreduce.task.reduce.RssInMemoryRemoteMergerTest ‑ mergerTest{File}
org.apache.hadoop.mapreduce.task.reduce.RssRemoteMergeManagerTest ‑ mergerTest{File}
org.apache.spark.shuffle.DelegationRssShuffleManagerTest ‑ testCreateFallback
org.apache.spark.shuffle.DelegationRssShuffleManagerTest ‑ testCreateInDriver
org.apache.spark.shuffle.DelegationRssShuffleManagerTest ‑ testCreateInDriverDenied
org.apache.spark.shuffle.DelegationRssShuffleManagerTest ‑ testCreateInExecutor
org.apache.spark.shuffle.DelegationRssShuffleManagerTest ‑ testDefaultIncludeExcludeProperties
org.apache.spark.shuffle.DelegationRssShuffleManagerTest ‑ testExcludeProperties
org.apache.spark.shuffle.DelegationRssShuffleManagerTest ‑ testIncludeProperties
org.apache.spark.shuffle.DelegationRssShuffleManagerTest ‑ testTryAccessCluster
org.apache.spark.shuffle.FunctionUtilsTests ‑ testOnceFunction0
org.apache.spark.shuffle.RssShuffleManagerTest ‑ testCreateShuffleManagerServer
org.apache.spark.shuffle.RssShuffleManagerTest ‑ testGetDataDistributionType
org.apache.spark.shuffle.RssShuffleManagerTest ‑ testRssShuffleManagerInterface
org.apache.spark.shuffle.RssShuffleManagerTest ‑ testRssShuffleManagerRegisterShuffle{int}[1]
org.apache.spark.shuffle.RssShuffleManagerTest ‑ testRssShuffleManagerRegisterShuffle{int}[2]
org.apache.spark.shuffle.RssShuffleManagerTest ‑ testRssShuffleManagerRegisterShuffle{int}[3]
org.apache.spark.shuffle.RssShuffleManagerTest ‑ testWithStageRetry
org.apache.spark.shuffle.RssSpark2ShuffleUtilsTest ‑ testCreateFetchFailedException
org.apache.spark.shuffle.RssSpark2ShuffleUtilsTest ‑ testIsStageResubmitSupported
org.apache.spark.shuffle.RssSpark3ShuffleUtilsTest ‑ testCreateFetchFailedException
org.apache.spark.shuffle.RssSpark3ShuffleUtilsTest ‑ testIsStageResubmitSupported
org.apache.spark.shuffle.RssSparkShuffleUtilsTest ‑ applyDynamicClientConfTest
org.apache.spark.shuffle.RssSparkShuffleUtilsTest ‑ odfsConfigurationTest
org.apache.spark.shuffle.RssSparkShuffleUtilsTest ‑ testAssignmentTags
org.apache.spark.shuffle.RssSparkShuffleUtilsTest ‑ testEstimateTaskConcurrency
org.apache.spark.shuffle.RssSparkShuffleUtilsTest ‑ testGetRequiredShuffleServerNumber
org.apache.spark.shuffle.RssSparkShuffleUtilsTest ‑ testValidateRssClientConf
org.apache.spark.shuffle.SparkVersionUtilsTest ‑ testSpark3Version
org.apache.spark.shuffle.SparkVersionUtilsTest ‑ testSparkVersion
org.apache.spark.shuffle.handle.MutableShuffleHandleInfoTest ‑ testCreatePartitionReplicaTracking
org.apache.spark.shuffle.handle.MutableShuffleHandleInfoTest ‑ testListAllPartitionAssignmentServers
org.apache.spark.shuffle.handle.MutableShuffleHandleInfoTest ‑ testUpdateAssignment
org.apache.spark.shuffle.handle.MutableShuffleHandleInfoTest ‑ testUpdateAssignmentOnPartitionSplit
org.apache.spark.shuffle.reader.RssShuffleDataIteratorTest ‑ cleanup
org.apache.spark.shuffle.reader.RssShuffleDataIteratorTest ‑ readTest1{BlockIdLayout}[1]
org.apache.spark.shuffle.reader.RssShuffleDataIteratorTest ‑ readTest1{BlockIdLayout}[2]
org.apache.spark.shuffle.reader.RssShuffleDataIteratorTest ‑ readTest2
org.apache.spark.shuffle.reader.RssShuffleDataIteratorTest ‑ readTest3
org.apache.spark.shuffle.reader.RssShuffleDataIteratorTest ‑ readTest4
org.apache.spark.shuffle.reader.RssShuffleDataIteratorTest ‑ readTest5
org.apache.spark.shuffle.reader.RssShuffleDataIteratorTest ‑ readTest7
org.apache.spark.shuffle.reader.RssShuffleDataIteratorTest ‑ readTestUncompressedShuffle
org.apache.spark.shuffle.reader.RssShuffleReaderTest ‑ readTest
org.apache.spark.shuffle.writer.DataPusherTest ‑ testSendData
org.apache.spark.shuffle.writer.RssShuffleWriterTest ‑ blockFailureResendTest
org.apache.spark.shuffle.writer.RssShuffleWriterTest ‑ checkBlockSendResultTest
org.apache.spark.shuffle.writer.RssShuffleWriterTest ‑ dataConsistencyWhenSpillTriggeredTest
org.apache.spark.shuffle.writer.RssShuffleWriterTest ‑ postBlockEventTest
org.apache.spark.shuffle.writer.RssShuffleWriterTest ‑ reassignMultiTimesForOnePartitionIdTest
org.apache.spark.shuffle.writer.RssShuffleWriterTest ‑ refreshAssignmentTest
org.apache.spark.shuffle.writer.RssShuffleWriterTest ‑ writeTest
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ addFirstRecordWithLargeSizeTest
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ addHugeRecordTest
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ addNullValueRecordTest
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ addPartitionDataTest
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ addRecordCompressedTest{BlockIdLayout}[1]
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ addRecordCompressedTest{BlockIdLayout}[2]
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ addRecordUnCompressedTest{BlockIdLayout}[1]
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ addRecordUnCompressedTest{BlockIdLayout}[2]
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ buildBlockEventsTest
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ createBlockIdTest{BlockIdLayout}[1]
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ createBlockIdTest{BlockIdLayout}[2]
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ spillByOthersTest
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ spillByOwnTest
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ spillByOwnWithSparkTaskMemoryManagerTest
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ spillPartial
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ testClearWithSpillRatio
org.apache.spark.shuffle.writer.WriteBufferTest ‑ test
org.apache.tez.common.GetShuffleServerRequestTest ‑ testSerDe
org.apache.tez.common.GetShuffleServerResponseTest ‑ testSerDe
org.apache.tez.common.IdUtilsTest ‑ testConvertTezTaskAttemptID
org.apache.tez.common.InputContextUtilsTest ‑ testGetTezTaskAttemptID
org.apache.tez.common.RssTezUtilsTest ‑ attemptTaskIdTest
org.apache.tez.common.RssTezUtilsTest ‑ baskAttemptIdTest
org.apache.tez.common.RssTezUtilsTest ‑ blockConvertTest
org.apache.tez.common.RssTezUtilsTest ‑ testApplyDynamicClientConf
org.apache.tez.common.RssTezUtilsTest ‑ testComputeShuffleId
org.apache.tez.common.RssTezUtilsTest ‑ testEstimateTaskConcurrency
org.apache.tez.common.RssTezUtilsTest ‑ testFilterRssConf
org.apache.tez.common.RssTezUtilsTest ‑ testGetRequiredShuffleServerNumber
org.apache.tez.common.RssTezUtilsTest ‑ testParseDagId
org.apache.tez.common.RssTezUtilsTest ‑ testParseRssWorker
org.apache.tez.common.RssTezUtilsTest ‑ testPartitionIdConvertBlock
org.apache.tez.common.RssTezUtilsTest ‑ testTaskIdStrToTaskId
org.apache.tez.common.ShuffleAssignmentsInfoWritableTest ‑ testSerDe
org.apache.tez.common.TezIdHelperTest ‑ testTetTaskAttemptId
org.apache.tez.dag.app.RssDAGAppMasterTest ‑ testDagStateChangeCallback
org.apache.tez.dag.app.RssDAGAppMasterTest ‑ testFetchRemoteStorageFromCoordinator{String}[1]
org.apache.tez.dag.app.RssDAGAppMasterTest ‑ testFetchRemoteStorageFromCoordinator{String}[2]
org.apache.tez.dag.app.RssDAGAppMasterTest ‑ testFetchRemoteStorageFromDynamicConf{String}[1]
org.apache.tez.dag.app.RssDAGAppMasterTest ‑ testFetchRemoteStorageFromDynamicConf{String}[2]
org.apache.tez.dag.app.TezRemoteShuffleManagerTest ‑ testTezRemoteShuffleManager
org.apache.tez.dag.app.TezRemoteShuffleManagerTest ‑ testTezRemoteShuffleManagerSecure
org.apache.tez.runtime.library.common.shuffle.impl.RssShuffleManagerTest ‑ testFetchFailed
org.apache.tez.runtime.library.common.shuffle.impl.RssShuffleManagerTest ‑ testProgressWithEmptyPendingHosts
org.apache.tez.runtime.library.common.shuffle.impl.RssShuffleManagerTest ‑ testUseSharedExecutor
org.apache.tez.runtime.library.common.shuffle.impl.RssSimpleFetchedInputAllocatorTest ‑ testAllocate{File}
org.apache.tez.runtime.library.common.shuffle.impl.RssTezFetcherTest ‑ testReadWithDiskFetchedInput{File}
org.apache.tez.runtime.library.common.shuffle.impl.RssTezFetcherTest ‑ testReadWithRemoteFetchedInput{File}
org.apache.tez.runtime.library.common.shuffle.impl.RssTezFetcherTest ‑ writeAndReadDataTestWithoutRss
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RMRssShuffleTest ‑ testReadMultiPartitionShuffleData
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RMRssShuffleTest ‑ testReadShuffleData
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssInMemoryMergerTest ‑ mergerTest
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssMergeManagerTest ‑ mergerTest
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssShuffleSchedulerTest ‑ testPenalty
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssShuffleSchedulerTest ‑ testProgressDuringGetHostWait
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssShuffleSchedulerTest ‑ testReducerHealth1
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssShuffleSchedulerTest ‑ testReducerHealth2
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssShuffleSchedulerTest ‑ testReducerHealth3
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssShuffleSchedulerTest ‑ testReducerHealth4
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssShuffleSchedulerTest ‑ testReducerHealth5
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssShuffleSchedulerTest ‑ testReducerHealth6
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssShuffleSchedulerTest ‑ testReducerHealth7
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssShuffleSchedulerTest ‑ testShutdownWithInterrupt
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssShuffleTest ‑ testKillSelf
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssShuffleTest ‑ testSchedulerTerminatesOnException
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssTezBypassWriterTest ‑ testCalcChecksum
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssTezBypassWriterTest ‑ testWrite
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssTezBypassWriterTest ‑ testWriteDiskFetchInput{File}
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssTezBypassWriterTest ‑ testWriteRemoteFetchInput
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssTezShuffleDataFetcherTest ‑ testIteratorWithInMemoryReader
org.apache.tez.runtime.library.common.sort.buffer.WriteBufferManagerTest ‑ testCommitBlocksWhenMemoryShuffleDisabled{File}
org.apache.tez.runtime.library.common.sort.buffer.WriteBufferManagerTest ‑ testFailFastWhenFailedToSendBlocks{File}
org.apache.tez.runtime.library.common.sort.buffer.WriteBufferManagerTest ‑ testWriteException{File}
org.apache.tez.runtime.library.common.sort.buffer.WriteBufferManagerTest ‑ testWriteNormal{File}
org.apache.tez.runtime.library.common.sort.buffer.WriteBufferManagerTest ‑ testWriteWithRemoteMerge
org.apache.tez.runtime.library.common.sort.buffer.WriteBufferTest ‑ testReadWrite
org.apache.tez.runtime.library.common.sort.buffer.WriteBufferTest ‑ testReadWriteWithRemoteMergeAndNoSort
org.apache.tez.runtime.library.common.sort.buffer.WriteBufferTest ‑ testReadWriteWithRemoteMergeAndSort
org.apache.tez.runtime.library.common.sort.impl.RssSorterTest ‑ testCollectAndRecordsPerPartition
org.apache.tez.runtime.library.common.sort.impl.RssTezPerPartitionRecordTest ‑ testNumPartitions
org.apache.tez.runtime.library.common.sort.impl.RssTezPerPartitionRecordTest ‑ testRssTezIndexHasData
org.apache.tez.runtime.library.common.sort.impl.RssUnSorterTest ‑ testCollectAndRecordsPerPartition
org.apache.tez.runtime.library.input.RMRssOrderedGroupedKVInputTest ‑ testRMRssOrderedGroupedKVInput
org.apache.tez.runtime.library.input.RMRssOrderedGroupedKVInputTest ‑ testRMRssOrderedGroupedKVInputMulitPartition
org.apache.tez.runtime.library.input.RssOrderedGroupedKVInputTest ‑ testInterruptWhileAwaitingInput
org.apache.tez.runtime.library.input.RssSortedGroupedMergedInputTest ‑ testSimpleConcatenatedMergedKeyValueInput
org.apache.tez.runtime.library.input.RssSortedGroupedMergedInputTest ‑ testSimpleConcatenatedMergedKeyValuesInput
org.apache.tez.runtime.library.output.RssOrderedPartitionedKVOutputTest ‑ testClose
org.apache.tez.runtime.library.output.RssOrderedPartitionedKVOutputTest ‑ testNonStartedOutput
org.apache.tez.runtime.library.output.RssUnorderedKVOutputTest ‑ testClose
org.apache.tez.runtime.library.output.RssUnorderedKVOutputTest ‑ testNonStartedOutput
org.apache.tez.runtime.library.output.RssUnorderedPartitionedKVOutputTest ‑ testClose
org.apache.tez.runtime.library.output.RssUnorderedPartitionedKVOutputTest ‑ testNonStartedOutput
org.apache.uniffle.cli.AdminRestApiTest ‑ testRunRefreshAccessChecker
org.apache.uniffle.cli.CLIContentUtilsTest ‑ testTableFormat
org.apache.uniffle.cli.UniffleTestAdminCLI ‑ testAdminRefreshCLI
org.apache.uniffle.cli.UniffleTestAdminCLI ‑ testMissingClientCLI
org.apache.uniffle.cli.UniffleTestCLI ‑ testExampleCLI
org.apache.uniffle.cli.UniffleTestCLI ‑ testHelp
org.apache.uniffle.client.ClientUtilsTest ‑ testGenerateTaskIdBitMap
org.apache.uniffle.client.ClientUtilsTest ‑ testGetMaxAttemptNo
org.apache.uniffle.client.ClientUtilsTest ‑ testGetNumberOfSignificantBits
org.apache.uniffle.client.ClientUtilsTest ‑ testValidateClientType
org.apache.uniffle.client.ClientUtilsTest ‑ testWaitUntilDoneOrFail
org.apache.uniffle.client.PartitionDataReplicaRequirementTrackingTest ‑ testMultipleReplicaWithMultiServers
org.apache.uniffle.client.PartitionDataReplicaRequirementTrackingTest ‑ testMultipleReplicaWithSingleServer
org.apache.uniffle.client.PartitionDataReplicaRequirementTrackingTest ‑ testSingleReplicaWithMultiServers
org.apache.uniffle.client.PartitionDataReplicaRequirementTrackingTest ‑ testSingleReplicaWithSingleShuffleServer
org.apache.uniffle.client.factory.ShuffleManagerClientFactoryTest ‑ createShuffleManagerClient
org.apache.uniffle.client.impl.FailedBlockSendTrackerTest ‑ test
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest1
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest10
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest11
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest12
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest13
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest13b
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest14
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest15
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest16
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest2
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest3
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest4
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest5
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest7
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest8
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest9
org.apache.uniffle.client.impl.ShuffleWriteClientImplTest ‑ testAbandonEventWhenTaskFailed
org.apache.uniffle.client.impl.ShuffleWriteClientImplTest ‑ testGetShuffleResult{BlockIdLayout}[1]
org.apache.uniffle.client.impl.ShuffleWriteClientImplTest ‑ testGetShuffleResult{BlockIdLayout}[2]
org.apache.uniffle.client.impl.ShuffleWriteClientImplTest ‑ testRegisterAndUnRegisterShuffleServer
org.apache.uniffle.client.impl.ShuffleWriteClientImplTest ‑ testSendData
org.apache.uniffle.client.impl.ShuffleWriteClientImplTest ‑ testSendDataWithDefectiveServers
org.apache.uniffle.client.impl.ShuffleWriteClientImplTest ‑ testSettingRssClientConfigs
org.apache.uniffle.client.record.reader.BufferedSegmentTest ‑ testMergeResolvedSegmentWithHook
org.apache.uniffle.client.record.reader.RMRecordsReaderTest ‑ testNormalReadWithCombine{String}[1]
org.apache.uniffle.client.record.reader.RMRecordsReaderTest ‑ testNormalReadWithCombine{String}[2]
org.apache.uniffle.client.record.reader.RMRecordsReaderTest ‑ testNormalReadWithCombine{String}[3]
org.apache.uniffle.client.record.reader.RMRecordsReaderTest ‑ testNormalReadWithCombine{String}[4]
org.apache.uniffle.client.record.reader.RMRecordsReaderTest ‑ testNormalReadWithoutCombine{String}[1]
org.apache.uniffle.client.record.reader.RMRecordsReaderTest ‑ testNormalReadWithoutCombine{String}[2]
org.apache.uniffle.client.record.reader.RMRecordsReaderTest ‑ testNormalReadWithoutCombine{String}[3]
org.apache.uniffle.client.record.reader.RMRecordsReaderTest ‑ testNormalReadWithoutCombine{String}[4]
org.apache.uniffle.client.record.reader.RMRecordsReaderTest ‑ testReadMulitPartitionWithCombine{String}[1]
org.apache.uniffle.client.record.reader.RMRecordsReaderTest ‑ testReadMulitPartitionWithCombine{String}[2]
org.apache.uniffle.client.record.reader.RMRecordsReaderTest ‑ testReadMulitPartitionWithCombine{String}[3]
org.apache.uniffle.client.record.reader.RMRecordsReaderTest ‑ testReadMulitPartitionWithCombine{String}[4]
org.apache.uniffle.client.record.reader.RMRecordsReaderTest ‑ testReadMulitPartitionWithoutCombine{String}[1]
org.apache.uniffle.client.record.reader.RMRecordsReaderTest ‑ testReadMulitPartitionWithoutCombine{String}[2]
org.apache.uniffle.client.record.reader.RMRecordsReaderTest ‑ testReadMulitPartitionWithoutCombine{String}[3]
org.apache.uniffle.client.record.reader.RMRecordsReaderTest ‑ testReadMulitPartitionWithoutCombine{String}[4]
org.apache.uniffle.client.record.writer.RecordCollectionTest ‑ testSortAndSerializeRecords{String}[1]
org.apache.uniffle.client.record.writer.RecordCollectionTest ‑ testSortAndSerializeRecords{String}[2]
org.apache.uniffle.client.record.writer.RecordCollectionTest ‑ testSortAndSerializeRecords{String}[3]
org.apache.uniffle.client.record.writer.RecordCollectionTest ‑ testSortCombineAndSerializeRecords{String}[1]
org.apache.uniffle.client.record.writer.RecordCollectionTest ‑ testSortCombineAndSerializeRecords{String}[2]
org.apache.uniffle.client.record.writer.RecordCollectionTest ‑ testSortCombineAndSerializeRecords{String}[3]
org.apache.uniffle.client.shuffle.MRCombinerTest ‑ testMRCombiner
org.apache.uniffle.client.shuffle.RecordCollectorTest ‑ testRecordCollector
org.apache.uniffle.common.ArgumentsTest ‑ argEmptyTest
org.apache.uniffle.common.ArgumentsTest ‑ argTest
org.apache.uniffle.common.BufferSegmentTest ‑ testEquals
org.apache.uniffle.common.BufferSegmentTest ‑ testGetOffset
org.apache.uniffle.common.BufferSegmentTest ‑ testNotEquals{long, long, int, int, long, long}[1]
org.apache.uniffle.common.BufferSegmentTest ‑ testNotEquals{long, long, int, int, long, long}[2]
org.apache.uniffle.common.BufferSegmentTest ‑ testNotEquals{long, long, int, int, long, long}[3]
org.apache.uniffle.common.BufferSegmentTest ‑ testNotEquals{long, long, int, int, long, long}[4]
org.apache.uniffle.common.BufferSegmentTest ‑ testNotEquals{long, long, int, int, long, long}[5]
org.apache.uniffle.common.BufferSegmentTest ‑ testNotEquals{long, long, int, int, long, long}[6]
org.apache.uniffle.common.BufferSegmentTest ‑ testToString
org.apache.uniffle.common.PartitionRangeTest ‑ testCompareTo
org.apache.uniffle.common.PartitionRangeTest ‑ testEquals
org.apache.uniffle.common.PartitionRangeTest ‑ testHashCode
org.apache.uniffle.common.PartitionRangeTest ‑ testPartitionRange
org.apache.uniffle.common.PartitionRangeTest ‑ testToString
org.apache.uniffle.common.ReconfigurableConfManagerTest ‑ test
org.apache.uniffle.common.ReconfigurableConfManagerTest ‑ testWithoutInitialization
org.apache.uniffle.common.ReconfigurableRegistryTest ‑ testUpdate
org.apache.uniffle.common.ReconfigurableRegistryTest ‑ testUpdateSpecificKey
org.apache.uniffle.common.RemoteStorageInfoTest ‑ testEmptyStoragePath{String}[1]
org.apache.uniffle.common.RemoteStorageInfoTest ‑ testEmptyStoragePath{String}[2]
org.apache.uniffle.common.RemoteStorageInfoTest ‑ testEquals
org.apache.uniffle.common.RemoteStorageInfoTest ‑ testHashCode
org.apache.uniffle.common.RemoteStorageInfoTest ‑ testNotEquals{String}[1]
org.apache.uniffle.common.RemoteStorageInfoTest ‑ testNotEquals{String}[2]
org.apache.uniffle.common.RemoteStorageInfoTest ‑ testNotEquals{String}[3]
org.apache.uniffle.common.RemoteStorageInfoTest ‑ testRemoteStorageInfo{String, Map, String}[1]
org.apache.uniffle.common.RemoteStorageInfoTest ‑ testRemoteStorageInfo{String, Map, String}[2]
org.apache.uniffle.common.RemoteStorageInfoTest ‑ testRemoteStorageInfo{String, Map, String}[3]
org.apache.uniffle.common.RemoteStorageInfoTest ‑ testUncommonConfString{String}[1]
org.apache.uniffle.common.RemoteStorageInfoTest ‑ testUncommonConfString{String}[2]
org.apache.uniffle.common.RemoteStorageInfoTest ‑ testUncommonConfString{String}[3]
org.apache.uniffle.common.ServerStatusTest ‑ test
org.apache.uniffle.common.ShuffleBlockInfoTest ‑ testToString
org.apache.uniffle.common.ShuffleDataResultTest ‑ testEmpty
org.apache.uniffle.common.ShuffleIndexResultTest ‑ testEmpty
org.apache.uniffle.common.ShufflePartitionedBlockTest ‑ shufflePartitionedBlockTest
org.apache.uniffle.common.ShufflePartitionedBlockTest ‑ testEquals
org.apache.uniffle.common.ShufflePartitionedBlockTest ‑ testNotEquals{int, long, long, int}[1]
org.apache.uniffle.common.ShufflePartitionedBlockTest ‑ testNotEquals{int, long, long, int}[2]
org.apache.uniffle.common.ShufflePartitionedBlockTest ‑ testNotEquals{int, long, long, int}[3]
org.apache.uniffle.common.ShufflePartitionedBlockTest ‑ testNotEquals{int, long, long, int}[4]
org.apache.uniffle.common.ShufflePartitionedBlockTest ‑ testSize
org.apache.uniffle.common.ShufflePartitionedBlockTest ‑ testToString
org.apache.uniffle.common.ShufflePartitionedDataTest ‑ testToString
org.apache.uniffle.common.ShuffleRegisterInfoTest ‑ testEquals
org.apache.uniffle.common.ShuffleRegisterInfoTest ‑ testToString
org.apache.uniffle.common.ShuffleServerInfoTest ‑ testEquals
org.apache.uniffle.common.ShuffleServerInfoTest ‑ testToString
org.apache.uniffle.common.UnionKeyTest ‑ test
org.apache.uniffle.common.compression.CompressionTest ‑ checkDecompressBufferOffsets
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[10]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[11]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[12]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[13]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[14]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[15]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[16]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[17]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[18]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[19]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[1]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[20]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[21]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[22]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[23]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[24]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[2]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[3]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[4]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[5]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[6]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[7]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[8]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[9]
org.apache.uniffle.common.config.ConfigOptionTest ‑ testBasicTypes
org.apache.uniffle.common.config.ConfigOptionTest ‑ testDeprecatedAndFallbackKeys
org.apache.uniffle.common.config.ConfigOptionTest ‑ testDeprecatedKeys
org.apache.uniffle.common.config.ConfigOptionTest ‑ testEnumType
org.apache.uniffle.common.config.ConfigOptionTest ‑ testFallbackKeys
org.apache.uniffle.common.config.ConfigOptionTest ‑ testListTypes
org.apache.uniffle.common.config.ConfigOptionTest ‑ testSetKVWithStringTypeDirectly
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToBoolean{Object, Boolean}[10]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToBoolean{Object, Boolean}[11]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToBoolean{Object, Boolean}[12]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToBoolean{Object, Boolean}[13]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToBoolean{Object, Boolean}[1]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToBoolean{Object, Boolean}[2]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToBoolean{Object, Boolean}[3]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToBoolean{Object, Boolean}[4]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToBoolean{Object, Boolean}[5]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToBoolean{Object, Boolean}[6]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToBoolean{Object, Boolean}[7]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToBoolean{Object, Boolean}[8]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToBoolean{Object, Boolean}[9]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToDouble{Object, Double}[10]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToDouble{Object, Double}[11]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToDouble{Object, Double}[12]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToDouble{Object, Double}[13]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToDouble{Object, Double}[1]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToDouble{Object, Double}[2]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToDouble{Object, Double}[3]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToDouble{Object, Double}[4]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToDouble{Object, Double}[5]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToDouble{Object, Double}[6]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToDouble{Object, Double}[7]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToDouble{Object, Double}[8]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToDouble{Object, Double}[9]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[10]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[11]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[12]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[13]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[14]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[15]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[16]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[17]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[1]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[2]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[3]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[4]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[5]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[6]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[7]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[8]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[9]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToInt{Object, Integer}[1]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToInt{Object, Integer}[2]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToInt{Object, Integer}[3]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToInt{Object, Integer}[4]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToInt{Object, Integer}[5]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToInt{Object, Integer}[6]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToInt{Object, Integer}[7]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToInt{Object, Integer}[8]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToInt{Object, Integer}[9]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToLong{Object, Long}[10]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToLong{Object, Long}[11]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToLong{Object, Long}[12]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToLong{Object, Long}[13]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToLong{Object, Long}[1]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToLong{Object, Long}[2]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToLong{Object, Long}[3]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToLong{Object, Long}[4]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToLong{Object, Long}[5]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToLong{Object, Long}[6]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToLong{Object, Long}[7]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToLong{Object, Long}[8]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToLong{Object, Long}[9]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[10]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[11]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[12]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[13]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[14]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[1]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[2]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[3]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[4]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[5]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[6]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[7]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[8]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[9]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToString{Object, String}[1]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToString{Object, String}[2]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToString{Object, String}[3]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertValueWithUnsupportedType
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertValue{Object, Class}[1]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertValue{Object, Class}[2]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertValue{Object, Class}[3]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertValue{Object, Class}[4]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertValue{Object, Class}[5]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertValue{Object, Class}[6]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertValue{Object, Class}[7]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testGetAllConfigOptions
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testNonNegativeLongValidator{long}[1]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testNonNegativeLongValidator{long}[2]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testNonNegativeLongValidator{long}[3]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testNonNegativeLongValidator{long}[4]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testNonNegativeLongValidator{long}[5]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testNonNegativeLongValidator{long}[6]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPercentageDoubleValidator{double}[1]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPercentageDoubleValidator{double}[2]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPercentageDoubleValidator{double}[3]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPercentageDoubleValidator{double}[4]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPercentageDoubleValidator{double}[5]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPercentageDoubleValidator{double}[6]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPercentageDoubleValidator{double}[7]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPercentageDoubleValidator{double}[8]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPercentageDoubleValidator{double}[9]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator2{int}[1]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator2{int}[2]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator2{int}[3]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator2{int}[4]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator2{int}[5]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator2{int}[6]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator{long}[1]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator{long}[2]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator{long}[3]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator{long}[4]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator{long}[5]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator{long}[6]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator{long}[7]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator{long}[8]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveLongValidator{long}[1]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveLongValidator{long}[2]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveLongValidator{long}[3]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveLongValidator{long}[4]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveLongValidator{long}[5]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveLongValidator{long}[6]
org.apache.uniffle.common.config.RssConfTest ‑ testOptionWithDefault
org.apache.uniffle.common.config.RssConfTest ‑ testOptionWithNoDefault
org.apache.uniffle.common.config.RssConfTest ‑ testSetStringAndGetConcreteType
org.apache.uniffle.common.executor.ThreadPoolManagerTest ‑ test0
org.apache.uniffle.common.executor.ThreadPoolManagerTest ‑ test1
org.apache.uniffle.common.executor.ThreadPoolManagerTest ‑ test2
org.apache.uniffle.common.executor.ThreadPoolManagerTest ‑ testReject
org.apache.uniffle.common.filesystem.HadoopFilesystemProviderTest ‑ testGetSecuredFilesystem
org.apache.uniffle.common.filesystem.HadoopFilesystemProviderTest ‑ testGetSecuredFilesystemButNotInitializeHadoopSecurityContext
org.apache.uniffle.common.filesystem.HadoopFilesystemProviderTest ‑ testWriteAndReadBySecuredFilesystem
org.apache.uniffle.common.future.CompletableFutureExtensionTest ‑ timeoutExceptionTest
org.apache.uniffle.common.future.CompletableFutureUtilsTest ‑ timeoutTest
org.apache.uniffle.common.merger.MergerTest ‑ testMergeSegmentToFile{String, File}[1]
org.apache.uniffle.common.merger.MergerTest ‑ testMergeSegmentToFile{String, File}[2]
org.apache.uniffle.common.merger.MergerTest ‑ testMergeSegmentToFile{String, File}[3]
org.apache.uniffle.common.merger.MergerTest ‑ testMergeSegmentToFile{String, File}[4]
org.apache.uniffle.common.merger.MergerTest ‑ testMergeSegmentToFile{String, File}[5]
org.apache.uniffle.common.merger.MergerTest ‑ testMergeSegmentToFile{String, File}[6]
org.apache.uniffle.common.metrics.MetricReporterFactoryTest ‑ testGetMetricReporter
org.apache.uniffle.common.metrics.MetricsManagerTest ‑ testMetricsManager
org.apache.uniffle.common.metrics.prometheus.PrometheusPushGatewayMetricReporterTest ‑ test
org.apache.uniffle.common.metrics.prometheus.PrometheusPushGatewayMetricReporterTest ‑ testParseGroupingKey
org.apache.uniffle.common.metrics.prometheus.PrometheusPushGatewayMetricReporterTest ‑ testParseIncompleteGroupingKey
org.apache.uniffle.common.netty.EncoderAndDecoderTest ‑ test
org.apache.uniffle.common.netty.TransportFrameDecoderTest ‑ testShouldRpcRequestsToBeReleased
org.apache.uniffle.common.netty.TransportFrameDecoderTest ‑ testShouldRpcResponsesToBeReleased
org.apache.uniffle.common.netty.buffer.FileSegmentManagedBufferTest ‑ testNioByteBuffer{File}
org.apache.uniffle.common.netty.client.TransportClientFactoryTest ‑ testClientDiffPartition
org.apache.uniffle.common.netty.client.TransportClientFactoryTest ‑ testClientDiffServer
org.apache.uniffle.common.netty.client.TransportClientFactoryTest ‑ testClientReuse
org.apache.uniffle.common.netty.client.TransportClientFactoryTest ‑ testCreateClient
org.apache.uniffle.common.netty.protocol.NettyProtocolTest ‑ testGetLocalShuffleDataRequest
org.apache.uniffle.common.netty.protocol.NettyProtocolTest ‑ testGetLocalShuffleDataResponse
org.apache.uniffle.common.netty.protocol.NettyProtocolTest ‑ testGetLocalShuffleIndexRequest
org.apache.uniffle.common.netty.protocol.NettyProtocolTest ‑ testGetLocalShuffleIndexResponse
org.apache.uniffle.common.netty.protocol.NettyProtocolTest ‑ testGetMemoryShuffleDataRequest
org.apache.uniffle.common.netty.protocol.NettyProtocolTest ‑ testGetMemoryShuffleDataResponse
org.apache.uniffle.common.netty.protocol.NettyProtocolTest ‑ testGetSortedShuffleDataRequest
org.apache.uniffle.common.netty.protocol.NettyProtocolTest ‑ testGetSortedShuffleDataResponse
org.apache.uniffle.common.netty.protocol.NettyProtocolTest ‑ testRpcResponse
org.apache.uniffle.common.netty.protocol.NettyProtocolTest ‑ testSendShuffleDataRequest
org.apache.uniffle.common.records.RecordsReaderWriterTest ‑ testWriteAndReadRecordFileUseDirect{String, File}[1]
org.apache.uniffle.common.records.RecordsReaderWriterTest ‑ testWriteAndReadRecordFileUseDirect{String, File}[2]
org.apache.uniffle.common.records.RecordsReaderWriterTest ‑ testWriteAndReadRecordFile{String, File}[10]
org.apache.uniffle.common.records.RecordsReaderWriterTest ‑ testWriteAndReadRecordFile{String, File}[11]
org.apache.uniffle.common.records.RecordsReaderWriterTest ‑ testWriteAndReadRecordFile{String, File}[12]
org.apache.uniffle.common.records.RecordsReaderWriterTest ‑ testWriteAndReadRecordFile{String, File}[1]
org.apache.uniffle.common.records.RecordsReaderWriterTest ‑ testWriteAndReadRecordFile{String, File}[2]
org.apache.uniffle.common.records.RecordsReaderWriterTest ‑ testWriteAndReadRecordFile{String, File}[3]
org.apache.uniffle.common.records.RecordsReaderWriterTest ‑ testWriteAndReadRecordFile{String, File}[4]
org.apache.uniffle.common.records.RecordsReaderWriterTest ‑ testWriteAndReadRecordFile{String, File}[5]
org.apache.uniffle.common.records.RecordsReaderWriterTest ‑ testWriteAndReadRecordFile{String, File}[6]
org.apache.uniffle.common.records.RecordsReaderWriterTest ‑ testWriteAndReadRecordFile{String, File}[7]
org.apache.uniffle.common.records.RecordsReaderWriterTest ‑ testWriteAndReadRecordFile{String, File}[8]
org.apache.uniffle.common.records.RecordsReaderWriterTest ‑ testWriteAndReadRecordFile{String, File}[9]
org.apache.uniffle.common.rpc.GrpcServerTest ‑ testGrpcExecutorPool
org.apache.uniffle.common.rpc.GrpcServerTest ‑ testRandomPort
org.apache.uniffle.common.rpc.StatusCodeTest ‑ test
org.apache.uniffle.common.security.HadoopSecurityContextTest ‑ testCreateIllegalContext
org.apache.uniffle.common.security.HadoopSecurityContextTest ‑ testSecuredCallable
org.apache.uniffle.common.security.HadoopSecurityContextTest ‑ testSecuredDisableProxyUser
org.apache.uniffle.common.security.HadoopSecurityContextTest ‑ testWithOutKrb5Conf
org.apache.uniffle.common.security.SecurityContextFactoryTest ‑ testCreateHadoopSecurityContext
org.apache.uniffle.common.security.SecurityContextFactoryTest ‑ testDefaultSecurityContext
org.apache.uniffle.common.segment.FixedSizeSegmentSplitterTest ‑ testAvoidEOFException{int}[1]
org.apache.uniffle.common.segment.FixedSizeSegmentSplitterTest ‑ testAvoidEOFException{int}[2]
org.apache.uniffle.common.segment.FixedSizeSegmentSplitterTest ‑ testAvoidEOFException{int}[3]
org.apache.uniffle.common.segment.FixedSizeSegmentSplitterTest ‑ testSplit
org.apache.uniffle.common.segment.FixedSizeSegmentSplitterTest ‑ testSplitContainsStorageId
org.apache.uniffle.common.segment.LocalOrderSegmentSplitterTest ‑ testConsistentWithFixSizeSplitterWhenNoSkew{int}[1]
org.apache.uniffle.common.segment.LocalOrderSegmentSplitterTest ‑ testConsistentWithFixSizeSplitterWhenNoSkew{int}[2]
org.apache.uniffle.common.segment.LocalOrderSegmentSplitterTest ‑ testConsistentWithFixSizeSplitterWhenNoSkew{int}[3]
org.apache.uniffle.common.segment.LocalOrderSegmentSplitterTest ‑ testConsistentWithFixSizeSplitterWhenNoSkew{int}[4]
org.apache.uniffle.common.segment.LocalOrderSegmentSplitterTest ‑ testConsistentWithFixSizeSplitterWhenNoSkew{int}[5]
org.apache.uniffle.common.segment.LocalOrderSegmentSplitterTest ‑ testConsistentWithFixSizeSplitterWhenNoSkew{int}[6]
org.apache.uniffle.common.segment.LocalOrderSegmentSplitterTest ‑ testDiscontinuousMapTaskIds
org.apache.uniffle.common.segment.LocalOrderSegmentSplitterTest ‑ testSplit
org.apache.uniffle.common.segment.LocalOrderSegmentSplitterTest ‑ testSplitContainsStorageId
org.apache.uniffle.common.segment.LocalOrderSegmentSplitterTest ‑ testSplitForMergeContinuousSegments
org.apache.uniffle.common.serializer.KryoSerializerTest ‑ testKryoWriteRandomRead{boolean, File}[1]
org.apache.uniffle.common.serializer.KryoSerializerTest ‑ testKryoWriteRandomRead{boolean, File}[2]
org.apache.uniffle.common.serializer.KryoSerializerTest ‑ testSerDeKeyValues{String, File}[1]
org.apache.uniffle.common.serializer.KryoSerializerTest ‑ testSerDeKeyValues{String, File}[2]
org.apache.uniffle.common.serializer.KryoSerializerTest ‑ testSerDeKeyValues{String, File}[3]
org.apache.uniffle.common.serializer.KryoSerializerTest ‑ testSerDeKeyValues{String, File}[4]
org.apache.uniffle.common.serializer.KryoSerializerTest ‑ testSerDeObject{Class}[1]
org.apache.uniffle.common.serializer.KryoSerializerTest ‑ testSerDeObject{Class}[2]
org.apache.uniffle.common.serializer.KryoSerializerTest ‑ testSerDeObject{Class}[3]
org.apache.uniffle.common.serializer.KryoSerializerTest ‑ testSerDeObject{Class}[4]
org.apache.uniffle.common.serializer.SerInputOutputStreamTest ‑ testReadFileInputStream
org.apache.uniffle.common.serializer.SerInputOutputStreamTest ‑ testReadMemoryInputStream
org.apache.uniffle.common.serializer.SerInputOutputStreamTest ‑ testReadNullBytes
org.apache.uniffle.common.serializer.SerializerFactoryTest ‑ testGetSerializer
org.apache.uniffle.common.serializer.WritableSerializerTest ‑ testSerDeKeyValuesUseDirect{String, File}[1]
org.apache.uniffle.common.serializer.WritableSerializerTest ‑ testSerDeKeyValuesUseDirect{String, File}[2]
org.apache.uniffle.common.serializer.WritableSerializerTest ‑ testSerDeKeyValues{String, File}[1]
org.apache.uniffle.common.serializer.WritableSerializerTest ‑ testSerDeKeyValues{String, File}[2]
org.apache.uniffle.common.serializer.WritableSerializerTest ‑ testSerDeKeyValues{String, File}[3]
org.apache.uniffle.common.serializer.WritableSerializerTest ‑ testSerDeKeyValues{String, File}[4]
org.apache.uniffle.common.serializer.WritableSerializerTest ‑ testSerDeKeyValues{String, File}[5]
org.apache.uniffle.common.serializer.WritableSerializerTest ‑ testSerDeKeyValues{String, File}[6]
org.apache.uniffle.common.serializer.WritableSerializerTest ‑ testSerDeKeyValues{String, File}[7]
org.apache.uniffle.common.serializer.WritableSerializerTest ‑ testSerDeKeyValues{String, File}[8]
org.apache.uniffle.common.serializer.WritableSerializerTest ‑ testSerDeObject{Class}[1]
org.apache.uniffle.common.serializer.WritableSerializerTest ‑ testSerDeObject{Class}[2]
org.apache.uniffle.common.storage.StorageInfoUtilsTest ‑ testFromProto
org.apache.uniffle.common.storage.StorageInfoUtilsTest ‑ testToProto
org.apache.uniffle.common.util.BlockIdLayoutTest ‑ testEquals
org.apache.uniffle.common.util.BlockIdLayoutTest ‑ testFromLengths
org.apache.uniffle.common.util.BlockIdLayoutTest ‑ testFromLengthsErrors
org.apache.uniffle.common.util.BlockIdLayoutTest ‑ testLayoutGetBlockId{BlockIdLayout}[1]
org.apache.uniffle.common.util.BlockIdLayoutTest ‑ testLayoutGetBlockId{BlockIdLayout}[2]
org.apache.uniffle.common.util.BlockIdLayoutTest ‑ testLayoutGetBlockId{BlockIdLayout}[3]
org.apache.uniffle.common.util.BlockIdLayoutTest ‑ testLayoutGetBlockId{BlockIdLayout}[4]
org.apache.uniffle.common.util.BlockIdLayoutTest ‑ testLayoutGetBlockId{BlockIdLayout}[5]
org.apache.uniffle.common.util.BlockIdLayoutTest ‑ testLayoutGetBlockId{BlockIdLayout}[6]
org.apache.uniffle.common.util.BlockIdTest ‑ testEquals
org.apache.uniffle.common.util.BlockIdTest ‑ testToString
org.apache.uniffle.common.util.ByteBufUtilsTest ‑ test
org.apache.uniffle.common.util.ChecksumUtilsTest ‑ crc32ByteBufferTest
org.apache.uniffle.common.util.ChecksumUtilsTest ‑ crc32TestWithByte
org.apache.uniffle.common.util.ChecksumUtilsTest ‑ crc32TestWithByteBuff
org.apache.uniffle.common.util.ExitUtilsTest ‑ test
org.apache.uniffle.common.util.ExpiringCloseableSupplierTest ‑ stressingTestManySuppliers
org.apache.uniffle.common.util.ExpiringCloseableSupplierTest ‑ testAutoCloseable
org.apache.uniffle.common.util.ExpiringCloseableSupplierTest ‑ testCacheable
org.apache.uniffle.common.util.ExpiringCloseableSupplierTest ‑ testDelegateExtendClose
org.apache.uniffle.common.util.ExpiringCloseableSupplierTest ‑ testMultipleSupplierShouldNotInterfere
org.apache.uniffle.common.util.ExpiringCloseableSupplierTest ‑ testReClose
org.apache.uniffle.common.util.ExpiringCloseableSupplierTest ‑ testRenew
org.apache.uniffle.common.util.ExpiringCloseableSupplierTest ‑ testSerialization
org.apache.uniffle.common.util.JavaUtilsTest ‑ test
org.apache.uniffle.common.util.NettyUtilsTest ‑ test
org.apache.uniffle.common.util.OutputUtilsTest ‑ test
org.apache.uniffle.common.util.RetryUtilsTest ‑ testRetry
org.apache.uniffle.common.util.RetryUtilsTest ‑ testRetryWithCondition
org.apache.uniffle.common.util.RssUtilsTest ‑ getMetricNameForHostNameTest
org.apache.uniffle.common.util.RssUtilsTest ‑ testCloneBitmap
org.apache.uniffle.common.util.RssUtilsTest ‑ testGenerateServerToPartitions
org.apache.uniffle.common.util.RssUtilsTest ‑ testGetConfiguredLocalDirs
org.apache.uniffle.common.util.RssUtilsTest ‑ testGetHostIp
org.apache.uniffle.common.util.RssUtilsTest ‑ testGetPropertiesFromFile
org.apache.uniffle.common.util.RssUtilsTest ‑ testLoadExtentions
org.apache.uniffle.common.util.RssUtilsTest ‑ testSerializeBitmap
org.apache.uniffle.common.util.RssUtilsTest ‑ testSettingProperties
org.apache.uniffle.common.util.RssUtilsTest ‑ testShuffleBitmapToPartitionBitmap{BlockIdLayout}[1]
org.apache.uniffle.common.util.RssUtilsTest ‑ testShuffleBitmapToPartitionBitmap{BlockIdLayout}[2]
org.apache.uniffle.common.util.RssUtilsTest ‑ testStartServiceOnPort
org.apache.uniffle.common.util.ThreadUtilsTest ‑ invokeAllTimeoutThreadPoolTest
org.apache.uniffle.common.util.ThreadUtilsTest ‑ shutdownThreadPoolTest
org.apache.uniffle.common.util.ThreadUtilsTest ‑ testExecuteTasksWithFutureHandler
org.apache.uniffle.common.util.ThreadUtilsTest ‑ testExecuteTasksWithFutureHandlerAndTimeout
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[10]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[11]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[12]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[13]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[14]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[15]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[16]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[17]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[18]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[19]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[1]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[20]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[21]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[22]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[23]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[24]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[25]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[26]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[27]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[28]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[2]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[3]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[4]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[5]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[6]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[7]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[8]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[9]
org.apache.uniffle.common.util.UnitConverterTest ‑ testFormatSize
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[10]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[11]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[12]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[13]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[14]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[15]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[16]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[17]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[18]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[1]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[2]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[3]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[4]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[5]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[6]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[7]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[8]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[9]
org.apache.uniffle.common.web.JettyServerTest ‑ jettyServerStartTest
org.apache.uniffle.common.web.JettyServerTest ‑ jettyServerTest
org.apache.uniffle.coordinator.ApplicationManagerTest ‑ clearWithoutRemoteStorageTest
org.apache.uniffle.coordinator.ApplicationManagerTest ‑ refreshTest
org.apache.uniffle.coordinator.CoordinatorConfTest ‑ test
org.apache.uniffle.coordinator.CoordinatorServerTest ‑ test
org.apache.uniffle.coordinator.QuotaManagerTest ‑ testCheckQuota
org.apache.uniffle.coordinator.QuotaManagerTest ‑ testCheckQuotaMetrics
org.apache.uniffle.coordinator.QuotaManagerTest ‑ testCheckQuotaWithDefault
org.apache.uniffle.coordinator.QuotaManagerTest ‑ testDetectUserResource
org.apache.uniffle.coordinator.QuotaManagerTest ‑ testQuotaManagerWithoutAccessQuotaChecker
org.apache.uniffle.coordinator.ServerNodeTest ‑ compareTest
org.apache.uniffle.coordinator.ServerNodeTest ‑ testNettyPort
org.apache.uniffle.coordinator.ServerNodeTest ‑ testStorageInfoOfServerNode
org.apache.uniffle.coordinator.SimpleClusterManagerTest ‑ excludeNodesNoDelayTest
org.apache.uniffle.coordinator.SimpleClusterManagerTest ‑ getLostServerListTest
org.apache.uniffle.coordinator.SimpleClusterManagerTest ‑ getServerListForNettyTest
org.apache.uniffle.coordinator.SimpleClusterManagerTest ‑ getServerListTest
org.apache.uniffle.coordinator.SimpleClusterManagerTest ‑ getUnhealthyServerList
org.apache.uniffle.coordinator.SimpleClusterManagerTest ‑ heartbeatTimeoutTest
org.apache.uniffle.coordinator.SimpleClusterManagerTest ‑ startupSilentPeriodTest
org.apache.uniffle.coordinator.SimpleClusterManagerTest ‑ testGetCorrectServerNodesWhenOneNodeRemoved
org.apache.uniffle.coordinator.SimpleClusterManagerTest ‑ testGetCorrectServerNodesWhenOneNodeRemovedAndUnhealthyNodeFound
org.apache.uniffle.coordinator.SimpleClusterManagerTest ‑ updateExcludeNodesTest
org.apache.uniffle.coordinator.access.AccessManagerTest ‑ test
org.apache.uniffle.coordinator.checker.AccessCandidatesCheckerTest ‑ test{File}
org.apache.uniffle.coordinator.checker.AccessClusterLoadCheckerTest ‑ testAccessInfoRequiredShuffleServers
org.apache.uniffle.coordinator.checker.AccessClusterLoadCheckerTest ‑ testWhenAvailableServerThresholdSpecified
org.apache.uniffle.coordinator.checker.AccessQuotaCheckerTest ‑ testAccessInfoRequiredShuffleServers
org.apache.uniffle.coordinator.conf.DynamicClientConfServiceTest ‑ testByLegacyParser{File}
org.apache.uniffle.coordinator.conf.LegacyClientConfParserTest ‑ testParse
org.apache.uniffle.coordinator.conf.RssClientConfApplyManagerTest ‑ testBypassApply
org.apache.uniffle.coordinator.conf.RssClientConfApplyManagerTest ‑ testCustomizeApplyStrategy
org.apache.uniffle.coordinator.conf.YamlClientConfParserTest ‑ testFromFile
org.apache.uniffle.coordinator.conf.YamlClientConfParserTest ‑ testParse
org.apache.uniffle.coordinator.metric.CoordinatorMetricsTest ‑ testAllMetrics
org.apache.uniffle.coordinator.metric.CoordinatorMetricsTest ‑ testCoordinatorMetrics
org.apache.uniffle.coordinator.metric.CoordinatorMetricsTest ‑ testCoordinatorMetricsWithNames
org.apache.uniffle.coordinator.metric.CoordinatorMetricsTest ‑ testDynamicMetrics
org.apache.uniffle.coordinator.metric.CoordinatorMetricsTest ‑ testGrpcMetrics
org.apache.uniffle.coordinator.metric.CoordinatorMetricsTest ‑ testJvmMetrics
org.apache.uniffle.coordinator.strategy.assignment.BasicAssignmentStrategyTest ‑ testAssign
org.apache.uniffle.coordinator.strategy.assignment.BasicAssignmentStrategyTest ‑ testAssignWithDifferentNodeNum
org.apache.uniffle.coordinator.strategy.assignment.BasicAssignmentStrategyTest ‑ testAssignmentShuffleNodesNum
org.apache.uniffle.coordinator.strategy.assignment.BasicAssignmentStrategyTest ‑ testRandomAssign
org.apache.uniffle.coordinator.strategy.assignment.BasicAssignmentStrategyTest ‑ testWithContinuousSelectPartitionStrategy
org.apache.uniffle.coordinator.strategy.assignment.PartitionBalanceAssignmentStrategyTest ‑ testAssign
org.apache.uniffle.coordinator.strategy.assignment.PartitionBalanceAssignmentStrategyTest ‑ testAssignmentShuffleNodesNum
org.apache.uniffle.coordinator.strategy.assignment.PartitionBalanceAssignmentStrategyTest ‑ testAssignmentWithMustDiff
org.apache.uniffle.coordinator.strategy.assignment.PartitionBalanceAssignmentStrategyTest ‑ testAssignmentWithNone
org.apache.uniffle.coordinator.strategy.assignment.PartitionBalanceAssignmentStrategyTest ‑ testAssignmentWithPreferDiff
org.apache.uniffle.coordinator.strategy.assignment.PartitionBalanceAssignmentStrategyTest ‑ testWithContinuousSelectPartitionStrategy
org.apache.uniffle.coordinator.strategy.assignment.PartitionRangeAssignmentTest ‑ test
org.apache.uniffle.coordinator.strategy.assignment.PartitionRangeTest ‑ test
org.apache.uniffle.coordinator.strategy.partition.ContinuousSelectPartitionStrategyTest ‑ test
org.apache.uniffle.coordinator.strategy.storage.AppBalanceSelectStorageStrategyTest ‑ selectStorageTest
org.apache.uniffle.coordinator.strategy.storage.AppBalanceSelectStorageStrategyTest ‑ storageCounterMulThreadTest
org.apache.uniffle.coordinator.strategy.storage.LowestIOSampleCostSelectStorageStrategyTest ‑ selectStorageMulThreadTest
org.apache.uniffle.coordinator.strategy.storage.LowestIOSampleCostSelectStorageStrategyTest ‑ selectStorageTest
org.apache.uniffle.coordinator.util.CoordinatorUtilsTest ‑ testExtractClusterConf
org.apache.uniffle.coordinator.util.CoordinatorUtilsTest ‑ testGenerateRanges
org.apache.uniffle.coordinator.util.CoordinatorUtilsTest ‑ testGenerateRangesGroup
org.apache.uniffle.coordinator.util.CoordinatorUtilsTest ‑ testNextId
org.apache.uniffle.coordinator.web.UniffleServicesRESTTest ‑ testGetApplications
org.apache.uniffle.coordinator.web.UniffleServicesRESTTest ‑ testGetApplicationsPage
org.apache.uniffle.coordinator.web.UniffleServicesRESTTest ‑ testGetApplicationsWithAppRegex
org.apache.uniffle.coordinator.web.UniffleServicesRESTTest ‑ testGetApplicationsWithNoFilter
org.apache.uniffle.coordinator.web.UniffleServicesRESTTest ‑ testGetApplicationsWithNull
org.apache.uniffle.coordinator.web.UniffleServicesRESTTest ‑ testGetApplicationsWithStartTimeAndEndTime
org.apache.uniffle.dashboard.web.utils.DashboardUtilsTest ‑ testConvertToMap
org.apache.uniffle.server.HealthScriptCheckerTest ‑ checkIsHealthy
org.apache.uniffle.server.KerberizedShuffleTaskManagerTest ‑ removeShuffleDataWithHdfsTest
org.apache.uniffle.server.LocalSingleStorageTypeFromEnvProviderTest ‑ testJsonSourceParse
org.apache.uniffle.server.LocalSingleStorageTypeFromEnvProviderTest ‑ testMultipleMountPoints
org.apache.uniffle.server.LocalStorageCheckerTest ‑ testCheckingStorageHang{File}
org.apache.uniffle.server.LocalStorageCheckerTest ‑ testGetUniffleUsedSpace{File}
org.apache.uniffle.server.ShuffleFlushManagerOnKerberizedHadoopTest ‑ clearTest
org.apache.uniffle.server.ShuffleFlushManagerTest ‑ clearLocalTest{File}
Check notice on line 0 in .github
github-actions / Test Results
1175 tests found (test 725 to 1175)
There are 1175 tests, see "Raw output" for the list of tests 725 to 1175.
Raw output
org.apache.uniffle.server.ShuffleFlushManagerTest ‑ clearTest
org.apache.uniffle.server.ShuffleFlushManagerTest ‑ complexWriteTest
org.apache.uniffle.server.ShuffleFlushManagerTest ‑ concurrentWrite2HdfsWriteOfSinglePartition
org.apache.uniffle.server.ShuffleFlushManagerTest ‑ concurrentWrite2HdfsWriteOneByOne
org.apache.uniffle.server.ShuffleFlushManagerTest ‑ defaultFlushEventHandlerTest{File}
org.apache.uniffle.server.ShuffleFlushManagerTest ‑ fallbackWrittenWhenHybridStorageManagerEnableTest{File}
org.apache.uniffle.server.ShuffleFlushManagerTest ‑ hadoopConfTest
org.apache.uniffle.server.ShuffleFlushManagerTest ‑ localMetricsTest{File}
org.apache.uniffle.server.ShuffleFlushManagerTest ‑ testCreateWriteHandlerFailed{File}
org.apache.uniffle.server.ShuffleFlushManagerTest ‑ totalLocalFileWriteDataMetricTest
org.apache.uniffle.server.ShuffleFlushManagerTest ‑ writeTest
org.apache.uniffle.server.ShuffleServerConfTest ‑ confByStringTest
org.apache.uniffle.server.ShuffleServerConfTest ‑ confTest
org.apache.uniffle.server.ShuffleServerConfTest ‑ defaultConfTest
org.apache.uniffle.server.ShuffleServerConfTest ‑ envConfTest
org.apache.uniffle.server.ShuffleServerGrpcMetricsTest ‑ testLatencyMetrics
org.apache.uniffle.server.ShuffleServerMetricsTest ‑ testGrpcMetrics
org.apache.uniffle.server.ShuffleServerMetricsTest ‑ testHadoopStorageWriteDataSize
org.apache.uniffle.server.ShuffleServerMetricsTest ‑ testJvmMetrics
org.apache.uniffle.server.ShuffleServerMetricsTest ‑ testNettyMetrics
org.apache.uniffle.server.ShuffleServerMetricsTest ‑ testServerMetrics
org.apache.uniffle.server.ShuffleServerMetricsTest ‑ testServerMetricsConcurrently
org.apache.uniffle.server.ShuffleServerMetricsTest ‑ testStorageCounter
org.apache.uniffle.server.ShuffleServerTest ‑ decommissionTest{boolean}[1]
org.apache.uniffle.server.ShuffleServerTest ‑ decommissionTest{boolean}[2]
org.apache.uniffle.server.ShuffleServerTest ‑ nettyServerTest
org.apache.uniffle.server.ShuffleServerTest ‑ startTest
org.apache.uniffle.server.ShuffleTaskInfoTest ‑ hugePartitionConcurrentTest
org.apache.uniffle.server.ShuffleTaskInfoTest ‑ hugePartitionTest
org.apache.uniffle.server.ShuffleTaskInfoTest ‑ isHugePartitionTest
org.apache.uniffle.server.ShuffleTaskInfoTest ‑ partitionSizeSummaryTest
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ appPurgeWithLocalfileTest
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ checkAndClearLeakShuffleDataTest{File}
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ clearMultiTimesTest
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ clearTest
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ getBlockIdsByMultiPartitionTest
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ getBlockIdsByPartitionIdTest
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ hugePartitionMemoryUsageLimitTest
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ partitionDataSizeSummaryTest
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ registerShuffleTest
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ removeResourcesByShuffleIdsMultiTimesTest
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ removeShuffleDataWithHdfsTest
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ removeShuffleDataWithLocalfileTest
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ testAddFinishedBlockIdsWithoutRegister
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ testGetFinishedBlockIds
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ testGetMaxConcurrencyWriting
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ testRegisterShuffleAfterAppIsExpired
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ testStorageRemoveResourceHang{File}
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ writeProcessTest
org.apache.uniffle.server.StorageCheckerTest ‑ checkTest{File}
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ blockSizeMetricsTest
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ bufferManagerInitTest
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ bufferSizeTest
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ cacheShuffleDataTest
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ cacheShuffleDataWithPreAllocationTest
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ flushBufferTestWhenNotSelectedStorage{File}
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ flushSingleBufferForHugePartitionTest{File}
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ flushSingleBufferTest{File}
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ getShuffleDataTest
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ getShuffleDataWithExpectedTaskIdsTest
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ registerBufferTest
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ shuffleFlushThreshold
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ shuffleIdToSizeTest
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ splitPartitionTest{File}
org.apache.uniffle.server.buffer.ShuffleBufferWithLinkedListTest ‑ appendMultiBlocksTest
org.apache.uniffle.server.buffer.ShuffleBufferWithLinkedListTest ‑ appendRepeatBlockTest
org.apache.uniffle.server.buffer.ShuffleBufferWithLinkedListTest ‑ appendTest
org.apache.uniffle.server.buffer.ShuffleBufferWithLinkedListTest ‑ getShuffleDataTest
org.apache.uniffle.server.buffer.ShuffleBufferWithLinkedListTest ‑ getShuffleDataWithExpectedTaskIdsFilterTest
org.apache.uniffle.server.buffer.ShuffleBufferWithLinkedListTest ‑ getShuffleDataWithLocalOrderTest
org.apache.uniffle.server.buffer.ShuffleBufferWithLinkedListTest ‑ toFlushEventTest
org.apache.uniffle.server.buffer.ShuffleBufferWithSkipListTest ‑ appendMultiBlocksTest
org.apache.uniffle.server.buffer.ShuffleBufferWithSkipListTest ‑ appendRepeatBlockTest
org.apache.uniffle.server.buffer.ShuffleBufferWithSkipListTest ‑ appendTest
org.apache.uniffle.server.buffer.ShuffleBufferWithSkipListTest ‑ getShuffleDataWithExpectedTaskIdsFilterTest
org.apache.uniffle.server.buffer.ShuffleBufferWithSkipListTest ‑ toFlushEventTest
org.apache.uniffle.server.merge.BlockFlushFileReaderTest ‑ writeTestWithMergeWhenInterrupted{String, File}[1]
org.apache.uniffle.server.merge.BlockFlushFileReaderTest ‑ writeTestWithMergeWhenInterrupted{String, File}[2]
org.apache.uniffle.server.merge.BlockFlushFileReaderTest ‑ writeTestWithMergeWhenInterrupted{String, File}[3]
org.apache.uniffle.server.merge.BlockFlushFileReaderTest ‑ writeTestWithMergeWhenInterrupted{String, File}[4]
org.apache.uniffle.server.merge.BlockFlushFileReaderTest ‑ writeTestWithMergeWhenInterrupted{String, File}[5]
org.apache.uniffle.server.merge.BlockFlushFileReaderTest ‑ writeTestWithMergeWhenInterrupted{String, File}[6]
org.apache.uniffle.server.merge.BlockFlushFileReaderTest ‑ writeTestWithMerge{String, File}[10]
org.apache.uniffle.server.merge.BlockFlushFileReaderTest ‑ writeTestWithMerge{String, File}[11]
org.apache.uniffle.server.merge.BlockFlushFileReaderTest ‑ writeTestWithMerge{String, File}[12]
org.apache.uniffle.server.merge.BlockFlushFileReaderTest ‑ writeTestWithMerge{String, File}[1]
org.apache.uniffle.server.merge.BlockFlushFileReaderTest ‑ writeTestWithMerge{String, File}[2]
org.apache.uniffle.server.merge.BlockFlushFileReaderTest ‑ writeTestWithMerge{String, File}[3]
org.apache.uniffle.server.merge.BlockFlushFileReaderTest ‑ writeTestWithMerge{String, File}[4]
org.apache.uniffle.server.merge.BlockFlushFileReaderTest ‑ writeTestWithMerge{String, File}[5]
org.apache.uniffle.server.merge.BlockFlushFileReaderTest ‑ writeTestWithMerge{String, File}[6]
org.apache.uniffle.server.merge.BlockFlushFileReaderTest ‑ writeTestWithMerge{String, File}[7]
org.apache.uniffle.server.merge.BlockFlushFileReaderTest ‑ writeTestWithMerge{String, File}[8]
org.apache.uniffle.server.merge.BlockFlushFileReaderTest ‑ writeTestWithMerge{String, File}[9]
org.apache.uniffle.server.merge.MergedResultTest ‑ testMergeSegmentToMergeResult{String, File}[1]
org.apache.uniffle.server.merge.MergedResultTest ‑ testMergeSegmentToMergeResult{String, File}[2]
org.apache.uniffle.server.merge.MergedResultTest ‑ testMergeSegmentToMergeResult{String, File}[3]
org.apache.uniffle.server.merge.MergedResultTest ‑ testMergeSegmentToMergeResult{String, File}[4]
org.apache.uniffle.server.merge.MergedResultTest ‑ testMergeSegmentToMergeResult{String, File}[5]
org.apache.uniffle.server.merge.MergedResultTest ‑ testMergeSegmentToMergeResult{String, File}[6]
org.apache.uniffle.server.merge.MergedResultTest ‑ testMergedResult
org.apache.uniffle.server.merge.ShuffleMergeManagerTest ‑ testMergerManager{String}[1]
org.apache.uniffle.server.merge.ShuffleMergeManagerTest ‑ testMergerManager{String}[2]
org.apache.uniffle.server.merge.ShuffleMergeManagerTest ‑ testMergerManager{String}[3]
org.apache.uniffle.server.merge.ShuffleMergeManagerTest ‑ testMergerManager{String}[4]
org.apache.uniffle.server.merge.ShuffleMergeManagerTest ‑ testMergerManager{String}[5]
org.apache.uniffle.server.merge.ShuffleMergeManagerTest ‑ testMergerManager{String}[6]
org.apache.uniffle.server.storage.HadoopStorageManagerTest ‑ testRegisterRemoteStorage
org.apache.uniffle.server.storage.HadoopStorageManagerTest ‑ testRemoveExpiredResourcesWithOneReplica{File}
org.apache.uniffle.server.storage.HadoopStorageManagerTest ‑ testRemoveExpiredResourcesWithTwoReplicas{File}
org.apache.uniffle.server.storage.HadoopStorageManagerTest ‑ testRemoveResources
org.apache.uniffle.server.storage.HybridStorageManagerTest ‑ fallbackTestWhenLocalStorageCorrupted
org.apache.uniffle.server.storage.HybridStorageManagerTest ‑ selectStorageManagerTest
org.apache.uniffle.server.storage.HybridStorageManagerTest ‑ testStorageManagerSelectorOfPreferCold
org.apache.uniffle.server.storage.HybridStorageManagerTest ‑ underStorageManagerSelectionTest
org.apache.uniffle.server.storage.LocalStorageManagerTest ‑ testEnvStorageTypeProvider
org.apache.uniffle.server.storage.LocalStorageManagerTest ‑ testGetLocalStorageInfo
org.apache.uniffle.server.storage.LocalStorageManagerTest ‑ testInitLocalStorageManager
org.apache.uniffle.server.storage.LocalStorageManagerTest ‑ testInitializeLocalStorage
org.apache.uniffle.server.storage.LocalStorageManagerTest ‑ testNewAppWhileCheckLeak{ExtensionContext}
org.apache.uniffle.server.storage.LocalStorageManagerTest ‑ testStorageSelection
org.apache.uniffle.server.storage.LocalStorageManagerTest ‑ testStorageSelectionWhenReachingHighWatermark
org.apache.uniffle.server.storage.StorageManagerFallbackStrategyTest ‑ testDefaultFallbackStrategy
org.apache.uniffle.server.storage.StorageManagerFallbackStrategyTest ‑ testHadoopFallbackStrategy
org.apache.uniffle.server.storage.StorageManagerFallbackStrategyTest ‑ testLocalFallbackStrategy
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutInsufficientConfigException{Integer, Integer, Integer, String}[1]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutInsufficientConfigException{Integer, Integer, Integer, String}[2]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutInsufficientConfigException{Integer, Integer, Integer, String}[3]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutInsufficientConfigException{Integer, Integer, Integer, String}[4]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutInsufficientConfigException{Integer, Integer, Integer, String}[5]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutInsufficientConfigException{Integer, Integer, Integer, String}[6]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutMaxPartitionsValueException{String, int, boolean}[1]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutMaxPartitionsValueException{String, int, boolean}[2]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutMaxPartitionsValueException{String, int, boolean}[3]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutMaxPartitionsValueException{String, int, boolean}[4]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutMaxPartitionsValueException{String, int, boolean}[5]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutOverrides
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutUnsupportedMaxPartitions{String, int, boolean, String}[1]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutUnsupportedMaxPartitions{String, int, boolean, String}[2]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutUnsupportedMaxPartitions{String, int, boolean, String}[3]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutUnsupportedMaxPartitions{String, int, boolean, String}[4]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutUnsupportedMaxPartitions{String, int, boolean, String}[5]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutUnsupportedMaxPartitions{String, int, boolean, String}[6]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[10]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[11]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[12]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[13]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[14]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[15]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[16]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[17]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[18]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[19]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[1]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[20]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[21]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[22]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[23]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[24]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[25]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[26]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[27]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[28]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[29]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[2]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[30]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[31]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[3]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[4]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[5]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[6]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[7]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[8]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[9]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testFetchAndApplyDynamicConf
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testGetDefaultRemoteStorageInfo
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testGetTaskAttemptIdWithSpeculation
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testGetTaskAttemptIdWithoutSpeculation
org.apache.uniffle.shuffle.manager.ShuffleManagerGrpcServiceTest ‑ testShuffleManagerGrpcService
org.apache.uniffle.shuffle.manager.ShuffleManagerServerFactoryTest ‑ testShuffleManagerServerType{ServerType}[1]
org.apache.uniffle.shuffle.manager.ShuffleManagerServerFactoryTest ‑ testShuffleManagerServerType{ServerType}[2]
org.apache.uniffle.storage.common.DefaultStorageMediaProviderTest ‑ getGetDeviceName
org.apache.uniffle.storage.common.DefaultStorageMediaProviderTest ‑ getGetFileStore{File}
org.apache.uniffle.storage.common.DefaultStorageMediaProviderTest ‑ testStorageProvider
org.apache.uniffle.storage.common.LocalStorageTest ‑ baseDirectoryInitTest
org.apache.uniffle.storage.common.LocalStorageTest ‑ canWriteTest
org.apache.uniffle.storage.common.LocalStorageTest ‑ canWriteTestWithDiskCapacityCheck
org.apache.uniffle.storage.common.LocalStorageTest ‑ diskStorageInfoTest
org.apache.uniffle.storage.common.LocalStorageTest ‑ getCapacityInitTest
org.apache.uniffle.storage.common.LocalStorageTest ‑ writeHandlerTest
org.apache.uniffle.storage.common.ShuffleFileInfoTest ‑ test
org.apache.uniffle.storage.handler.impl.HadoopClientReadHandlerTest ‑ test
org.apache.uniffle.storage.handler.impl.HadoopFileReaderTest ‑ createStreamAppendTest
org.apache.uniffle.storage.handler.impl.HadoopFileReaderTest ‑ createStreamTest
org.apache.uniffle.storage.handler.impl.HadoopFileReaderTest ‑ readDataTest
org.apache.uniffle.storage.handler.impl.HadoopFileWriterTest ‑ createStreamAppendTest
org.apache.uniffle.storage.handler.impl.HadoopFileWriterTest ‑ createStreamDirectory
org.apache.uniffle.storage.handler.impl.HadoopFileWriterTest ‑ createStreamFirstTest
org.apache.uniffle.storage.handler.impl.HadoopFileWriterTest ‑ createStreamTest
org.apache.uniffle.storage.handler.impl.HadoopFileWriterTest ‑ writeBufferArrayTest
org.apache.uniffle.storage.handler.impl.HadoopFileWriterTest ‑ writeBufferTest
org.apache.uniffle.storage.handler.impl.HadoopFileWriterTest ‑ writeSegmentTest
org.apache.uniffle.storage.handler.impl.HadoopHandlerTest ‑ initTest
org.apache.uniffle.storage.handler.impl.HadoopHandlerTest ‑ writeTest
org.apache.uniffle.storage.handler.impl.HadoopShuffleReadHandlerTest ‑ test
org.apache.uniffle.storage.handler.impl.HadoopShuffleReadHandlerTest ‑ testDataInconsistent
org.apache.uniffle.storage.handler.impl.KerberizedHadoopClientReadHandlerTest ‑ test
org.apache.uniffle.storage.handler.impl.KerberizedHadoopShuffleReadHandlerTest ‑ test
org.apache.uniffle.storage.handler.impl.LocalFileHandlerTest ‑ testReadIndex
org.apache.uniffle.storage.handler.impl.LocalFileHandlerTest ‑ writeBigDataTest{File}
org.apache.uniffle.storage.handler.impl.LocalFileHandlerTest ‑ writeTest{File}
org.apache.uniffle.storage.handler.impl.LocalFileServerReadHandlerTest ‑ testDataInconsistent
org.apache.uniffle.storage.handler.impl.PooledHadoopShuffleWriteHandlerTest ‑ concurrentWrite
org.apache.uniffle.storage.handler.impl.PooledHadoopShuffleWriteHandlerTest ‑ initializationFailureTest
org.apache.uniffle.storage.handler.impl.PooledHadoopShuffleWriteHandlerTest ‑ lazyInitializeWriterHandlerTest
org.apache.uniffle.storage.handler.impl.PooledHadoopShuffleWriteHandlerTest ‑ writeSameFileWhenNoRaceCondition
org.apache.uniffle.storage.handler.impl.PrefetchableClientReadHandlerTest ‑ test_with_fetch_failure
org.apache.uniffle.storage.handler.impl.PrefetchableClientReadHandlerTest ‑ test_with_prefetch
org.apache.uniffle.storage.handler.impl.PrefetchableClientReadHandlerTest ‑ test_with_timeout
org.apache.uniffle.storage.handler.impl.PrefetchableClientReadHandlerTest ‑ test_without_prefetch
org.apache.uniffle.storage.util.ShuffleHadoopStorageUtilsTest ‑ testUploadFile{File}
org.apache.uniffle.storage.util.ShuffleKerberizedHadoopStorageUtilsTest ‑ testUploadFile{File}
org.apache.uniffle.storage.util.ShuffleStorageUtilsTest ‑ getPartitionRangeTest
org.apache.uniffle.storage.util.ShuffleStorageUtilsTest ‑ getShuffleDataPathWithRangeTest
org.apache.uniffle.storage.util.ShuffleStorageUtilsTest ‑ getStorageIndexTest
org.apache.uniffle.storage.util.ShuffleStorageUtilsTest ‑ mergeSegmentsTest
org.apache.uniffle.storage.util.StorageTypeTest ‑ commonTest
org.apache.uniffle.test.AQERepartitionTest ‑ resultCompareTest
org.apache.uniffle.test.AQESkewedJoinTest ‑ resultCompareTest
org.apache.uniffle.test.AQESkewedJoinWithLocalOrderTest ‑ resultCompareTest
org.apache.uniffle.test.AccessCandidatesCheckerHadoopTest ‑ test
org.apache.uniffle.test.AccessCandidatesCheckerKerberizedHadoopTest ‑ test
org.apache.uniffle.test.AccessClusterTest ‑ org.apache.uniffle.test.AccessClusterTest
org.apache.uniffle.test.AssignmentWithTagsTest ‑ testTags
org.apache.uniffle.test.AutoAccessTest ‑ test
org.apache.uniffle.test.CombineByKeyTest ‑ combineByKeyTest
org.apache.uniffle.test.ContinuousSelectPartitionStrategyTest ‑ resultCompareTest
org.apache.uniffle.test.CoordinatorAdminServiceTest ‑ test
org.apache.uniffle.test.CoordinatorAssignmentTest ‑ testAssignmentServerNodesNumber
org.apache.uniffle.test.CoordinatorAssignmentTest ‑ testGetReShuffleAssignments
org.apache.uniffle.test.CoordinatorAssignmentTest ‑ testSilentPeriod
org.apache.uniffle.test.CoordinatorGrpcServerTest ‑ testGrpcConnectionSize
org.apache.uniffle.test.CoordinatorGrpcTest ‑ appHeartbeatTest
org.apache.uniffle.test.CoordinatorGrpcTest ‑ getShuffleAssignmentsTest{File}
org.apache.uniffle.test.CoordinatorGrpcTest ‑ getShuffleRegisterInfoTest
org.apache.uniffle.test.CoordinatorGrpcTest ‑ rpcMetricsTest
org.apache.uniffle.test.CoordinatorGrpcTest ‑ shuffleServerHeartbeatTest{File}
org.apache.uniffle.test.CoordinatorGrpcTest ‑ testGetPartitionToServers
org.apache.uniffle.test.CoordinatorReconfigureNodeMaxTest ‑ testReconfigureNodeMax
org.apache.uniffle.test.DynamicClientConfServiceHadoopTest ‑ test
org.apache.uniffle.test.DynamicClientConfServiceKerberlizedHadoopTest ‑ testConfInHadoop
org.apache.uniffle.test.DynamicConfTest ‑ dynamicConfTest{ClientType}[1]
org.apache.uniffle.test.DynamicConfTest ‑ dynamicConfTest{ClientType}[2]
org.apache.uniffle.test.DynamicFetchClientConfTest ‑ test
org.apache.uniffle.test.FailingTasksTest ‑ testFailedTasks
org.apache.uniffle.test.FetchClientConfTest ‑ testFetchRemoteStorageByApp{File}
org.apache.uniffle.test.FetchClientConfTest ‑ testFetchRemoteStorageByIO{File}
org.apache.uniffle.test.FetchClientConfTest ‑ test{File}
org.apache.uniffle.test.GetReaderTest ‑ test
org.apache.uniffle.test.GetShuffleReportForMultiPartTest ‑ resultCompareTest
org.apache.uniffle.test.GroupByKeyTest ‑ groupByTest
org.apache.uniffle.test.HadoopConfTest ‑ hadoopConfTest{ClientType}[1]
org.apache.uniffle.test.HadoopConfTest ‑ hadoopConfTest{ClientType}[2]
org.apache.uniffle.test.HealthCheckCoordinatorGrpcTest ‑ healthCheckTest
org.apache.uniffle.test.HealthCheckTest ‑ buildInCheckerTest
org.apache.uniffle.test.HealthCheckTest ‑ checkTest
org.apache.uniffle.test.LargeSorterTest ‑ largeSorterTest{ClientType}[1]
org.apache.uniffle.test.LargeSorterTest ‑ largeSorterTest{ClientType}[2]
org.apache.uniffle.test.MapSideCombineTest ‑ resultCompareTest
org.apache.uniffle.test.NullOfKeyOrValueTest ‑ nullOfKeyOrValueTest
org.apache.uniffle.test.PartitionBalanceCoordinatorGrpcTest ‑ getShuffleAssignmentsTest
org.apache.uniffle.test.PartitionBlockDataReassignBasicTest ‑ resultCompareTest
org.apache.uniffle.test.PartitionBlockDataReassignMultiTimesTest ‑ resultCompareTest
org.apache.uniffle.test.QuorumTest ‑ case1
org.apache.uniffle.test.QuorumTest ‑ case10
org.apache.uniffle.test.QuorumTest ‑ case11
org.apache.uniffle.test.QuorumTest ‑ case12
org.apache.uniffle.test.QuorumTest ‑ case2
org.apache.uniffle.test.QuorumTest ‑ case3
org.apache.uniffle.test.QuorumTest ‑ case4
org.apache.uniffle.test.QuorumTest ‑ case5{File}
org.apache.uniffle.test.QuorumTest ‑ case6
org.apache.uniffle.test.QuorumTest ‑ case7
org.apache.uniffle.test.QuorumTest ‑ case8
org.apache.uniffle.test.QuorumTest ‑ case9
org.apache.uniffle.test.QuorumTest ‑ quorumConfigTest
org.apache.uniffle.test.QuorumTest ‑ rpcFailedTest
org.apache.uniffle.test.RMTezOrderedWordCountTest ‑ orderedWordCountTest
org.apache.uniffle.test.RMWordCountTest ‑ wordCountTest{ClientType}[1]
org.apache.uniffle.test.RMWordCountTest ‑ wordCountTest{ClientType}[2]
org.apache.uniffle.test.RSSStageDynamicServerReWriteTest ‑ testRSSStageResubmit
org.apache.uniffle.test.RSSStageResubmitTest ‑ testRSSStageResubmit
org.apache.uniffle.test.ReassignAndStageRetryTest ‑ resultCompareTest
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestMultiPartitionWithCombine{String}[1]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestMultiPartitionWithCombine{String}[2]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestMultiPartitionWithCombine{String}[3]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestMultiPartitionWithCombine{String}[4]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestMultiPartitionWithCombine{String}[5]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestMultiPartitionWithCombine{String}[6]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestMultiPartitionWithCombine{String}[7]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestMultiPartitionWithCombine{String}[8]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestMultiPartition{String}[1]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestMultiPartition{String}[2]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestMultiPartition{String}[3]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestMultiPartition{String}[4]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestMultiPartition{String}[5]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestMultiPartition{String}[6]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestMultiPartition{String}[7]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestMultiPartition{String}[8]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestWithCombine{String}[1]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestWithCombine{String}[2]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestWithCombine{String}[3]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestWithCombine{String}[4]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestWithCombine{String}[5]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestWithCombine{String}[6]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestWithCombine{String}[7]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestWithCombine{String}[8]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTest{String}[1]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTest{String}[2]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTest{String}[3]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTest{String}[4]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTest{String}[5]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTest{String}[6]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTest{String}[7]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTest{String}[8]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestMultiPartitionWithCombine{String}[1]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestMultiPartitionWithCombine{String}[2]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestMultiPartitionWithCombine{String}[3]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestMultiPartitionWithCombine{String}[4]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestMultiPartitionWithCombine{String}[5]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestMultiPartitionWithCombine{String}[6]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestMultiPartitionWithCombine{String}[7]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestMultiPartitionWithCombine{String}[8]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestMultiPartition{String}[1]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestMultiPartition{String}[2]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestMultiPartition{String}[3]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestMultiPartition{String}[4]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestMultiPartition{String}[5]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestMultiPartition{String}[6]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestMultiPartition{String}[7]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestMultiPartition{String}[8]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestWithCombine{String}[1]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestWithCombine{String}[2]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestWithCombine{String}[3]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestWithCombine{String}[4]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestWithCombine{String}[5]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestWithCombine{String}[6]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestWithCombine{String}[7]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestWithCombine{String}[8]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTest{String}[1]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTest{String}[2]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTest{String}[3]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTest{String}[4]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTest{String}[5]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTest{String}[6]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTest{String}[7]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTest{String}[8]
org.apache.uniffle.test.RepartitionWithHadoopHybridStorageRssTest ‑ resultCompareTest
org.apache.uniffle.test.RepartitionWithLocalFileRssTest ‑ resultCompareTest
org.apache.uniffle.test.RepartitionWithMemoryHybridStorageRssTest ‑ resultCompareTest
org.apache.uniffle.test.RepartitionWithMemoryRssTest ‑ resultCompareTest
org.apache.uniffle.test.RepartitionWithMemoryRssTest ‑ testMemoryRelease
org.apache.uniffle.test.RpcClientRetryTest ‑ testRpcRetryLogic{StorageType}[1]
org.apache.uniffle.test.RpcClientRetryTest ‑ testRpcRetryLogic{StorageType}[2]
org.apache.uniffle.test.RssShuffleManagerTest ‑ testRssShuffleManagerClientConfOverride{boolean}[1]
org.apache.uniffle.test.RssShuffleManagerTest ‑ testRssShuffleManagerClientConfOverride{boolean}[2]
org.apache.uniffle.test.RssShuffleManagerTest ‑ testRssShuffleManagerClientConf{BlockIdLayout}[1]
org.apache.uniffle.test.RssShuffleManagerTest ‑ testRssShuffleManagerClientConf{BlockIdLayout}[2]
org.apache.uniffle.test.RssShuffleManagerTest ‑ testRssShuffleManagerClientConf{BlockIdLayout}[3]
org.apache.uniffle.test.RssShuffleManagerTest ‑ testRssShuffleManagerDynamicClientConf{BlockIdLayout}[1]
org.apache.uniffle.test.RssShuffleManagerTest ‑ testRssShuffleManagerDynamicClientConf{BlockIdLayout}[2]
org.apache.uniffle.test.RssShuffleManagerTest ‑ testRssShuffleManagerDynamicClientConf{BlockIdLayout}[3]
org.apache.uniffle.test.RssShuffleManagerTest ‑ testRssShuffleManager{boolean}[1]
org.apache.uniffle.test.RssShuffleManagerTest ‑ testRssShuffleManager{boolean}[2]
org.apache.uniffle.test.SecondarySortTest ‑ secondarySortTest{ClientType}[1]
org.apache.uniffle.test.SecondarySortTest ‑ secondarySortTest{ClientType}[2]
org.apache.uniffle.test.ServletTest ‑ testDecommissionServlet
org.apache.uniffle.test.ServletTest ‑ testDecommissionSingleNode
org.apache.uniffle.test.ServletTest ‑ testDecommissionedNodeServlet
org.apache.uniffle.test.ServletTest ‑ testGetSingleNode
org.apache.uniffle.test.ServletTest ‑ testLostNodesServlet
org.apache.uniffle.test.ServletTest ‑ testNodesServlet
org.apache.uniffle.test.ServletTest ‑ testRequestWithWrongCredentials
org.apache.uniffle.test.ServletTest ‑ testUnhealthyNodesServlet
org.apache.uniffle.test.ShuffleServerConcurrentWriteOfHadoopTest ‑ testConcurrentWrite2Hadoop{int, int, boolean}[1]
org.apache.uniffle.test.ShuffleServerConcurrentWriteOfHadoopTest ‑ testConcurrentWrite2Hadoop{int, int, boolean}[2]
org.apache.uniffle.test.ShuffleServerConcurrentWriteOfHadoopTest ‑ testConcurrentWrite2Hadoop{int, int, boolean}[3]
org.apache.uniffle.test.ShuffleServerConcurrentWriteOfHadoopTest ‑ testConcurrentWrite2Hadoop{int, int, boolean}[4]
org.apache.uniffle.test.ShuffleServerGrpcTest ‑ clearResourceTest
org.apache.uniffle.test.ShuffleServerGrpcTest ‑ multipleShuffleResultTest{BlockIdLayout}[1]
org.apache.uniffle.test.ShuffleServerGrpcTest ‑ multipleShuffleResultTest{BlockIdLayout}[2]
org.apache.uniffle.test.ShuffleServerGrpcTest ‑ registerTest
org.apache.uniffle.test.ShuffleServerGrpcTest ‑ rpcMetricsTest
org.apache.uniffle.test.ShuffleServerGrpcTest ‑ sendDataWithoutRequirePreAllocation
org.apache.uniffle.test.ShuffleServerGrpcTest ‑ shuffleResultTest
org.apache.uniffle.test.ShuffleServerInternalGrpcTest ‑ decommissionTest
org.apache.uniffle.test.ShuffleServerOnRandomPortTest ‑ startGrpcServerOnRandomPort{File}
org.apache.uniffle.test.ShuffleServerOnRandomPortTest ‑ startStreamServerOnRandomPort{File}
org.apache.uniffle.test.ShuffleServerWithLocalOfExceptionTest ‑ testReadWhenConnectionFailedShouldThrowException
org.apache.uniffle.test.ShuffleServerWithMemLocalHadoopTest ‑ memoryLocalFileHadoopReadWithFilterTest{boolean, boolean}[1]
org.apache.uniffle.test.ShuffleServerWithMemLocalHadoopTest ‑ memoryLocalFileHadoopReadWithFilterTest{boolean, boolean}[2]
org.apache.uniffle.test.ShuffleServerWithMemLocalHadoopTest ‑ memoryLocalFileHadoopReadWithFilterTest{boolean, boolean}[3]
org.apache.uniffle.test.ShuffleServerWithMemLocalHadoopTest ‑ memoryLocalFileHadoopReadWithFilterTest{boolean, boolean}[4]
org.apache.uniffle.test.ShuffleUnregisterWithHadoopTest ‑ unregisterShuffleTest
org.apache.uniffle.test.ShuffleUnregisterWithLocalfileTest ‑ unregisterShuffleTest
org.apache.uniffle.test.ShuffleWithRssClientTest ‑ emptyTaskTest
org.apache.uniffle.test.ShuffleWithRssClientTest ‑ reportBlocksToShuffleServerIfNecessary
org.apache.uniffle.test.ShuffleWithRssClientTest ‑ reportMultipleServerTest
org.apache.uniffle.test.ShuffleWithRssClientTest ‑ rpcFailTest
org.apache.uniffle.test.ShuffleWithRssClientTest ‑ testRetryAssgin
org.apache.uniffle.test.ShuffleWithRssClientTest ‑ writeReadTest
org.apache.uniffle.test.SimpleShuffleServerManagerTest ‑ testClientAndServerConnections
org.apache.uniffle.test.SparkClientWithLocalForMultiPartLocalStorageManagerTest ‑ testClientRemoteReadFromMultipleDisk{boolean}[1]
org.apache.uniffle.test.SparkClientWithLocalForMultiPartLocalStorageManagerTest ‑ testClientRemoteReadFromMultipleDisk{boolean}[2]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest10{boolean}[1]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest10{boolean}[2]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest1{boolean}[1]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest1{boolean}[2]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest2{boolean}[1]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest2{boolean}[2]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest3{boolean}[1]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest3{boolean}[2]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest4{boolean}[1]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest4{boolean}[2]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest5{boolean}[1]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest5{boolean}[2]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest6{boolean}[1]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest6{boolean}[2]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest7{boolean}[1]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest7{boolean}[2]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest8{boolean}[1]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest8{boolean}[2]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest9{boolean}[1]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest9{boolean}[2]
org.apache.uniffle.test.SparkSQLWithDelegationShuffleManagerFallbackTest ‑ resultCompareTest
org.apache.uniffle.test.SparkSQLWithDelegationShuffleManagerTest ‑ resultCompareTest
org.apache.uniffle.test.SparkSQLWithMemoryLocalTest ‑ resultCompareTest
org.apache.uniffle.test.TezCartesianProductTest ‑ cartesianProductTest
org.apache.uniffle.test.TezHashJoinTest ‑ hashJoinDoBroadcastTest
org.apache.uniffle.test.TezHashJoinTest ‑ hashJoinTest
org.apache.uniffle.test.TezOrderedWordCountTest ‑ orderedWordCountTest
org.apache.uniffle.test.TezSimpleSessionExampleTest ‑ simpleSessionExampleTest
org.apache.uniffle.test.TezSortMergeJoinTest ‑ sortMergeJoinTest
org.apache.uniffle.test.TezWordCountTest ‑ wordCountTest
org.apache.uniffle.test.TezWordCountWithFailuresTest ‑ wordCountTestWithNodeUnhealthyWhenAvoidRecomputeDisable
org.apache.uniffle.test.TezWordCountWithFailuresTest ‑ wordCountTestWithNodeUnhealthyWhenAvoidRecomputeEnable
org.apache.uniffle.test.TezWordCountWithFailuresTest ‑ wordCountTestWithTaskFailureWhenAvoidRecomputeDisable
org.apache.uniffle.test.TezWordCountWithFailuresTest ‑ wordCountTestWithTaskFailureWhenAvoidRecomputeEnable
org.apache.uniffle.test.WordCountTest ‑ wordCountTest{ClientType}[1]
org.apache.uniffle.test.WordCountTest ‑ wordCountTest{ClientType}[2]
org.apache.uniffle.test.WriteAndReadMetricsTest ‑ test