refactor(api): use NightCompress library for reading CBX #113

imnotjames wants to merge 38 commits into grimmory-tools:develop

Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough

Replaces per-format archive handling and the external unrar CLI with a unified Spring-managed ArchiveService (NightCompress/libarchive), refactors services/tests to delegate to it, removes UnrarHelper, and updates CI/Docker/Gradle to provide libarchive and enable JVM native access.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Reader as "Cbx* (Processor/Reader/Extractor)"
    participant ArchiveSvc as "ArchiveService (NightCompress)"
    participant Native as "libarchive (native)"
    participant FS as "Filesystem (archive Path)"
    Reader->>ArchiveSvc: streamEntryNames(path) / getEntryBytes(path,name) / transferEntryTo(path,name,out)
    ArchiveSvc->>FS: open archive file (Path)
    ArchiveSvc->>Native: invoke libarchive via NightCompress bindings
    Native-->>ArchiveSvc: entry metadata / data stream
    ArchiveSvc-->>Reader: entry names / bytes / streams
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Force-pushed: 6fcc749 → d40109e
Force-pushed: d40109e → 736b501
Force-pushed: b864c95 → dc0a281
I wonder if it'd be better to use Gradle to download libarchive. The Proxmox helper scripts folks are gonna be mad if we push a breaking change to their thing, and since they use Gradle too, but not Docker, I imagine it'd be good if it was done that way. I guess this is just a shower thought. What do you think? 🤔 I may be overthinking it, and it'd be better to just let them know this in advance and force it through. But anyway, thanks for this PR, it's really valuable/high priority.

I'll investigate.
Force-pushed: a216306 → 1720775
I could split this into three PRs.
That might be easier to chew on than this one big PR.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java (1)
891-900: ⚠️ Potential issue | 🟡 Minor — Test setup doesn't mock image file discovery correctly.

The test calls `mockComicInfo()`, which stubs `streamEntryNames()` to return only `"ComicInfo.xml"`. However, `extractCover()` uses `getComicImageEntryNames()` to discover image files by filtering `streamEntryNames()` for image extensions. Since `"ComicInfo.xml"` is not an image file, it gets filtered out and the stream returns empty. The mock of `getEntryBytes(cbz, "path_001.jpg")` is never reached because `"path_001.jpg"` is not included in the `streamEntryNames()` mock.

Use `mockArchiveContents()` instead, as other cover extraction tests do (lines 904+), to properly set up both entry names and their byte contents:

```java
Path cbz = mockArchiveContents(Map.of(
    "ComicInfo.xml", wrapInComicInfo("<Title>Test</Title>").getBytes(),
    "path_001.jpg", createMinimalJpeg()
));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java` around lines 891 - 900, The test uses mockComicInfo(...) which only stubs streamEntryNames() to return "ComicInfo.xml", so extractor.extractCover(cbz) never sees "path_001.jpg"; replace the setup to use mockArchiveContents(...) (as other tests do) to provide both "ComicInfo.xml" and "path_001.jpg" entries and their bytes so getComicImageEntryNames()/streamEntryNames() will include the image and the existing mocked getEntryBytes/getEntryBytes lookup is exercised; update the test to call mockArchiveContents with a map containing "ComicInfo.xml" (wrapped via wrapInComicInfo(...) bytes) and "path_001.jpg" (createMinimalJpeg() bytes) instead of mockComicInfo(...).
🧹 Nitpick comments (9)
dev.docker-compose.yml (1)
6-9: Consider removing the unrar-layer if fully migrating to NightCompress.

The dev compose still includes the `unrar-layer` stage and copies the `unrar` binary, while also adding `libarchive` for NightCompress. If the goal is to fully replace unrar with NightCompress, consider removing lines 6 and 9 to align with the PR objective of eliminating unrar dependencies.

If this is intentional for fallback or incremental migration, this can be addressed in a follow-up PR.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dev.docker-compose.yml` around lines 6 - 9, The Dockerfile still defines the unrar build stage and copies the unrar binary (the "unrar-layer" stage and the COPY --from=unrar-layer /usr/bin/unrar-alpine /usr/local/bin/unrar) even though NightCompress and libarchive are being added; remove the "FROM linuxserver/unrar:7.1.10 AS unrar-layer" stage and the corresponding COPY of "unrar-alpine" to fully eliminate unrar dependencies, or explicitly document/guard those lines if you intend to keep unrar as a fallback during incremental migration.

booklore-api/src/test/java/org/booklore/service/metadata/writer/CbxMetadataWriterTest.java (1)
264-280: Consider adding assertions for the CBR→CBZ conversion side effects.The test validates that the output CBZ contains the correct metadata, but it doesn't verify:
- Whether the original `.cbr` file was deleted
- Whether the book entity's file path was updated to point to the new `.cbz`

If the writer is expected to handle file renaming/deletion, consider adding assertions to cover these behaviors. Otherwise, if this is handled elsewhere, this test is sufficient.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@booklore-api/src/test/java/org/booklore/service/metadata/writer/CbxMetadataWriterTest.java` around lines 264 - 280, Add assertions that verify CBR→CBZ side effects: after calling writer.saveMetadataToFile(zipAsCbr, meta, ...), assert the original .cbr file was removed (e.g. assertFalse(Files.exists(zipAsCbr.toPath())) or similar) and assert the book entity's file path was updated to the new .cbz (for example assertEquals(zipAsCbz.toString(), meta.getFilePath()) or the appropriate getter on BookMetadataEntity). Locate these checks in the test method saveMetadataToFile_ZipNamedAsCbr_ShouldUpdateMetadata near the existing writer.saveMetadataToFile call and ZIP assertions.

booklore-api/src/test/java/org/booklore/service/reader/CbxReaderServiceTest.java (1)
93-127: Tests `testStreamPageImage_PageOutOfRange_Throws` and `testStreamPageImage_EntryNotFound_Throws` appear to test the same scenario.

Both tests mock the same setup (single entry "1.jpg") and request page 2, which is out of range. The test names suggest different scenarios but the implementations are identical.
Consider either:
- Removing one as a duplicate, or
- Differentiating `testStreamPageImage_EntryNotFound_Throws` to test a scenario where the entry exists in metadata but `transferEntryTo` fails (e.g., throws an exception when the actual file entry is missing from the archive).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@booklore-api/src/test/java/org/booklore/service/reader/CbxReaderServiceTest.java` around lines 93 - 127, The two tests are identical; change testStreamPageImage_EntryNotFound_Throws so it actually covers the "entry present but transfer fails" case: keep bookRepository/findByIdWithBookFiles and the FileUtils/Files static mocks, have archiveService.streamEntryNames(cbzPath) return a stream that includes the requested entry name (e.g., "2.jpg") to simulate metadata showing the entry exists, and mock archiveService.transferEntryTo(cbzPath, "2.jpg", any(OutputStream.class)) to throw a FileNotFoundException (or appropriate exception) so asserting cbxReaderService.streamPageImage(1L, 2, new ByteArrayOutputStream()) throws verifies the transfer failure path instead of duplicating the out-of-range test.booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java (1)
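The "entry listed but transfer fails" scenario suggested above can be illustrated with a self-contained sketch. The `ArchiveService` interface and the `streamPage` logic below are simplified stand-ins declared locally for illustration, not the project's real classes:

```java
import java.io.ByteArrayOutputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Path;
import java.util.stream.Stream;

public class EntryNotFoundDemo {
    // Local stand-in for the project's archive abstraction (hypothetical signatures).
    interface ArchiveService {
        Stream<String> streamEntryNames(Path archive) throws IOException;
        void transferEntryTo(Path archive, String entry, OutputStream out) throws IOException;
    }

    // Fake: "2.jpg" is listed in metadata, so the page index is in range,
    // but the entry body is missing and the transfer itself fails.
    public static final ArchiveService LISTED_BUT_MISSING = new ArchiveService() {
        public Stream<String> streamEntryNames(Path a) { return Stream.of("1.jpg", "2.jpg"); }
        public void transferEntryTo(Path a, String e, OutputStream o) throws IOException {
            throw new FileNotFoundException(e + " listed but missing from archive body");
        }
    };

    public static String streamPage(ArchiveService svc, Path archive, int page) {
        try {
            // Out-of-range pages fail here, before any transfer is attempted.
            String entry = svc.streamEntryNames(archive).sorted().skip(page - 1).findFirst()
                    .orElseThrow(() -> new IllegalArgumentException("page out of range"));
            svc.transferEntryTo(archive, entry, new ByteArrayOutputStream());
            return "ok";
        } catch (IOException e) {
            // The distinct path the second test should exercise.
            return "transfer failed: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(streamPage(LISTED_BUT_MISSING, Path.of("test.cbz"), 2));
    }
}
```

Requesting page 2 exercises the transfer-failure branch, while page 3 would still hit the out-of-range branch — the two behaviors the duplicated tests should cover separately.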
booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java (1)

66-70: Unused parameter in helper method.

The `exceptionClass` parameter is declared but never used in the method body. The method always throws `IOException.class` regardless of the parameter passed.

Suggested fix

```diff
- private Path mockRaisesException(Class exceptionClass) throws IOException {
+ private Path mockRaisesException() throws IOException {
      Path path = Path.of("test.cbz");
      when(archiveService.getEntryBytes(path, "ComicInfo.xml")).thenThrow(IOException.class);
      return path;
  }
```

Then update all call sites from `mockRaisesException(IOException.class)` to `mockRaisesException()`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java` around lines 66 - 70, The helper method mockRaisesException currently declares an unused parameter exceptionClass; remove that parameter from the signature (change mockRaisesException(Class exceptionClass) to mockRaisesException()) and keep the existing behavior that stubs archiveService.getEntryBytes(path, "ComicInfo.xml") to thenThrow(IOException.class); then update all call sites that pass IOException.class to call mockRaisesException() with no arguments so the test compiles and behavior remains the same.

booklore-api/src/main/java/org/booklore/service/kobo/CbxConversionService.java (1)
206: Minor: Log message references wrong archive type.

The log message says "CBR file" but this method handles all CBX formats (CBZ, CBR, CB7). Consider using a more generic term.

Suggested fix

```diff
- log.debug("Found {} image entries in CBR file", imagePaths.size());
+ log.debug("Found {} image entries in CBX archive", imagePaths.size());
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@booklore-api/src/main/java/org/booklore/service/kobo/CbxConversionService.java` at line 206, The log message in CbxConversionService at the log.debug call that currently says "Found {} image entries in CBR file" is misleading because the service handles CBX formats (CBZ, CBR, CB7); update the message to a generic term like "CBX file" or "CBX archive" (e.g., "Found {} image entries in CBX archive") so the log accurately reflects all supported formats while keeping the existing log.debug and imagePaths.size() usage.

booklore-api/src/main/java/org/booklore/service/reader/CbxReaderService.java (1)
240-258: Add comprehensive unit tests for natural sort ordering with multi-digit numbers.

The `sortNaturally` method is indirectly exercised through `getAvailablePages` and `getPageInfo`, but the existing test at line 59 of `CbxReaderServiceTest.java` only uses a single entry `"1.jpg"`, which doesn't actually test the sorting logic. Add a dedicated unit test with entries like `["page10.jpg", "page2.jpg", "page1.jpg"]` to verify they sort numerically as `[1, 2, 10]` rather than lexicographically.
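The numeric-versus-lexicographic property the suggested test should assert can be sketched without the service itself. The comparator below is a hypothetical standalone implementation (not the PR's actual `sortNaturally` code) that compares digit runs numerically:

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NaturalSortDemo {
    // Splits a name into alternating digit and non-digit chunks.
    private static final Pattern CHUNK = Pattern.compile("(\\d+|\\D+)");

    public static int naturalCompare(String a, String b) {
        Matcher ma = CHUNK.matcher(a), mb = CHUNK.matcher(b);
        while (ma.find() && mb.find()) {
            String ca = ma.group(), cb = mb.group();
            int cmp;
            if (Character.isDigit(ca.charAt(0)) && Character.isDigit(cb.charAt(0))) {
                // Compare digit runs as numbers, so "10" sorts after "2".
                cmp = new BigInteger(ca).compareTo(new BigInteger(cb));
            } else {
                cmp = ca.compareTo(cb);
            }
            if (cmp != 0) return cmp;
        }
        // One name is a prefix of the other: shorter sorts first.
        return Integer.compare(a.length(), b.length());
    }

    public static void main(String[] args) {
        List<String> pages = new ArrayList<>(List.of("page10.jpg", "page2.jpg", "page1.jpg"));
        pages.sort(NaturalSortDemo::naturalCompare);
        System.out.println(pages); // [page1.jpg, page2.jpg, page10.jpg]
    }
}
```

Entries like `page10.jpg` then land after `page2.jpg`, which is exactly the ordering the dedicated test would pin down.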
Verify each finding against the current code and only fix it if needed. In `@booklore-api/src/main/java/org/booklore/service/reader/CbxReaderService.java` around lines 240 - 258, Add a focused unit test in CbxReaderServiceTest that directly verifies the natural sorting behavior used by sortNaturally (and exercised by getAvailablePages/getPageInfo): provide inputs like ["page10.jpg","page2.jpg","page1.jpg"] (and variants with mixed padding/case) and assert the returned/ordered result is ["page1.jpg","page2.jpg","page10.jpg"] (numeric order, not lexicographic); implement as a standalone test method (e.g., testNaturalSortWithMultiDigitNumbers) that calls the service's sorting path (getAvailablePages or directly invokes the sorting behavior) and asserts the exact ordering.

booklore-api/src/main/java/org/booklore/service/ArchiveService.java (3)
41-58: Consider using a Java record for the Entry DTO.

The `Entry` class with private constructor and direct field access works, but a record would be more concise and provide immutability:

♻️ Suggested refactor using record

```diff
- public static class Entry {
-     private Entry() {}
-
-     @Getter
-     private String name;
-
-     @Getter
-     private long size;
- }
-
- private Entry getEntryFromArchiveEntry(ArchiveEntry archiveEntry) {
-     Entry entry = new Entry();
-
-     entry.name = archiveEntry.getName();
-     entry.size = archiveEntry.getSize();
-
-     return entry;
- }
+ public record Entry(String name, long size) {}
+
+ private Entry getEntryFromArchiveEntry(ArchiveEntry archiveEntry) {
+     return new Entry(archiveEntry.getName(), archiveEntry.getSize());
+ }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@booklore-api/src/main/java/org/booklore/service/ArchiveService.java` around lines 41 - 58, Replace the mutable inner class Entry with a Java record to make the DTO concise and immutable: change the Entry declaration to a record (e.g., record Entry(String name, long size)) and update getEntryFromArchiveEntry(ArchiveEntry archiveEntry) to construct and return the Entry record using archiveEntry.getName() and archiveEntry.getSize(); keep the method name getEntryFromArchiveEntry and the ArchiveEntry usages intact.
23-39: Consider consistent availability guards in production code paths.

The `@PostConstruct` logs availability status, and tests guard with `@EnabledIf("...#isAvailable")`, but the context snippets show production callers (e.g., `CbxProcessor`, `CbxReaderService`, `CbxConversionService`) invoke archive methods without checking `isAvailable()` first. If libarchive fails to load, production code will fail at runtime rather than gracefully degrading or failing fast.

Consider either:
- Throwing an exception in `@PostConstruct` if unavailable (fail-fast), or
- Adding guard checks in methods that throw a clear exception when unavailable

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@booklore-api/src/main/java/org/booklore/service/ArchiveService.java` around lines 23 - 39, The current checkArchiveAvailable() in ArchiveService only logs availability but production callers (CbxProcessor, CbxReaderService, CbxConversionService) invoke Archive APIs without guards; update the module to fail fast or provide explicit guards: either (A) modify checkArchiveAvailable() to throw a runtime exception when Archive.isAvailable() is false so the app aborts on startup, or (B) add a reusable guard method (e.g., ArchiveService.ensureAvailable() that calls isAvailable() and throws a clear IllegalStateException) and call that guard at the start of all archive-using entry points (methods in CbxProcessor, CbxReaderService, CbxConversionService) to surface a clear error rather than NPEs. Ensure you reference Archive.isAvailable(), ArchiveService.isAvailable(), and update checkArchiveAvailable() or add ensureAvailable() accordingly.
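Option (B) above can be sketched as follows. Class and method names here are hypothetical stand-ins for the real `ArchiveService`/`Archive.isAvailable()` pair, showing only the guard pattern:

```java
public class ArchiveGuardSketch {
    private final boolean nativeAvailable; // stands in for Archive.isAvailable()

    public ArchiveGuardSketch(boolean nativeAvailable) {
        this.nativeAvailable = nativeAvailable;
    }

    // Reusable guard: a clear, early failure instead of an NPE deep in native bindings.
    public void ensureAvailable() {
        if (!nativeAvailable) {
            throw new IllegalStateException(
                    "libarchive native library is not available; archive operations cannot proceed");
        }
    }

    // Every archive-touching entry point calls the guard first.
    public String firstEntryName() {
        ensureAvailable();
        return "ComicInfo.xml"; // placeholder for the real libarchive call
    }

    public static void main(String[] args) {
        System.out.println(new ArchiveGuardSketch(true).firstEntryName());
        try {
            new ArchiveGuardSketch(false).firstEntryName();
        } catch (IllegalStateException e) {
            System.out.println("guarded: " + e.getMessage());
        }
    }
}
```

The fail-fast variant (option A) would instead throw the same `IllegalStateException` from the `@PostConstruct` check so the application refuses to start without the native library.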
113-123: Consider using `StandardCopyOption.REPLACE_EXISTING` for clarity on file overwrite behavior.

`Files.copy(inputStream, outputPath)` will throw `FileAlreadyExistsException` if the target already exists. While the caller's broad `catch (Exception e)` implicitly handles this, explicitly passing `StandardCopyOption.REPLACE_EXISTING` would clarify intent:

```java
return Files.copy(inputStream, outputPath, StandardCopyOption.REPLACE_EXISTING);
```

Additionally, `extractEntryToPath` and `transferEntryTo` share nearly identical structure (null check, exception wrapping, "Entry not found" fallback). Extract common logic to reduce duplication.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@booklore-api/src/main/java/org/booklore/service/ArchiveService.java` around lines 113 - 123, The method extractEntryToPath currently calls Files.copy(inputStream, outputPath) which throws FileAlreadyExistsException if the target exists; update extractEntryToPath to call Files.copy(inputStream, outputPath, StandardCopyOption.REPLACE_EXISTING) to make overwrite behavior explicit, and apply the same option in transferEntryTo; then remove duplicated try-with-resources/exception-wrapping/"Entry not found" logic by extracting a private helper (e.g., a method like copyEntry(Path archivePath, String entryName, BiFunction<InputStream,Path,Long> copier) or a simpler copyEntryTo(Path, String, Path) used by both extractEntryToPath and transferEntryTo) that opens Archive.getInputStream, checks for null, wraps exceptions into IOException with the same message, and performs the Files.copy with REPLACE_EXISTING.
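A minimal, self-contained demonstration of the overwrite behavior in question (paths and data are illustrative, not the service's real extraction flow):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class ReplaceExistingDemo {
    // Copies data over a file that may already exist. Plain Files.copy(in, target)
    // would throw FileAlreadyExistsException here; REPLACE_EXISTING makes the
    // overwrite intent explicit and returns the number of bytes copied.
    public static long copyReplacing(byte[] data, Path target) throws IOException {
        try (InputStream in = new ByteArrayInputStream(data)) {
            return Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
        }
    }

    public static void main(String[] args) throws IOException {
        Path target = Files.createTempFile("entry", ".jpg"); // target already exists
        long copied = copyReplacing("image-bytes".getBytes(), target);
        System.out.println(copied); // 11
        Files.delete(target);
    }
}
```

Without the option, re-extracting an entry to the same destination would fail on the second run, which is why the review suggests stating the overwrite policy explicitly.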
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@booklore-api/build.gradle.kts`:
- Around line 89-90: The added dependency
implementation("com.github.gotson.nightcompress:nightcompress:1.1.1") is
unavailable on JitPack (status: "none") and will break Gradle resolution; either
replace that dependency with a publicly available artifact (for example revert
to the original junrar library or a published nightcompress release) by updating
the implementation(...) coordinate in build.gradle.kts, or configure access to
the private JitPack repo by adding proper repository and authentication
credentials in the Gradle settings so the dependency can be resolved.
In
`@booklore-api/src/main/java/org/booklore/service/metadata/extractor/CbxMetadataExtractor.java`:
- Around line 543-545: The code in extractCoverEntryNameFromComicInfo currently
skips pages where page.getAttribute("Type") equals "FrontCover", preventing
extraction of ImageFile/Image; change the condition so you only skip
non-FrontCover pages (i.e., remove the continue for FrontCover and invert the
check) so that pages with Type="FrontCover" are processed and their
ImageFile/Image attributes are used for cover extraction.
In
`@booklore-api/src/main/java/org/booklore/service/metadata/writer/CbxMetadataWriter.java`:
- Around line 451-472: The updateArchive method currently swallows exceptions
from Files.deleteIfExists which can leave both .cbx and .cbz files on disk while
returning null and signaling success; change updateArchive (and callers) to
surface that failure: either rethrow the delete exception (wrap as IOException)
so the overall metadata write fails, or return a clear result/flag (e.g., return
the target Path or a boolean) indicating whether original deletion succeeded and
then have BookMetadataUpdater.updateFileNameIfConverted check that flag before
updating the DB; reference updateArchive, replaceFileAtomic,
Files.deleteIfExists and BookMetadataUpdater.updateFileNameIfConverted and
ensure callers handle the propagated exception or the returned success indicator
instead of proceeding when deletion failed.
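One way to realize the "return a clear result/flag" option can be sketched as follows. The `updateArchive` below is a simplified hypothetical stand-in, not the real method's signature: it propagates the deletion failure instead of swallowing it, so callers never update the DB while both files remain on disk:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class UpdateArchiveSketch {
    // Returns the converted path only if the original archive was removed;
    // a failed deletion surfaces as an IOException rather than silent success.
    public static Path updateArchive(Path original, Path converted) throws IOException {
        try {
            Files.deleteIfExists(original); // propagate instead of swallowing
        } catch (IOException e) {
            throw new IOException(
                    "Converted to " + converted + " but could not delete " + original, e);
        }
        return converted;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("cbx");
        Path cbr = Files.createFile(dir.resolve("book.cbr"));
        Path cbz = Files.createFile(dir.resolve("book.cbz"));
        Path result = updateArchive(cbr, cbz);
        System.out.println(Files.exists(cbr));    // false: original removed
        System.out.println(result.getFileName()); // book.cbz
    }
}
```

A caller equivalent to `BookMetadataUpdater.updateFileNameIfConverted` would then update the stored file path only when this call returns normally.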
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 2bdec801-f825-4c73-aa45-94d754521b0c
📒 Files selected for processing (20)
- .github/workflows/test-suite.yml
- Dockerfile
- booklore-api/build.gradle.kts
- booklore-api/src/main/java/org/booklore/service/ArchiveService.java
- booklore-api/src/main/java/org/booklore/service/fileprocessor/CbxProcessor.java
- booklore-api/src/main/java/org/booklore/service/kobo/CbxConversionService.java
- booklore-api/src/main/java/org/booklore/service/metadata/extractor/CbxMetadataExtractor.java
- booklore-api/src/main/java/org/booklore/service/metadata/writer/CbxMetadataWriter.java
- booklore-api/src/main/java/org/booklore/service/reader/CbxReaderService.java
- booklore-api/src/main/java/org/booklore/util/UnrarHelper.java
- booklore-api/src/test/java/org/booklore/service/Rar5IntegrationTest.java
- booklore-api/src/test/java/org/booklore/service/fileprocessor/CbxProcessorTest.java
- booklore-api/src/test/java/org/booklore/service/kobo/CbxConversionIntegrationTest.java
- booklore-api/src/test/java/org/booklore/service/kobo/CbxConversionServiceTest.java
- booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java
- booklore-api/src/test/java/org/booklore/service/metadata/writer/CbxComicInfoComplianceTest.java
- booklore-api/src/test/java/org/booklore/service/metadata/writer/CbxMetadataWriterTest.java
- booklore-api/src/test/java/org/booklore/service/reader/CbxReaderServiceTest.java
- booklore-api/src/test/java/org/booklore/util/UnrarHelperIntegrationTest.java
- dev.docker-compose.yml
💤 Files with no reviewable changes (3)
- booklore-api/src/test/java/org/booklore/service/fileprocessor/CbxProcessorTest.java
- booklore-api/src/main/java/org/booklore/util/UnrarHelper.java
- booklore-api/src/test/java/org/booklore/util/UnrarHelperIntegrationTest.java
📜 Review details
🧰 Additional context used
📓 Path-based instructions (2)
booklore-api/src/**/*.java
📄 CodeRabbit inference engine (AGENTS.md)
booklore-api/src/**/*.java: Use 4-space indentation in Java files and match surrounding Java style
Prefer constructor injection via Lombok patterns already used in the codebase. Do not introduce `@Autowired` field injection
Use MapStruct for entity/DTO mapping
Files:
- booklore-api/src/test/java/org/booklore/service/metadata/writer/CbxComicInfoComplianceTest.java
- booklore-api/src/test/java/org/booklore/service/kobo/CbxConversionIntegrationTest.java
- booklore-api/src/test/java/org/booklore/service/kobo/CbxConversionServiceTest.java
- booklore-api/src/test/java/org/booklore/service/Rar5IntegrationTest.java
- booklore-api/src/test/java/org/booklore/service/metadata/writer/CbxMetadataWriterTest.java
- booklore-api/src/main/java/org/booklore/service/reader/CbxReaderService.java
- booklore-api/src/main/java/org/booklore/service/kobo/CbxConversionService.java
- booklore-api/src/main/java/org/booklore/service/fileprocessor/CbxProcessor.java
- booklore-api/src/test/java/org/booklore/service/reader/CbxReaderServiceTest.java
- booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java
- booklore-api/src/main/java/org/booklore/service/metadata/extractor/CbxMetadataExtractor.java
- booklore-api/src/main/java/org/booklore/service/ArchiveService.java
- booklore-api/src/main/java/org/booklore/service/metadata/writer/CbxMetadataWriter.java
booklore-api/src/test/**/*.java
📄 CodeRabbit inference engine (AGENTS.md)
Prefer focused unit tests; use `@SpringBootTest` only when the Spring context is required
Files:
- booklore-api/src/test/java/org/booklore/service/metadata/writer/CbxComicInfoComplianceTest.java
- booklore-api/src/test/java/org/booklore/service/kobo/CbxConversionIntegrationTest.java
- booklore-api/src/test/java/org/booklore/service/kobo/CbxConversionServiceTest.java
- booklore-api/src/test/java/org/booklore/service/Rar5IntegrationTest.java
- booklore-api/src/test/java/org/booklore/service/metadata/writer/CbxMetadataWriterTest.java
- booklore-api/src/test/java/org/booklore/service/reader/CbxReaderServiceTest.java
- booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java
🧠 Learnings (3)
📚 Learning: 2026-03-26T01:46:48.863Z
Learnt from: CR
Repo: grimmory-tools/grimmory PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-26T01:46:48.863Z
Learning: Applies to booklore-api/src/test/**/*.java : Prefer focused unit tests; use `SpringBootTest` only when the Spring context is required
Applied to files:
- booklore-api/src/test/java/org/booklore/service/metadata/writer/CbxComicInfoComplianceTest.java
- booklore-api/src/test/java/org/booklore/service/kobo/CbxConversionIntegrationTest.java
- booklore-api/src/test/java/org/booklore/service/kobo/CbxConversionServiceTest.java
- booklore-api/src/test/java/org/booklore/service/Rar5IntegrationTest.java
- booklore-api/src/test/java/org/booklore/service/metadata/writer/CbxMetadataWriterTest.java
📚 Learning: 2026-03-24T18:58:08.199Z
Learnt from: pedronave
Repo: grimmory-tools/grimmory PR: 173
File: booklore-api/src/main/java/org/booklore/service/hardcover/HardcoverSyncService.java:291-294
Timestamp: 2026-03-24T18:58:08.199Z
Learning: In grimmory-tools/grimmory (HardcoverSyncService.java), when a Hardcover edition has no page count, falling back to the book-level page count (book.pages) is the preferred behavior. Skipping the sync entirely in this case is considered worse than using a slightly approximate total, because the page count is only a proxy for converting progress % to page numbers for the API. The Hardcover API allows setting page-based progress even when the edition lacks a page count, and progress displays in at least some UI views (e.g., the book detail dropdown).
Applied to files:
booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java
📚 Learning: 2026-03-24T18:46:47.249Z
Learnt from: pedronave
Repo: grimmory-tools/grimmory PR: 173
File: booklore-api/src/main/java/org/booklore/service/hardcover/HardcoverSyncService.java:188-199
Timestamp: 2026-03-24T18:46:47.249Z
Learning: In grimmory-tools/grimmory (HardcoverSyncService.java), when a `hardcoverBookId` is stored in book metadata but is malformed (non-numeric), the preferred behavior is to return null and skip the sync rather than falling back to ISBN-based lookup. The reasoning is that an explicitly set bookId represents deliberate user intent, and silently resolving via ISBN could sync progress to a different book than intended.
Applied to files:
booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java
🔇 Additional comments (27)
Dockerfile (1)
71-76: LGTM! The `libarchive` installation, symlink for library discovery, and the `--enable-native-access=ALL-UNNAMED` JVM flag are correctly configured to enable NightCompress's native library access.

.github/workflows/test-suite.yml (1)
59-66: LGTM with a minor note on architecture. The libarchive setup correctly installs the library and makes it discoverable to Java. The hardcoded `x86_64-linux-gnu` path works because `ubuntu-latest` runners are x86_64, but if ARM runners are ever used, this would need adjustment.

booklore-api/build.gradle.kts (1)
140-144: LGTM! The `--enable-native-access=ALL-UNNAMED` flag is correctly added to the test JVM arguments, enabling FFM API access required by NightCompress.

booklore-api/src/test/java/org/booklore/service/metadata/writer/CbxComicInfoComplianceTest.java (1)
33-54: LGTM! The `@EnabledIf` annotation correctly gates test execution on `ArchiveService` availability. The direct instantiation of `ArchiveService` is appropriate here since this compliance test validates actual archive writing behavior rather than mocking the service.

booklore-api/src/test/java/org/booklore/service/kobo/CbxConversionIntegrationTest.java (1)
28-43: LGTM! The integration test is correctly updated:

- `@EnabledIf` ensures graceful skipping when libarchive is unavailable
- Direct `ArchiveService` instantiation is appropriate for integration tests
- Removal of `RarException` from the throws clause aligns with the junrar removal

booklore-api/src/test/java/org/booklore/service/kobo/CbxConversionServiceTest.java (1)
29-41: LGTM! Test setup correctly updated with the `@EnabledIf` annotation and `ArchiveService` dependency injection. The test methods properly handle the removal of `RarException` from the throws clauses.

booklore-api/src/test/java/org/booklore/service/metadata/writer/CbxMetadataWriterTest.java (1)
39-59: LGTM! Test class correctly gated with `@EnabledIf` and updated to use the new `ArchiveService` dependency.

booklore-api/src/test/java/org/booklore/service/Rar5IntegrationTest.java (2)
31-33: LGTM! The `@EnabledIf` annotation correctly references the static `ArchiveService#isAvailable` method, and the test class is appropriately named and structured as an integration test that exercises real RAR5 archive handling through the new `ArchiveService` abstraction.
63-69: LGTM! The test correctly uses `findByIdWithBookFiles(99L)`, which matches the production code in `CbxReaderService.getBookPath()` that requires eagerly fetched `bookFiles`. The service instantiations properly include the new `ArchiveService` dependency.

Also applies to: 85-93
booklore-api/src/main/java/org/booklore/service/kobo/CbxConversionService.java (2)
72-77: LGTM! Constructor injection pattern is correctly used for the `ArchiveService` dependency, following the codebase conventions.
185-209: LGTM! The extraction logic is cleanly refactored to use `ArchiveService` for entry enumeration and extraction. Error handling appropriately logs warnings for individual entry failures while continuing to process remaining entries.

booklore-api/src/main/java/org/booklore/service/reader/CbxReaderService.java (2)
40-42: LGTM! The `ArchiveService` is properly injected using Lombok's `@RequiredArgsConstructor` pattern, following the codebase's dependency injection conventions. As per coding guidelines, constructor injection via Lombok patterns is preferred.
182-192: LGTM! The image entry retrieval is cleanly refactored to use `archiveService.streamEntryNames()` with appropriate filtering and natural sorting.

booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java (1)
27-36: LGTM! Good use of `@MockitoSettings(strictness = Strictness.LENIENT)` given the test helper methods that set up mocks that may not all be exercised in every test. The test class properly uses focused unit testing with mocks rather than `@SpringBootTest`. Based on learnings, focused unit tests are preferred; `@SpringBootTest` should only be used when the Spring context is required.

booklore-api/src/test/java/org/booklore/service/reader/CbxReaderServiceTest.java (2)
31-35: LGTM! Properly uses `@Mock` for `ArchiveService` and `@InjectMocks` for `CbxReaderService`, allowing focused unit testing without requiring libarchive to be installed.
56-71: LGTM! Test correctly mocks `archiveService.streamEntryNames()` and `Files.getLastModifiedTime()` to isolate the unit under test. The static mocking pattern with try-with-resources ensures proper cleanup.

booklore-api/src/main/java/org/booklore/service/fileprocessor/CbxProcessor.java (2)
54-59: LGTM! Constructor properly injects the `ArchiveService` dependency following the established pattern in this codebase.
117-142: LGTM! The extraction logic is cleanly refactored:

- Uses `archiveService.streamEntryNames()` for entry discovery
- Filters and sorts image entries
- Iterates candidates and returns the first successfully decoded image
- Appropriately logs warnings for individual entry failures while continuing
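The filter-sort-iterate flow summarized above can be sketched in plain Java. The helper names here (`isImageEntry`, `firstImageCandidate`, the number-aware comparator) are illustrative stand-ins, not the actual `CbxProcessor` methods:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Locale;
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Stream;

public class CoverCandidateSketch {
    private static final Pattern NUMBER = Pattern.compile("\\d+");

    static boolean isImageEntry(String name) {
        String lower = name.toLowerCase(Locale.ROOT);
        return lower.endsWith(".jpg") || lower.endsWith(".jpeg") || lower.endsWith(".png");
    }

    // Order by the first embedded number so "page2" sorts before "page10".
    static Comparator<String> numberAwareOrder() {
        return Comparator.comparingLong(name -> {
            Matcher m = NUMBER.matcher(name);
            return m.find() ? Long.parseLong(m.group()) : Long.MAX_VALUE;
        });
    }

    static Optional<String> firstImageCandidate(Stream<String> entryNames) {
        return entryNames.filter(CoverCandidateSketch::isImageEntry)
                .sorted(numberAwareOrder())
                .findFirst();
    }

    public static void main(String[] args) {
        List<String> entries = List.of("ComicInfo.xml", "page10.jpg", "page2.jpg", "notes.txt");
        System.out.println(firstImageCandidate(entries.stream()).orElse("none")); // prints page2.jpg
    }
}
```

In the real code the first candidate that actually decodes wins; a decode attempt inside the loop (with a logged warning on failure) would replace the bare `findFirst()` here.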
booklore-api/src/main/java/org/booklore/service/metadata/extractor/CbxMetadataExtractor.java (2)
43-47: LGTM! Constructor properly injects `ArchiveService` following the established pattern.
500-511: LGTM! The cover extraction logic uses a clean Stream pipeline with lazy evaluation via `Supplier`. It tries ComicInfo-based extraction first, then falls back to filename heuristics, and finally generates a placeholder if no valid cover is found.
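That kind of lazy, ordered fallback can be illustrated with plain JDK streams. The strategy methods below are hypothetical stand-ins, not the real `CbxMetadataExtractor` helpers:

```java
import java.util.Optional;
import java.util.function.Supplier;
import java.util.stream.Stream;

public class LazyFallback {
    // Hypothetical strategies standing in for the real extractor helpers.
    static Optional<byte[]> fromComicInfo() { return Optional.empty(); }
    static Optional<byte[]> fromFilenameHeuristics() { return Optional.of(new byte[]{1, 2}); }
    static byte[] placeholder() { return new byte[0]; }

    public static void main(String[] args) {
        // findFirst() pulls elements lazily, so later suppliers only run
        // when earlier strategies produced nothing.
        byte[] cover = Stream.<Supplier<Optional<byte[]>>>of(
                        LazyFallback::fromComicInfo,
                        LazyFallback::fromFilenameHeuristics)
                .map(Supplier::get)
                .flatMap(Optional::stream)
                .findFirst()
                .orElseGet(LazyFallback::placeholder);
        System.out.println(cover.length); // prints 2
    }
}
```

Wrapping each strategy in a `Supplier` is what keeps the pipeline lazy: an expensive fallback is never evaluated if an earlier one already produced a cover.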
56-56: LGTM!
ArchiveServiceis properly injected via Lombok's@RequiredArgsConstructorpattern.
115-139: LGTM!The
loadOrCreateComicInfomethod cleanly delegates toArchiveServicefor entry discovery and byte reading, with appropriate fallback to a newComicInfoobject when the entry is missing or unreadable.
546-575: LGTM!The archive rebuild logic correctly:
- Iterates source entries via
archiveService.getEntryNames()- Skips existing ComicInfo.xml entries
- Validates path safety to prevent traversal attacks
- Copies entries via
archiveService.transferEntryTo()- Adds the new ComicInfo.xml entry at the end
585-598: LGTM!The
replaceFileExtensionutility correctly handles the case where the file already has the target extension, avoiding unnecessary renaming.booklore-api/src/main/java/org/booklore/service/ArchiveService.java (3)
64-86: LGTM!Stream methods correctly wrap
LibArchiveExceptionasIOException. SinceArchive.getEntries()returns a materialized list, the returned stream doesn't require explicit closing.
88-101: LGTM!Resource handling is correct with try-with-resources. The null check and subsequent exception for missing entries provides clear error reporting.
103-111: LGTM!Clean composition using
transferEntryTofor the implementation.
before it was basically useless because of fallback
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java (1)
1157-1164: ⚠️ Potential issue | 🟡 Minor

Use a real PNG payload here. `new byte[]{}` can never pass `canDecode()`, so this test only verifies the placeholder fallback rather than PNG support. A tiny serialized PNG would make this assertion meaningful.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java` around lines 1157 - 1164, The test recognizesPngExtension in CbxMetadataExtractorTest currently uses an empty byte array which cannot be decoded; replace the placeholder new byte[]{} with a minimal valid PNG byte sequence (e.g., the standard 8-byte PNG signature plus a tiny IHDR/IDAT/ IEND sequence) or load a small test PNG fixture so that extractor.extractCover(cbzPath.toFile()) exercises real PNG decoding (keep references to mockArchiveContents and extractor.extractCover to locate the change).
♻️ Duplicate comments (1)
booklore-api/src/main/java/org/booklore/service/metadata/writer/CbxMetadataWriter.java (1)
72-75: ⚠️ Potential issue | 🟠 Major

Return the converted `.cbz` path to the caller. For non-CBZ inputs this writes a sibling `.cbz`, may delete the original, and still returns `null` while `saveMetadataToFile()` returns `void`. `MetadataManagementService.java:73-76` hashes `book.getFullFilePath()` immediately after this call, so a successful conversion can leave the caller hashing a deleted file or persisting the old path. Swallowing `deleteIfExists()` failures here makes the inconsistency even harder to detect. Return `targetPath` (or a result object) and fail the write when cleanup is incomplete.

Also applies to: 449-469
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@booklore-api/src/main/java/org/booklore/service/metadata/writer/CbxMetadataWriter.java` around lines 72 - 75, CbxMetadataWriter currently writes a sibling .cbz for non-CBZ input but still returns null and swallows delete failures; change the writer to return the resulting targetPath (the .cbz path) rather than null so callers (e.g., MetadataManagementService) can use the new file path, and make cleanup failures propagate as errors instead of being ignored. Concretely, update the method signature that calls updateArchive/write flow (the method in CbxMetadataWriter that currently sets writeSucceeded and returns null) to return the Path/String targetPath (or a small result object), ensure updateArchive(file, xmlContent) usage returns the tempArchive/targetPath and that any Files.deleteIfExists(original) failure throws or causes the method to fail (do not swallow exceptions), and adjust callers to handle the returned path.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@booklore-api/src/main/java/org/booklore/service/metadata/extractor/CbxMetadataExtractor.java`:
- Around line 553-562: In CbxMetadataExtractor where you parse Page@Image (the
block using page.getAttribute("Image")), the current fallback only uses index-1
when the original index is out of range which misselects for 1-based archives
when both indices are valid; change the logic to try both conventions: parse the
image index, add entryNames.get(index) if index is within range, and also add
entryNames.get(index-1) if index-1 is within range and it refers to a different
entry (so both possibleCoverImages and entryNames are checked and you avoid
duplicates); reference the variables/methods image, index, entryNames,
possibleCoverImages and page.getAttribute("Image") when implementing.
- Around line 57-64: The code in CbxMetadataExtractor calls
buildSecureDocument(is) even when findComicInfoEntryInputStream(path) returns
null, causing a warning for archives without ComicInfo.xml; change the flow to
short-circuit: after obtaining InputStream is from
findComicInfoEntryInputStream(path) check if is == null and immediately return
the filename fallback (BookMetadata.builder().title(baseName).build()) before
calling buildSecureDocument or mapDocumentToMetadata, keeping the existing
try/catch for real parsing errors and only logging exceptions from
buildSecureDocument/mapDocumentToMetadata.
In
`@booklore-api/src/main/java/org/booklore/service/metadata/writer/CbxMetadataWriter.java`:
- Around line 545-569: The code currently writes xmlEntryName returned by
findComicInfoEntryName() without validating it, allowing traversal names like
'../../ComicInfo.xml'; update the CbxMetadataWriter write logic to check
isPathSafe(xmlEntryName) (the same predicate used earlier) and if it returns
false or xmlEntryName is null, replace it with
CbxMetadataWriter.DEFAULT_COMICINFO_XML before calling zipOutput.putNextEntry;
ensure the validated/sanitized name is what you pass to ZipEntry and used for
writing the ComicInfo XML so untrusted archives cannot inject unsafe paths.
In
`@booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java`:
- Around line 920-930: The test extractsCoverFromFirstAlphabeticalImage
currently only asserts a non-null cover, so it can pass for placeholders; update
the test to build an expected byte[] from the known entry "page001.jpg" (e.g.,
call createMinimalJpeg(2) or obtain the bytes used in mockArchiveContents) and
assert that extractor.extractCover(cbzPath) equals that expected byte[] to
ensure the alphabetical selection logic in
extractsCoverFromFirstAlphabeticalImage (and method extractor.extractCover) is
verified.
- Around line 73-79: mockComicInfo currently only exposes "ComicInfo.xml" so
extractCover falls back to generatePlaceholderCover; update the test to include
the actual image entry and bytes in the archiveService stubs. Specifically, in
mockComicInfo add "path_001.jpg" (or the expected image entry name) to the
stream returned by archiveService.streamEntryNames(path) and stub
archiveService.getEntryBytes(path, "path_001.jpg") to return realistic image
bytes; this ensures extractCover will find and extract the image instead of
returning the placeholder. Reference: mockComicInfo,
archiveService.streamEntryNames, archiveService.getEntryBytes, extractCover,
generatePlaceholderCover, and "path_001.jpg".
- Around line 67-70: The test helper mockRaisesException currently only stubs
archiveService.getEntryBytes(...) so archiveService.streamEntryNames(path)
returns null and causes an NPE before getEntryBytes is invoked; update
mockRaisesException to also stub archiveService.streamEntryNames(path) to return
a collection/stream containing "ComicInfo.xml" (so the extractor will look up
that entry) and keep the getEntryBytes(path, "ComicInfo.xml") stub to throw the
intended IOException (preferably throw new IOException("...") to include
details) — change references inside mockRaisesException to call
when(archiveService.streamEntryNames(path)).thenReturn(List.of("ComicInfo.xml"))
and keep the when(...getEntryBytes(...)).thenThrow(...) for the exception path.
---
Outside diff comments:
In
`@booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java`:
- Around line 1157-1164: The test recognizesPngExtension in
CbxMetadataExtractorTest currently uses an empty byte array which cannot be
decoded; replace the placeholder new byte[]{} with a minimal valid PNG byte
sequence (e.g., the standard 8-byte PNG signature plus a tiny IHDR/IDAT/ IEND
sequence) or load a small test PNG fixture so that
extractor.extractCover(cbzPath.toFile()) exercises real PNG decoding (keep
references to mockArchiveContents and extractor.extractCover to locate the
change).
---
Duplicate comments:
In
`@booklore-api/src/main/java/org/booklore/service/metadata/writer/CbxMetadataWriter.java`:
- Around line 72-75: CbxMetadataWriter currently writes a sibling .cbz for
non-CBZ input but still returns null and swallows delete failures; change the
writer to return the resulting targetPath (the .cbz path) rather than null so
callers (e.g., MetadataManagementService) can use the new file path, and make
cleanup failures propagate as errors instead of being ignored. Concretely,
update the method signature that calls updateArchive/write flow (the method in
CbxMetadataWriter that currently sets writeSucceeded and returns null) to return
the Path/String targetPath (or a small result object), ensure
updateArchive(file, xmlContent) usage returns the tempArchive/targetPath and
that any Files.deleteIfExists(original) failure throws or causes the method to
fail (do not swallow exceptions), and adjust callers to handle the returned
path.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 36f3a0ca-b94e-40eb-801f-2cf31e6486d1
📒 Files selected for processing (4)
booklore-api/src/main/java/org/booklore/service/ArchiveService.javabooklore-api/src/main/java/org/booklore/service/metadata/extractor/CbxMetadataExtractor.javabooklore-api/src/main/java/org/booklore/service/metadata/writer/CbxMetadataWriter.javabooklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Packaging Smoke Test
🧰 Additional context used
📓 Path-based instructions (2)
booklore-api/src/**/*.java
📄 CodeRabbit inference engine (AGENTS.md)
booklore-api/src/**/*.java: Use 4-space indentation in Java files and match surrounding Java style
Prefer constructor injection via Lombok patterns already used in the codebase. Do not introduce `@Autowired` field injection
Use MapStruct for entity/DTO mapping
Files:
booklore-api/src/main/java/org/booklore/service/metadata/writer/CbxMetadataWriter.javabooklore-api/src/main/java/org/booklore/service/ArchiveService.javabooklore-api/src/main/java/org/booklore/service/metadata/extractor/CbxMetadataExtractor.javabooklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java
booklore-api/src/test/**/*.java
📄 CodeRabbit inference engine (AGENTS.md)
Prefer focused unit tests; use
@SpringBootTestonly when the Spring context is required
Files:
booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java
🧠 Learnings (3)
📚 Learning: 2026-03-24T18:46:47.249Z
Learnt from: pedronave
Repo: grimmory-tools/grimmory PR: 173
File: booklore-api/src/main/java/org/booklore/service/hardcover/HardcoverSyncService.java:188-199
Timestamp: 2026-03-24T18:46:47.249Z
Learning: In grimmory-tools/grimmory (HardcoverSyncService.java), when a `hardcoverBookId` is stored in book metadata but is malformed (non-numeric), the preferred behavior is to return null and skip the sync rather than falling back to ISBN-based lookup. The reasoning is that an explicitly set bookId represents deliberate user intent, and silently resolving via ISBN could sync progress to a different book than intended.
Applied to files:
booklore-api/src/main/java/org/booklore/service/metadata/writer/CbxMetadataWriter.javabooklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java
📚 Learning: 2026-03-26T01:46:48.863Z
Learnt from: CR
Repo: grimmory-tools/grimmory PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-26T01:46:48.863Z
Learning: Applies to booklore-api/src/**/*.java : Prefer constructor injection via Lombok patterns already used in the codebase. Do not introduce `Autowired` field injection
Applied to files:
booklore-api/src/main/java/org/booklore/service/metadata/extractor/CbxMetadataExtractor.java
📚 Learning: 2026-03-24T18:58:08.199Z
Learnt from: pedronave
Repo: grimmory-tools/grimmory PR: 173
File: booklore-api/src/main/java/org/booklore/service/hardcover/HardcoverSyncService.java:291-294
Timestamp: 2026-03-24T18:58:08.199Z
Learning: In grimmory-tools/grimmory (HardcoverSyncService.java), when a Hardcover edition has no page count, falling back to the book-level page count (book.pages) is the preferred behavior. Skipping the sync entirely in this case is considered worse than using a slightly approximate total, because the page count is only a proxy for converting progress % to page numbers for the API. The Hardcover API allows setting page-based progress even when the edition lacks a page count, and progress displays in at least some UI views (e.g., the book detail dropdown).
Applied to files:
booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java
  String comicInfoEntryName = findComicInfoEntryName(sourceArchive);

  try (
      ZipOutputStream zipOutput = new ZipOutputStream(Files.newOutputStream(targetZip))
  ) {
      for (String entryName : archiveService.getEntryNames(sourceArchive)) {
          if (isComicInfoXml(entryName)) {
              // Skip copying over any existing comic info entry
              continue;
          }

          if (!isPathSafe(entryName)) {
-             log.warn("Skipping unsafe ZIP entry name: {}", entryName);
+             log.warn("Skipping unsafe CBZ entry name: {}", entryName);
              continue;
          }

          zipOutput.putNextEntry(new ZipEntry(entryName));
-         try (InputStream entryStream = zipFile.getInputStream(entry)) {
-             copyStream(entryStream, zipOutput);
-         }
+         archiveService.transferEntryTo(sourceArchive, entryName, zipOutput);

          zipOutput.closeEntry();
      }

-     String xmlEntryName = (existingXml != null ? existingXml.getName() : "ComicInfo.xml");
+     String xmlEntryName = (comicInfoEntryName != null ? comicInfoEntryName : CbxMetadataWriter.DEFAULT_COMICINFO_XML);
      zipOutput.putNextEntry(new ZipEntry(xmlEntryName));
Don't reuse an unsafe ComicInfo.xml entry name.
`findComicInfoEntryName()` accepts any entry that ends with `/ComicInfo.xml`, including traversal paths like `../../ComicInfo.xml`. The copy loop skips unsafe entries, but line 568 writes `xmlEntryName` back without `isPathSafe()`, so a malicious source archive can still produce a CBZ with a traversal entry.
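The `isPathSafe` predicate referenced here is not shown in the diff. A minimal sketch, assuming the goal is rejecting absolute paths and `..` escapes (the real check in `CbxMetadataWriter` may differ), could be:

```java
import java.nio.file.Path;

public class PathSafety {
    // Illustrative only: the real isPathSafe in CbxMetadataWriter may differ.
    static boolean isPathSafe(String entryName) {
        if (entryName.isEmpty() || entryName.startsWith("/") || entryName.contains("\\")) {
            return false;
        }
        // After normalization, a remaining ".." prefix means the entry escapes the root.
        Path normalized = Path.of(entryName).normalize();
        return !normalized.startsWith("..") && !normalized.toString().isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(isPathSafe("ComicInfo.xml"));       // prints true
        System.out.println(isPathSafe("../../ComicInfo.xml")); // prints false
        System.out.println(isPathSafe("/etc/passwd"));         // prints false
    }
}
```

Normalizing first matters: a name like `a/../ComicInfo.xml` is harmless after normalization, while `../../ComicInfo.xml` still starts with `..` and is rejected.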
🔒 Suggested change
- String xmlEntryName = (comicInfoEntryName != null ? comicInfoEntryName : CbxMetadataWriter.DEFAULT_COMICINFO_XML);
+ String xmlEntryName = (comicInfoEntryName != null && isPathSafe(comicInfoEntryName))
+ ? comicInfoEntryName
+ : CbxMetadataWriter.DEFAULT_COMICINFO_XML;🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@booklore-api/src/main/java/org/booklore/service/metadata/writer/CbxMetadataWriter.java`
around lines 545 - 569, The code currently writes xmlEntryName returned by
findComicInfoEntryName() without validating it, allowing traversal names like
'../../ComicInfo.xml'; update the CbxMetadataWriter write logic to check
isPathSafe(xmlEntryName) (the same predicate used earlier) and if it returns
false or xmlEntryName is null, replace it with
CbxMetadataWriter.DEFAULT_COMICINFO_XML before calling zipOutput.putNextEntry;
ensure the validated/sanitized name is what you pass to ZipEntry and used for
writing the ComicInfo XML so untrusted archives cannot inject unsafe paths.
Is a traversal something we care about? This is in the context of the archive.
@imnotjames — Yes, it does matter, though the danger is in the output ZIP rather than the current write call itself.
Consider a crafted source archive whose ComicInfo entry is named `../../ComicInfo.xml`. Here's what happens:

1. `isComicInfoXml("../../ComicInfo.xml")` normalises to `../../comicinfo.xml`, which matches `.endsWith("/comicinfo.xml")` and returns `true`.
2. The copy loop therefore skips it (correct), but `comicInfoEntryName` now holds `"../../ComicInfo.xml"`.
3. Line 568 writes it back to the output `ZipOutputStream` without `isPathSafe()`, embedding the traversal path into the newly produced CBZ.
The resulting CBZ is a well-formed ZIP with a path-traversal entry baked in. When any tool or user later extracts that file, it can write ComicInfo.xml two directories above the extraction target — the classic Zip Slip vulnerability. The application itself is also at risk when it re-reads the archive later.
The one-liner fix keeps the name only when it is safe, otherwise falling back to the constant:
String xmlEntryName = (comicInfoEntryName != null && isPathSafe(comicInfoEntryName))
        ? comicInfoEntryName
        : CbxMetadataWriter.DEFAULT_COMICINFO_XML;
Yes, that can be used as a zip-slip - that wasn't what I was referring to. There's some pretty significant caveats and context to all of this.
If you use a vulnerable version of a library, which we aren't, and if a user uploads an unsafe zip file, then we would indeed create a possibly unsafe zip file, for that user who uploaded the vulnerable file.
Which we'd already have, because we aren't taking arbitrary user paths for the comicinfo; we're taking the files from a zip and creating a copy of the zip. The same problem applies to literally every other file in the zips that we are recreating - which this doesn't approach any resolution for.
If we want to create canonical paths for everything we can, but not only is this possibly a breaking change for something that was not introduced by this code, we'd be silently mutating the structure of the zip. It's also a breaking change to fix a vulnerability that was patched in most systems more than 8 years ago.
I am not asking the AI, "is this something we care about". I am asking maintainers.
If we were to want to fix this, I would suggest a separate issue.
@imnotjames — Understood, and the points are well-taken. The inconsistency is real: sanitising only the ComicInfo.xml entry name while leaving every other entry name untouched would give a false sense of safety without actually closing the surface, and silently renaming entries could break readers that rely on the original paths. I'll defer to the maintainers on whether a holistic path-canonicalisation pass is worth pursuing separately. Withdrawing the suggestion.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java (1)
121-127: ⚠️ Potential issue | 🟡 Minor

`fallsBackToFilenameWhenNoComicInfo` is currently testing archive I/O failure, not missing ComicInfo. Using `mockRaisesException()` here duplicates the corrupt-file path and leaves the true "ComicInfo entry absent" case uncovered.

🔧 Suggested test adjustment

 @Test
 void fallsBackToFilenameWhenNoComicInfo() throws IOException {
-    Path cbz = mockRaisesException();
+    Path cbz = mockArchiveContents(Map.of(
+        "page001.jpg", createMinimalJpeg(1),
+        "readme.txt", "no comic info".getBytes()
+    ));

     BookMetadata metadata = extractor.extractMetadata(cbz);

     assertThat(metadata.getTitle()).isEqualTo("test");
 }
Verify each finding against the current code and only fix it if needed. In `@booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java` around lines 121 - 127, The test fallsBackToFilenameWhenNoComicInfo incorrectly uses mockRaisesException() (which simulates a corrupt archive) instead of the missing-ComicInfo case; update the test to use a helper that returns a valid CBZ without a ComicInfo.xml entry (e.g., rename or add mockCbzWithoutComicInfo()/mockWithoutComicInfo()) and use that Path when calling extractor.extractMetadata(cbz) in the fallsBackToFilenameWhenNoComicInfo test so the extractor's fallback-to-filename behavior is exercised and the assertion assertThat(metadata.getTitle()).isEqualTo("test") still holds.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In
`@booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java`:
- Around line 121-127: The test fallsBackToFilenameWhenNoComicInfo incorrectly
uses mockRaisesException() (which simulates a corrupt archive) instead of the
missing-ComicInfo case; update the test to use a helper that returns a valid CBZ
without a ComicInfo.xml entry (e.g., rename or add
mockCbzWithoutComicInfo()/mockWithoutComicInfo()) and use that Path when calling
extractor.extractMetadata(cbz) in the fallsBackToFilenameWhenNoComicInfo test so
the extractor's fallback-to-filename behavior is exercised and the assertion
assertThat(metadata.getTitle()).isEqualTo("test") still holds.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 68652ae2-75d2-4554-8b2f-b46156a67e52
📒 Files selected for processing (2)
booklore-api/src/main/java/org/booklore/service/metadata/extractor/CbxMetadataExtractor.javabooklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java
🚧 Files skipped from review as they are similar to previous changes (1)
- booklore-api/src/main/java/org/booklore/service/metadata/extractor/CbxMetadataExtractor.java
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Packaging Smoke Test
🧰 Additional context used
📓 Path-based instructions (2)
booklore-api/src/**/*.java
📄 CodeRabbit inference engine (AGENTS.md)
booklore-api/src/**/*.java: Use 4-space indentation in Java files and match surrounding Java style
Prefer constructor injection via Lombok patterns already used in the codebase. Do not introduce@Autowiredfield injection
Use MapStruct for entity/DTO mapping
Files:
booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java
booklore-api/src/test/**/*.java
📄 CodeRabbit inference engine (AGENTS.md)
Prefer focused unit tests; use
@SpringBootTestonly when the Spring context is required
Files:
booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java
🧠 Learnings (2)
📚 Learning: 2026-03-24T18:58:08.199Z
Learnt from: pedronave
Repo: grimmory-tools/grimmory PR: 173
File: booklore-api/src/main/java/org/booklore/service/hardcover/HardcoverSyncService.java:291-294
Timestamp: 2026-03-24T18:58:08.199Z
Learning: In grimmory-tools/grimmory (HardcoverSyncService.java), when a Hardcover edition has no page count, falling back to the book-level page count (book.pages) is the preferred behavior. Skipping the sync entirely in this case is considered worse than using a slightly approximate total, because the page count is only a proxy for converting progress % to page numbers for the API. The Hardcover API allows setting page-based progress even when the edition lacks a page count, and progress displays in at least some UI views (e.g., the book detail dropdown).
Applied to files:
booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java
📚 Learning: 2026-03-24T18:46:47.249Z
Learnt from: pedronave
Repo: grimmory-tools/grimmory PR: 173
File: booklore-api/src/main/java/org/booklore/service/hardcover/HardcoverSyncService.java:188-199
Timestamp: 2026-03-24T18:46:47.249Z
Learning: In grimmory-tools/grimmory (HardcoverSyncService.java), when a `hardcoverBookId` is stored in book metadata but is malformed (non-numeric), the preferred behavior is to return null and skip the sync rather than falling back to ISBN-based lookup. The reasoning is that an explicitly set bookId represents deliberate user intent, and silently resolving via ISBN could sync progress to a different book than intended.
Applied to files:
booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java
🔇 Additional comments (2)
booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java (2)
27-36: Nice shift to focused unit tests with `ArchiveService` mocking.

This keeps the suite deterministic and avoids filesystem coupling while exercising extractor logic directly.

As per coding guidelines: `booklore-api/src/test/**/*.java`: Prefer focused unit tests; use `@SpringBootTest` only when the Spring context is required.

900-1012: Cover extraction assertions are much stronger now.

Great improvement validating exact selected bytes (named cover, alphabetical fallback, front-cover page hints, and filtered entries) instead of only non-null checks.
🧹 Nitpick comments (2)
booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java (2)
82-96: Stream stubs may fail if consumed multiple times.

`mockEmptyArchive` (line 84) and `mockComicInfo` (line 93) use `thenReturn(Stream.of(...))`, which returns the same stream instance on every call. After the first consumption, subsequent calls get an exhausted stream. This is inconsistent with `mockArchiveContents` (line 66), which correctly uses `then((i) -> ...)` to create a fresh stream per invocation. If the extractor ever evolves to call `streamEntryNames()` more than once, these stubs will silently break.

♻️ Align with mockArchiveContents pattern

```diff
 private Path mockEmptyArchive() throws IOException {
     Path path = Path.of("test.cbz");
-    when(archiveService.streamEntryNames(path)).thenReturn(Stream.empty());
+    when(archiveService.streamEntryNames(path)).thenAnswer(i -> Stream.empty());
     when(archiveService.getEntryBytes(path, any())).thenThrow(IOException.class);
     return path;
 }

 private Path mockComicInfo(String innerXml) throws IOException {
     Path path = Path.of("test.cbz");
     String xml = wrapInComicInfo(innerXml);
     when(archiveService.getEntryBytes(path, "ComicInfo.xml")).thenReturn(xml.getBytes());
-    when(archiveService.streamEntryNames(path)).thenReturn(Stream.of("ComicInfo.xml"));
+    when(archiveService.streamEntryNames(path)).thenAnswer(i -> Stream.of("ComicInfo.xml"));
     return path;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java` around lines 82 - 96, The stubs mockEmptyArchive and mockComicInfo return the same Stream instance via thenReturn, which will be exhausted after one use; change them to return a fresh stream per invocation (same pattern as mockArchiveContents) by replacing thenReturn(Stream.of(...)) with a thenAnswer / then(invocation -> Stream.of(...)) that builds a new Stream each call for archiveService.streamEntryNames(path), keeping the existing getEntryBytes stubs unchanged.
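The exhausted-stream failure mode this nitpick warns about is plain `java.util.stream` behavior and can be shown without Mockito. A minimal stdlib sketch: the cached stream stands in for `thenReturn(Stream.of(...))`, the `Supplier` for the `thenAnswer` fix; the Mockito stubbing itself is not reproduced here.

```java
import java.util.function.Supplier;
import java.util.stream.Stream;

public class StreamStubDemo {
    public static void main(String[] args) {
        // One cached Stream instance, like thenReturn(Stream.of(...)):
        Stream<String> cached = Stream.of("ComicInfo.xml");
        System.out.println(cached.count()); // first consumption works, prints 1
        try {
            cached.count(); // second consumption throws
        } catch (IllegalStateException e) {
            // Streams are single-use; reuse fails at runtime
            System.out.println("exhausted: " + e.getMessage());
        }

        // A fresh Stream per call, like thenAnswer(i -> Stream.of(...)):
        Supplier<Stream<String>> fresh = () -> Stream.of("ComicInfo.xml");
        System.out.println(fresh.get().count()); // prints 1
        System.out.println(fresh.get().count()); // still prints 1 on repeat calls
    }
}
```

This is why the `thenAnswer` form is safe against future extractor changes: each invocation gets a brand-new stream.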
1156-1178: Normalize `extractCover()` calls to use `Path` directly.

Lines 1163 and 1175 use `extractor.extractCover(cbzPath.toFile())`, creating an unnecessary Path → File → Path conversion. All other calls in this test file (10 instances) pass `Path` directly. Since `CbxMetadataExtractor.extractCover(Path)` is the primary implementation, remove the `.toFile()` conversions for consistency.

Suggested change

```diff
 @Test
 void recognizesJpgExtension() throws IOException {
     byte[] expected = createMinimalJpeg(1);
     Path cbzPath = mockArchiveContents(Map.of(
             "image.jpg", expected
     ));
-    byte[] actual = extractor.extractCover(cbzPath.toFile());
+    byte[] actual = extractor.extractCover(cbzPath);
     assertThat(actual).isEqualTo(expected);
 }

 @Test
 void recognizesPngExtension() throws IOException {
     byte[] expected = createMinimalPng();
     Path cbzPath = mockArchiveContents(Map.of(
             "image.png", expected
     ));
-    byte[] actual = extractor.extractCover(cbzPath.toFile());
+    byte[] actual = extractor.extractCover(cbzPath);
     assertThat(actual).isEqualTo(expected);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java` around lines 1156 - 1178, The two tests call extractor.extractCover(cbzPath.toFile()), causing unnecessary Path→File→Path conversions; update both tests to call extractor.extractCover(cbzPath) so they pass a Path directly to the CbxMetadataExtractor.extractCover(Path) method for consistency with the other tests (replace occurrences in the recognizesJpgExtension and recognizesPngExtension test methods).
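The redundancy of that round-trip can be seen with the stdlib types alone — a trivial sketch; that the `File`-accepting overload simply converts back to `Path` internally is an assumption for illustration:

```java
import java.io.File;
import java.nio.file.Path;

public class PathRoundTrip {
    public static void main(String[] args) {
        Path p = Path.of("test.cbz");
        File f = p.toFile();     // the conversion the two tests perform
        Path back = f.toPath();  // what a File overload would presumably undo it with
        System.out.println(p.equals(back)); // prints true: the detour adds nothing
    }
}
```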
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: c63d5f80-ba6e-48b3-a7a7-756fa9f1b11a
📒 Files selected for processing (1)
booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Test Suite / Backend Tests
- GitHub Check: Test Suite / Frontend Tests
- GitHub Check: Analyze (java-kotlin)
🔇 Additional comments (2)
booklore-api/src/test/java/org/booklore/service/metadata/extractor/CbxMetadataExtractorTest.java (2)
28-37: LGTM! Good refactoring to focused unit tests using Mockito without `@SpringBootTest`. Constructor injection via `new CbxMetadataExtractor(archiveService)` properly tests the class in isolation. The `LENIENT` strictness is reasonable given the shared helper methods that set up stubs not all tests may use.

905-1044: Good coverage of cover extraction edge cases.

The `CoverExtraction` tests now properly assert exact byte arrays (e.g., `assertThat(actual).isEqualTo(expected)`) instead of just non-null checks, addressing the prior concern about placeholder fallback masking selection logic bugs. The tests cover:

- Alphabetical ordering
- `cover.jpg` precedence
- `FrontCover` page element (both by `ImageFile` and by index)
- `__MACOSX` and dot-file filtering
- Error/corruption fallback
Actionable comments posted: 1
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@dev.docker-compose.yml`:
- Line 6: Remove the unused unrar Docker stage and binary copy from
dev.docker-compose.yml: delete the build stage declaration "FROM
linuxserver/unrar:7.1.10 AS unrar-layer" and the corresponding "COPY
--from=unrar-layer /usr/bin/unrar-alpine /usr/local/bin/unrar" line so the image
no longer includes the unrar layer or copies the unrar binary; this aligns the
Docker config with ArchiveService.java and build.gradle.kts which now use
NightCompress and eliminates the dead, non-free unrar dependency.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7a7e3c19-a058-42f0-b18e-dac2e64e1f4a
📒 Files selected for processing (1)
dev.docker-compose.yml
📜 Review details
🔇 Additional comments (1)
dev.docker-compose.yml (1)
8-9: LGTM! libarchive setup is consistent with the production Dockerfile.

The `libarchive` installation and symlink match the production Dockerfile configuration (lines 71-75), ensuring dev/prod parity for NightCompress library loading via JNA.
No, this is the way, I think. There are a few things we need to rip the band aid off on, ASAP. As for the parser, I would leave that as is for now. It has several problems, the fact that it's deeply integrated into Extractor is partly one, but there are others too. I think I mentioned this for EPUBs, but the same goes for PDFs and a lot of CBX as well. I'd like to move a lot of the more general file manipulation tasks down to their own packages and publish them to Maven Central, instead of doing weird things here. For PDFs, we are already mostly done with #281, though that PR still needs a few hours of work. But "mostly done" is an apt description. |
Description
Refactors CBX (comic book formats) to use NightCompress instead of a variety of other tools, including the `unrar` binary, sevenzip helpers, and junrar. This simplifies the code paths so that each should be more consistent.

This also drops updating metadata in a `CBR` via the rar CLI tool and instead always converts everything to `CBZ`, so that when updating `CB7` and `CBR` we can follow the same code path.

Updated the testing suite to validate this, and also tested manually by uploading a variety of CBZ, CBR, and CB7 files.
Linked Issue: Fixes #29
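The "always convert to CBZ" direction described above can be sketched with the JDK's zip support alone, since CBZ is just a zip with a different extension. This is a minimal illustration, not the PR's actual `ArchiveService` API: the `writeCbz` helper and the inlined sample entries are hypothetical, and in the real flow the entry bytes would come from reading the source CBR/CB7 through libarchive.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class CbzRepackSketch {

    // Hypothetical helper: pack already-extracted entries (name -> bytes) into a CBZ.
    static void writeCbz(Path target, Map<String, byte[]> entries) throws IOException {
        try (OutputStream out = Files.newOutputStream(target);
             ZipOutputStream zip = new ZipOutputStream(out)) {
            for (Map.Entry<String, byte[]> e : entries.entrySet()) {
                zip.putNextEntry(new ZipEntry(e.getKey()));
                zip.write(e.getValue());
                zip.closeEntry();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Stand-ins for entries read from a source CBR/CB7 via the archive layer.
        Map<String, byte[]> entries = new LinkedHashMap<>();
        entries.put("ComicInfo.xml", "<ComicInfo/>".getBytes(StandardCharsets.UTF_8));
        entries.put("page-001.jpg", new byte[] {(byte) 0xFF, (byte) 0xD8, (byte) 0xFF, (byte) 0xD9});

        Path cbz = Files.createTempFile("converted", ".cbz");
        writeCbz(cbz, entries);
        System.out.println(Files.size(cbz) > 0); // prints true
    }
}
```

The write side never needs native code: only reading RAR/7z input requires libarchive, which is why unifying on CBZ output lets the update path stay pure JDK.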
Changes
- Added an `ArchiveService` to consolidate archive reading.
- Updated `CbxMetadataExtractorTest` and `CbxReaderServiceTest` to not touch the filesystem.
- Replaced the `unrar` bin to instead use NightCompress.
- Added `@EnabledIf` annotations to disable tests that use NightCompress unless it's available.
Refactor
Chores
Tests