@@ -0,0 +1,29 @@
package org.booklore.config.security.filter;

import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

import java.io.IOException;

/**
* Enables SharedArrayBuffer by setting Cross-Origin isolation headers.
* Required by pdfium WASM (Emscripten-compiled with thread support) used by EmbedPDF.
* Without these headers the WASM module stalls on instantiation.
*/
@Component
@Order(1)
public class SharedArrayBufferHeaderFilter extends OncePerRequestFilter {

@Override
protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain)
throws ServletException, IOException {
response.setHeader("Cross-Origin-Opener-Policy", "same-origin");
response.setHeader("Cross-Origin-Embedder-Policy", "credentialless");
filterChain.doFilter(request, response);
Comment on lines +25 to +27

Contributor

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain
🌐 Web query:

For browser COEP/COOP behavior: What are the practical effects of applying Cross-Origin-Embedder-Policy: credentialless and Cross-Origin-Opener-Policy: same-origin to all HTTP responses (including API endpoints), and is route-scoping recommended?

💡 Result:

Applying Cross-Origin-Embedder-Policy: credentialless (COEP) and Cross-Origin-Opener-Policy: same-origin (COOP) to all HTTP responses, including API endpoints, enables cross-origin isolation for the documents served by those responses, unlocking features like SharedArrayBuffer and high-precision timers. However, COEP applies only to documents (HTML), workers, and their subresource-loading contexts, not to non-document API responses such as JSON.

For HTML documents (pages):
  • Combined with COOP: same-origin, it enables cross-origin isolation, allowing access to powerful APIs (e.g., SharedArrayBuffer, precise performance.now()).
  • COEP: credentialless allows loading cross-origin no-cors subresources (e.g., images via <img>, scripts via <script>) without CORP headers by omitting credentials (cookies, client certs). Credentialed cross-origin loads require CORS (e.g., crossorigin="use-credentials" or fetch with mode: 'cors').
  • Cross-origin iframes still need CORP or the credentialless attribute for embedding.
  • Protects against side-channel attacks (e.g., Spectre) by restricting embedder capabilities.

For API endpoints (JSON/XML/etc.):
  • No direct behavioral change, since COEP/COOP do not apply to non-document responses; JSON APIs are unaffected.
  • If an API response is navigated to (rare), or is used in a worker that inherits COEP, subresource fetches from that context follow credentialless rules.
  • Safe to apply universally, as the headers only affect documents and workers served by the origin.

Security benefits (both headers):
  • COOP: same-origin isolates the browsing context group, severing window.opener links for cross-origin popups/windows and preventing XS-Leaks and tabnabbing. Same-origin windows share a context only if their COOP matches.
  • Together they ensure process isolation, blocking malicious cross-origin interactions.

Potential breakage:
  • Breaks cross-origin popup communication (e.g., OAuth flows relying on window.opener; use BroadcastChannel or a SharedWorker instead).
  • COEP blocks non-compliant cross-origin subresources unless credentialless applies or they opt in via CORS/CORP.
  • Recursive: child iframes/documents must also comply with COEP.

Route-scoping recommendation: yes, route-scoping is recommended over applying the headers to all responses. Use COEP/COOP only on HTML pages that need isolation (e.g., for SharedArrayBuffer), and exclude API endpoints (no effect there) and legacy pages or popups that need cross-origin opener access. Test with report-only mode (Cross-Origin-Embedder-Policy-Report-Only, Cross-Origin-Opener-Policy-Report-Only) first. Universal application is safe but suboptimal; scoping avoids unnecessary enforcement on non-documents and reduces breakage risk.

🏁 Script executed:

find . -name "SharedArrayBufferHeaderFilter.java" -type f

Repository: grimmory-tools/grimmory

🏁 Script executed:

cat -n ./booklore-api/src/main/java/org/booklore/config/security/filter/SharedArrayBufferHeaderFilter.java


Scope cross-origin isolation headers to viewer/PDF routes instead of all responses.

The filter currently applies COOP and COEP headers unconditionally to every HTTP response. While these headers are technically safe on API/WebSocket endpoints (they only affect documents and workers, not JSON responses), applying them globally is not a best practice. Since the headers are required only for the EmbedPDF WASM feature, restrict this filter to routes that actually need SharedArrayBuffer. Additionally, COOP: same-origin can break cross-origin opener communication in OAuth flows and other popup-based integrations.

🔧 Example scoping approach
 public class SharedArrayBufferHeaderFilter extends OncePerRequestFilter {
+    @Override
+    protected boolean shouldNotFilter(HttpServletRequest request) {
+        String path = request.getRequestURI();
+        return path.startsWith("/api/")
+                || path.startsWith("/ws/")
+                || path.startsWith("/actuator/");
+    }
+
     @Override
     protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain)
             throws ServletException, IOException {
         response.setHeader("Cross-Origin-Opener-Policy", "same-origin");
         response.setHeader("Cross-Origin-Embedder-Policy", "credentialless");
         filterChain.doFilter(request, response);
     }
 }
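The scoped behavior can also be checked end to end. Below is a hedged, JDK-only sketch (no Spring; the `/viewer` route and the `com.sun.net.httpserver` server are illustrative stand-ins for the real filter chain) showing a document route that carries the isolation headers while other routes would simply omit them:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CoopCoepDemo {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        // Only the document route gets the isolation headers, mirroring a
        // route-scoped filter; API routes would simply not set them.
        server.createContext("/viewer", exchange -> {
            exchange.getResponseHeaders().set("Cross-Origin-Opener-Policy", "same-origin");
            exchange.getResponseHeaders().set("Cross-Origin-Embedder-Policy", "credentialless");
            byte[] body = "<html></html>".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
        try {
            int port = server.getAddress().getPort();
            HttpResponse<String> resp = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(URI.create("http://localhost:" + port + "/viewer")).build(),
                    HttpResponse.BodyHandlers.ofString());
            // A cross-origin-isolated document requires both headers to be present.
            System.out.println(resp.headers().firstValue("Cross-Origin-Embedder-Policy").orElse("missing"));
        } finally {
            server.stop(0);
        }
    }
}
```

Running it prints the COEP value the browser would see on the viewer route.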
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`booklore-api/src/main/java/org/booklore/config/security/filter/SharedArrayBufferHeaderFilter.java`
around lines 25 - 27, the SharedArrayBufferHeaderFilter is setting COOP/COEP
unconditionally; change its doFilter logic in SharedArrayBufferHeaderFilter to
only set response.setHeader("Cross-Origin-Opener-Policy", "same-origin") and
response.setHeader("Cross-Origin-Embedder-Policy", "credentialless") when the
incoming HttpServletRequest matches the viewer/PDF routes used by the EmbedPDF
WASM feature (e.g., path patterns like the viewer or /pdf endpoints or a
specific query/flag used for embedded PDFs). Also ensure the filter skips
setting COOP for OAuth/popup flows or other known cross-origin-opener routes
(detect by path/pattern or presence of OAuth parameters) so opener communication
isn’t broken; otherwise call filterChain.doFilter(request, response) without
adding headers.

}
}
@@ -153,6 +153,21 @@ public void getBookContent(
bookService.streamBookContent(bookId, bookType, request, response);
}

@Operation(summary = "Replace book content", description = "Overwrite the primary PDF file for a book with the uploaded content. Used by the document viewer to persist annotation changes.")
@ApiResponses({
@ApiResponse(responseCode = "204", description = "Book content replaced successfully"),
@ApiResponse(responseCode = "404", description = "Book not found")
})
@PutMapping("/{bookId}/content")
@CheckBookAccess(bookIdParam = "bookId")
public ResponseEntity<Void> replaceBookContent(
@Parameter(description = "ID of the book") @PathVariable long bookId,
@Parameter(description = "Optional book type for alternative format") @RequestParam(required = false) String bookType,
HttpServletRequest request) throws java.io.IOException {
bookService.replaceBookContent(bookId, bookType, request.getInputStream());
return ResponseEntity.noContent().build();
}
Comment on lines +156 to +169

Copilot AI Mar 31, 2026

The new PUT /api/v1/books/{bookId}/content endpoint is guarded only by @CheckBookAccess, which appears to be a read-level check elsewhere in this controller. As written, any user who can access a book could overwrite the book file. Consider adding an explicit write permission check (e.g., admin / upload / manage-library) or a dedicated permission for modifying book content.
Comment on lines +156 to +169

Contributor

@coderabbitai coderabbitai bot Mar 31, 2026

⚠️ Potential issue | 🔴 Critical

Critical: Missing write permission check allows unauthorized file modification.

The @CheckBookAccess annotation only verifies library membership (read access), not write permissions. Based on BookAccessAspect (lines 31-64), a read-only user who has access to the library can overwrite book files through this endpoint.

Other mutating endpoints in this controller use explicit permission checks:

  • deleteBooks: @PreAuthorize("@securityUtil.canDeleteBook() or @securityUtil.isAdmin()")
  • attachBookFiles: @PreAuthorize("@securityUtil.canManageLibrary() or @securityUtil.isAdmin()")
🔒 Proposed fix to add write permission check
 @Operation(summary = "Replace book content", description = "Overwrite the primary PDF file for a book with the uploaded content. Used by the document viewer to persist annotation changes.")
 @ApiResponses({
         @ApiResponse(responseCode = "204", description = "Book content replaced successfully"),
-        @ApiResponse(responseCode = "404", description = "Book not found")
+        @ApiResponse(responseCode = "404", description = "Book not found"),
+        @ApiResponse(responseCode = "403", description = "Forbidden")
 })
 @PutMapping("/{bookId}/content")
+@PreAuthorize("@securityUtil.canManageLibrary() or @securityUtil.isAdmin()")
 @CheckBookAccess(bookIdParam = "bookId")
 public ResponseEntity<Void> replaceBookContent(
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `booklore-api/src/main/java/org/booklore/controller/BookController.java`
around lines 156 - 169, the replaceBookContent endpoint currently uses
`@CheckBookAccess` (read-only) and thus allows users without write permissions to
overwrite files; update the controller method replaceBookContent to enforce
write/manage permissions by adding the same PreAuthorize check used by other
mutating endpoints (e.g. `@PreAuthorize("@securityUtil.canManageLibrary() or @securityUtil.isAdmin()")` or an equivalent permission check), ensuring the
authorization is evaluated before calling bookService.replaceBookContent;
reference the CheckBookAccess/BookAccessAspect to confirm existing behavior and
mirror the permission used on attachBookFiles/deleteBooks.

Member Author

@balazs-szucs balazs-szucs Mar 31, 2026

Addressed in a later PR.

Contributor

@balazs-szucs, understood — thanks for the heads-up! Would you like me to open a follow-up GitHub issue to track adding the write permission check (@PreAuthorize("@securityUtil.canManageLibrary() or @securityUtil.isAdmin()")) to the replaceBookContent endpoint, so it doesn't get lost between PRs?


@Operation(summary = "Download book", description = "Download the book file. Requires download permission or admin.")
@ApiResponses({
@ApiResponse(responseCode = "200", description = "Book downloaded successfully"),
@@ -1,5 +1,6 @@
package org.booklore.controller;

import org.booklore.model.dto.response.CbxPageDimension;
import org.booklore.model.dto.response.CbxPageInfo;
import org.booklore.service.reader.CbxReaderService;
import io.swagger.v3.oas.annotations.Operation;
@@ -36,4 +37,13 @@ public List<CbxPageInfo> getPageInfo(
@Parameter(description = "Optional book type for alternative format (e.g., PDF, CBX)") @RequestParam(required = false) String bookType) {
return cbxReaderService.getPageInfo(bookId, bookType);
}

@Operation(summary = "Get page dimensions for a CBX book", description = "Retrieve width, height, and wide flag for each page in a CBX book.")
@ApiResponse(responseCode = "200", description = "Page dimensions returned successfully")
@GetMapping("/{bookId}/page-dimensions")
public List<CbxPageDimension> getPageDimensions(
@Parameter(description = "ID of the book") @PathVariable Long bookId,
@Parameter(description = "Optional book type for alternative format (e.g., PDF, CBX)") @RequestParam(required = false) String bookType) {
return cbxReaderService.getPageDimensions(bookId, bookType);
}
Comment on lines +41 to +48

Copilot AI Mar 31, 2026

The new CBX page-dimensions endpoint does not enforce per-book access control (no @CheckBookAccess), matching the existing CBX endpoints but expanding the exposed surface. If book access should be restricted (as it is for /api/v1/books/{bookId}/content), add the appropriate access check here (and ideally align the other CBX endpoints as well).
}
@@ -13,4 +13,5 @@ public class PdfViewerPreferences {
private Long bookId;
private String zoom;
private String spread;
private Boolean isDarkTheme;
}
@@ -0,0 +1,17 @@
package org.booklore.model.dto.response;

import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class CbxPageDimension {
private int pageNumber;
private int width;
private int height;
private boolean wide;
}
@@ -29,4 +29,7 @@ public class PdfViewerPreferencesEntity {

@Column(name = "spread")
private String spread;

@Column(name = "is_dark_theme")
private Boolean isDarkTheme;
}
@@ -2,5 +2,6 @@

public enum CbxPageScrollMode {
PAGINATED,
-    INFINITE
+    INFINITE,
+    LONG_STRIP
}
@@ -43,6 +43,7 @@
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.*;
import java.util.stream.Collectors;

@@ -205,6 +206,7 @@ public BookViewerSettings getBookViewerSetting(long bookId, long bookFileId) {
.bookId(bookId)
.zoom(pdfPref.getZoom())
.spread(pdfPref.getSpread())
.isDarkTheme(pdfPref.getIsDarkTheme())
.build()));
newPdfViewerPreferencesRepository.findByBookIdAndUserId(bookId, user.getId())
.ifPresent(pdfPref -> settingsBuilder.newPdfSettings(NewPdfViewerPreferences.builder()
@@ -382,6 +384,25 @@ public void streamBookContent(long bookId, String bookType, HttpServletRequest r
fileStreamingService.streamWithRangeSupport(path, contentType, request, response);
}

public void replaceBookContent(long bookId, String bookType, java.io.InputStream content) throws IOException {
BookEntity bookEntity = bookRepository.findByIdWithBookFiles(bookId)
.orElseThrow(() -> ApiError.BOOK_NOT_FOUND.createException(bookId));

Path filePath;
if (bookType != null) {
BookFileType requestedType = BookFileType.valueOf(bookType.toUpperCase());
BookFileEntity bookFile = bookEntity.getBookFiles().stream()
.filter(bf -> bf.getBookType() == requestedType)
.findFirst()
.orElseThrow(() -> ApiError.FILE_NOT_FOUND.createException("No file of type " + bookType + " found for book"));
filePath = bookFile.getFullFilePath();
} else {
filePath = FileUtils.getBookFullPath(bookEntity);
}

Files.copy(content, filePath, StandardCopyOption.REPLACE_EXISTING);
}
Comment on lines +387 to +404

Copilot AI Mar 31, 2026

replaceBookContent overwrites the book file in-place via Files.copy(..., REPLACE_EXISTING). If the upload is interrupted (client disconnect, disk full, etc.), this can corrupt the book for all readers. Consider writing to a temporary file in the same directory and then doing an atomic move/replace to the final path, with appropriate fsync/error handling.
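The temp-file-then-atomic-move pattern suggested here can be sketched with plain java.nio (a sketch, not the project's implementation; the AtomicReplace class and its method name are hypothetical, and the fallback handles filesystems without atomic rename):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicReplace {
    // Write the new content to a temp file in the same directory, then
    // atomically rename it over the target. If the upload fails mid-stream,
    // the original file is never touched.
    static void replaceAtomically(Path target, InputStream content) throws IOException {
        Path tmp = Files.createTempFile(target.getParent(), target.getFileName().toString(), ".tmp");
        try {
            Files.copy(content, tmp, StandardCopyOption.REPLACE_EXISTING);
            try {
                Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE);
            } catch (AtomicMoveNotSupportedException e) {
                // Non-atomic fallback for filesystems without rename-over support.
                Files.move(tmp, target, StandardCopyOption.REPLACE_EXISTING);
            }
        } finally {
            Files.deleteIfExists(tmp); // no-op after a successful move
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("atomic-demo");
        Path book = dir.resolve("book.pdf");
        Files.writeString(book, "old");
        replaceAtomically(book, new ByteArrayInputStream("new".getBytes()));
        System.out.println(Files.readString(book)); // prints "new"
    }
}
```

Because the rename happens only after the temp file is fully written, concurrent readers see either the complete old file or the complete new one, never a partial write.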
Comment on lines +387 to +404

Contributor

@coderabbitai coderabbitai bot Mar 31, 2026

⚠️ Potential issue | 🟠 Major

Missing content validation and audit logging for file replacement.

The replaceBookContent method directly overwrites book files without:

  1. Content validation: No check that the uploaded content is a valid PDF or matches expected format
  2. Size limits: Large uploads could exhaust disk space
  3. Backup mechanism: If the replacement fails mid-write or content is corrupted, the original file is lost
  4. Audit logging: Unlike deleteBooks (line 462: auditService.log(AuditAction.BOOK_DELETED, ...)), file replacements are not logged
🛡️ Suggested improvements
 public void replaceBookContent(long bookId, String bookType, java.io.InputStream content) throws IOException {
     BookEntity bookEntity = bookRepository.findByIdWithBookFiles(bookId)
             .orElseThrow(() -> ApiError.BOOK_NOT_FOUND.createException(bookId));

     Path filePath;
     if (bookType != null) {
         BookFileType requestedType = BookFileType.valueOf(bookType.toUpperCase());
         BookFileEntity bookFile = bookEntity.getBookFiles().stream()
                 .filter(bf -> bf.getBookType() == requestedType)
                 .findFirst()
                 .orElseThrow(() -> ApiError.FILE_NOT_FOUND.createException("No file of type " + bookType + " found for book"));
         filePath = bookFile.getFullFilePath();
     } else {
         filePath = FileUtils.getBookFullPath(bookEntity);
     }

+    // Create backup before replacement
+    Path backupPath = filePath.resolveSibling(filePath.getFileName() + ".bak");
+    if (Files.exists(filePath)) {
+        Files.copy(filePath, backupPath, StandardCopyOption.REPLACE_EXISTING);
+    }
+    
+    try {
         Files.copy(content, filePath, StandardCopyOption.REPLACE_EXISTING);
+        Files.deleteIfExists(backupPath);
+        auditService.log(AuditAction.BOOK_UPDATED, "Replaced content for book " + bookId);
+    } catch (IOException e) {
+        // Restore backup on failure
+        if (Files.exists(backupPath)) {
+            Files.move(backupPath, filePath, StandardCopyOption.REPLACE_EXISTING);
+        }
+        throw e;
+    }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `booklore-api/src/main/java/org/booklore/service/book/BookService.java` around
lines 387 - 404, the replaceBookContent method should validate and safely
replace files: first, enforce a size limit on the incoming InputStream and
validate the content is a PDF (e.g., check MIME/signature) before writing; write
the incoming bytes to a temporary file in the same directory, verify the temp
file (PDF validation and optional checksum), back up the current target file
(e.g., rename to .bak) then atomically move the temp file to the target path
(use Files.move with ATOMIC_MOVE or replace and delete backup on success) to
avoid mid-write corruption and preserve the original on failure; finally, record
the operation with auditService.log(AuditAction.BOOK_REPLACED, ...) including
bookId, bookType and size/checksum metadata. Ensure these changes are applied in
replaceBookContent and use FileUtils.getBookFullPath / BookFileType to locate
targets.

Member Author

@balazs-szucs balazs-szucs Mar 31, 2026

Technically also addressed in a later PR.

But it's also largely unnecessary, as it's implied via the UI.

Contributor

@balazs-szucs Understood — happy to defer to the later PR for the safety improvements, and your point about the UI context making explicit audit logging less critical here is fair. No further action needed on this PR.


🧠 Learnings used
Learnt from: pedronave
Repo: grimmory-tools/grimmory PR: 173
File: booklore-api/src/main/java/org/booklore/service/hardcover/HardcoverSyncService.java:188-199
Timestamp: 2026-03-24T18:46:47.249Z
Learning: In grimmory-tools/grimmory (HardcoverSyncService.java), when a `hardcoverBookId` is stored in book metadata but is malformed (non-numeric), the preferred behavior is to return null and skip the sync rather than falling back to ISBN-based lookup. The reasoning is that an explicitly set bookId represents deliberate user intent, and silently resolving via ISBN could sync progress to a different book than intended.


@Transactional
public ResponseEntity<BookDeletionResponse> deleteBooks(Set<Long> ids) {
BookLoreUser user = authenticationService.getAuthenticatedUser();
@@ -120,6 +120,7 @@ private void updatePdfViewerSettings(long bookId, Long userId, BookViewerSetting
PdfViewerPreferences pdfSettings = settings.getPdfSettings();
prefs.setZoom(pdfSettings.getZoom());
prefs.setSpread(pdfSettings.getSpread());
prefs.setIsDarkTheme(pdfSettings.getIsDarkTheme());
pdfViewerPreferencesRepository.save(prefs);
}

@@ -9,6 +9,7 @@
import org.apache.commons.compress.archivers.zip.ZipArchiveEntry;
import org.apache.pdfbox.io.IOUtils;
import org.booklore.exception.ApiError;
import org.booklore.model.dto.response.CbxPageDimension;
import org.booklore.model.dto.response.CbxPageInfo;
import org.booklore.model.entity.BookEntity;
import org.booklore.model.entity.BookFileEntity;
@@ -19,6 +20,11 @@
import org.booklore.util.UnrarHelper;
import org.springframework.stereotype.Service;

import javax.imageio.ImageIO;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
@@ -113,6 +119,63 @@ public List<CbxPageInfo> getPageInfo(Long bookId, String bookType) {
}
}

public List<CbxPageDimension> getPageDimensions(Long bookId) {
return getPageDimensions(bookId, null);
}

public List<CbxPageDimension> getPageDimensions(Long bookId, String bookType) {
Path cbxPath = getBookPath(bookId, bookType);
try {
CachedArchiveMetadata metadata = getCachedMetadata(cbxPath);
List<String> imageEntries = metadata.imageEntries;
List<CbxPageDimension> dimensions = new ArrayList<>();
for (int i = 0; i < imageEntries.size(); i++) {
String entryName = imageEntries.get(i);
CbxPageDimension dim = readEntryDimension(cbxPath, entryName, metadata, i + 1);
dimensions.add(dim);
}
return dimensions;
} catch (IOException e) {
log.error("Failed to read page dimensions for book {}", bookId, e);
throw ApiError.FILE_READ_ERROR.createException("Failed to read page dimensions: " + e.getMessage());
}
}
Comment on lines +126 to +142

Contributor

⚠️ Potential issue | 🟠 Major

Performance concern: Sequential processing of all pages may cause slow response times.

This endpoint reads and decodes every image in the archive sequentially to extract dimensions. For a 500-page comic book:

  • Each page is fully extracted to memory via ByteArrayOutputStream
  • ImageIO decodes each image to read width/height
  • No parallelization or caching of dimension results

This could result in multi-second response times for large archives, potentially causing frontend timeouts or poor UX.

Consider these optimizations:

  1. Parallel processing using virtual threads or parallel streams:

List<CbxPageDimension> dimensions = IntStream.range(0, imageEntries.size())
    .parallel()
    .mapToObj(i -> readEntryDimension(cbxPath, imageEntries.get(i), metadata, i + 1))
    .toList();

  2. Cache dimension results similar to how CachedArchiveMetadata caches entry lists, or store dimensions in the database after the first calculation.

  3. Lazy loading on the frontend: fetch dimensions only when needed for layout decisions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `booklore-api/src/main/java/org/booklore/service/reader/CbxReaderService.java`
around lines 126 - 142, the getPageDimensions method currently reads and decodes
every image sequentially (via readEntryDimension) which is slow for large
archives; update getPageDimensions to process imageEntries in parallel (e.g.,
use IntStream.range(...).parallel() or a virtual-thread Executor) to call
readEntryDimension concurrently, and add memoization of results into
CachedArchiveMetadata (or persist dimensions) so subsequent calls return cached
CbxPageDimension lists without re-decoding; ensure thread-safety when populating
the cache and that readEntryDimension and any shared resources (metadata access,
archive reads) are safe for concurrent access.

Member Author

CBX is a streaming reader that processes images one by one; the perf impact there is negligible.
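Should dimension extraction ever become a bottleneck for very large archives, the parallel approach can be sketched with a plain fixed thread pool; the Dim record and readOne function below are hypothetical stand-ins for CbxPageDimension and readEntryDimension:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.IntFunction;
import java.util.stream.IntStream;

public class ParallelDims {
    // Hypothetical stand-in for CbxPageDimension.
    record Dim(int page, int width, int height) {}

    // Submit one task per page; collecting futures in submission order
    // preserves page order in the result regardless of completion order.
    static List<Dim> computeAll(int pageCount, IntFunction<Dim> readOne) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<Dim>> futures = IntStream.range(0, pageCount)
                    .mapToObj(i -> pool.submit(() -> readOne.apply(i)))
                    .toList();
            List<Dim> out = new ArrayList<>();
            for (Future<Dim> f : futures) out.add(f.get());
            return out;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // readOne would decode archive entry i; here it just fabricates a page.
        List<Dim> dims = computeAll(3, i -> new Dim(i + 1, 800, 1200));
        System.out.println(dims.size() + " " + dims.get(0).page()); // prints "3 1"
    }
}
```

Note that readEntryDimension would need to be safe for concurrent archive reads before adopting this; the sequential version avoids that question entirely.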


private CbxPageDimension readEntryDimension(Path cbxPath, String entryName, CachedArchiveMetadata metadata, int pageNumber) {
try {
ByteArrayOutputStream baos = new ByteArrayOutputStream();
streamEntryFromArchive(cbxPath, entryName, baos, metadata);
byte[] imageBytes = baos.toByteArray();
try (ImageInputStream iis = ImageIO.createImageInputStream(new ByteArrayInputStream(imageBytes))) {
Iterator<ImageReader> readers = ImageIO.getImageReaders(iis);
if (readers.hasNext()) {
ImageReader reader = readers.next();
try {
reader.setInput(iis);
int width = reader.getWidth(0);
int height = reader.getHeight(0);
return CbxPageDimension.builder()
.pageNumber(pageNumber)
.width(width)
.height(height)
.wide(width > height)
.build();
} finally {
reader.dispose();
}
}
}
} catch (IOException e) {
log.warn("Failed to read dimensions for page {} (entry: {}): {}", pageNumber, entryName, e.getMessage());
}
return CbxPageDimension.builder()
.pageNumber(pageNumber)
.width(0)
.height(0)
.wide(false)
.build();
}
Comment on lines +144 to +177

Copilot AI Mar 31, 2026

readEntryDimension() fully decompresses each image entry into a byte[] before reading width/height. For large CBZ/CBR files this can be very slow and memory-intensive (potential OOM), especially since it loops all pages. Consider extracting dimensions without buffering the full image (e.g., stream to an ImageInputStream directly, read only headers, or cache computed dimensions per archive on disk/in-memory with limits).
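For reference, ImageReader.getWidth(0)/getHeight(0) already read only header metadata; the expensive part in readEntryDimension is buffering the whole entry into a byte[] first. A JDK-only sketch (a synthetic in-memory PNG stands in for an archive entry) of reading dimensions straight from a stream without decoding pixel data:

```java
import javax.imageio.ImageIO;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Iterator;

public class HeaderDimensions {
    // Wrap the raw stream in an ImageInputStream and let the reader parse
    // only the header; no full decode, no intermediate byte[] of the image.
    static int[] readDimensions(InputStream in) throws IOException {
        try (ImageInputStream iis = ImageIO.createImageInputStream(in)) {
            Iterator<ImageReader> readers = ImageIO.getImageReaders(iis);
            if (!readers.hasNext()) throw new IOException("unsupported image format");
            ImageReader reader = readers.next();
            try {
                reader.setInput(iis, true); // seekForwardOnly: stream-friendly
                return new int[]{reader.getWidth(0), reader.getHeight(0)};
            } finally {
                reader.dispose();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Fabricate a 320x200 PNG in memory as a stand-in archive entry.
        ByteArrayOutputStream png = new ByteArrayOutputStream();
        ImageIO.write(new BufferedImage(320, 200, BufferedImage.TYPE_INT_RGB), "png", png);
        int[] dims = readDimensions(new ByteArrayInputStream(png.toByteArray()));
        System.out.println(dims[0] + "x" + dims[1]); // prints "320x200"
    }
}
```

In the service this would mean passing the archive entry's InputStream directly rather than draining it into a ByteArrayOutputStream first.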

private String extractDisplayName(String entryPath) {
String fileName = baseName(entryPath);
int lastDotIndex = fileName.lastIndexOf('.');
@@ -60,7 +60,7 @@ private BookLoreUser.UserSettings.PerBookSetting buildDefaultPerBookSetting() {

private BookLoreUser.UserSettings.PdfReaderSetting buildDefaultPdfReaderSetting() {
return BookLoreUser.UserSettings.PdfReaderSetting.builder()
-                .pageSpread("odd")
+                .pageSpread("off")
.pageZoom("page-fit")
.build();
}
@@ -0,0 +1 @@
ALTER TABLE pdf_viewer_preference ADD COLUMN is_dark_theme BOOLEAN DEFAULT TRUE;
@@ -0,0 +1,10 @@
-- Change default PDF page spread from 'odd' to 'off' (single page view)
UPDATE user_settings
SET setting_value = REPLACE(setting_value, '"pageSpread":"odd"', '"pageSpread":"off"')
WHERE setting_key = 'PDF_READER_SETTING'
AND setting_value LIKE '%"pageSpread":"odd"%';

-- Update existing per-book PDF viewer preferences
UPDATE pdf_viewer_preference
SET spread = 'off'
WHERE spread = 'odd';
4 changes: 4 additions & 0 deletions frontend/.yarnrc.yml
@@ -0,0 +1,4 @@
nodeLinker: node-modules
nmMode: hardlinks-local
enableGlobalCache: false
cacheFolder: .yarn/cache
18 changes: 17 additions & 1 deletion frontend/angular.json
@@ -47,6 +47,16 @@
"glob": "**/*",
"input": "libs/foliate-js",
"output": "/assets/foliate/"
},
{
"glob": "**/*",
"input": "node_modules/@embedpdf/pdfium/dist/",
"output": "/assets/pdfium/"
},
{
"glob": "**/*",
"input": "node_modules/@embedpdf/snippet/dist/",
"output": "/assets/embedpdf/"
}
],
"styles": [
@@ -98,6 +108,12 @@
},
"serve": {
"builder": "@angular/build:dev-server",
"options": {
"headers": {
"Cross-Origin-Opener-Policy": "same-origin",
"Cross-Origin-Embedder-Policy": "credentialless"
}
},
"configurations": {
"production": {
"buildTarget": "grimmory:build:production"
@@ -135,4 +151,4 @@
],
"analytics": false
}
}
}
2 changes: 2 additions & 0 deletions frontend/package.json
@@ -32,6 +32,8 @@
"@angular/platform-browser-dynamic": "^21.2.6",
"@angular/router": "^21.2.6",
"@angular/service-worker": "^21.2.6",
"@embedpdf/pdfium": "^2.11.1",
"@embedpdf/snippet": "^2.11.1",
"@iharbeck/ngx-virtual-scroller": "^20.0.0",
"@jsverse/transloco": "^8.2.1",
"@primeuix/themes": "^2.0.3",
@@ -73,6 +73,13 @@ export class BookFilterComponent implements OnInit, OnDestroy {

readonly filterLabelKeys = FILTER_LABEL_KEYS;

private readonly userSettingsEffect = effect(() => {
const user = this.userService.currentUser();
if (!user) return;
this._visibleFilters = user.userSettings.visibleFilters ?? [...DEFAULT_VISIBLE_FILTERS];
this.updateVisibleFilterTypes();
});

getFilterLabel(type: FilterType): string {
const key = this.filterLabelKeys[type];
return key ? this.t.translate(key) : type;
@@ -99,7 +106,6 @@
ngOnInit(): void {
this.updateVisibleFilterTypes();
this.updateExpandedPanels();
this.subscribeToUserSettings();
this.subscribeToReset();
}

@@ -164,15 +170,6 @@
return String(value ?? '');
}

private subscribeToUserSettings(): void {
effect(() => {
const user = this.userService.currentUser();
if (!user) return;
this._visibleFilters = user.userSettings.visibleFilters ?? [...DEFAULT_VISIBLE_FILTERS];
this.updateVisibleFilterTypes();
});
}

private updateVisibleFilterTypes(): void {
this.visibleFilterTypes = this._visibleFilters.filter(
vf => this.filterTypes.includes(vf as FilterType)
8 changes: 7 additions & 1 deletion frontend/src/app/features/book/model/book.model.ts
@@ -1,5 +1,5 @@
import {Shelf} from './shelf.model';
import {CbxBackgroundColor, CbxFitMode, CbxPageSpread, CbxPageViewMode, CbxScrollMode, NewPdfReaderSetting} from '../../settings/user-management/user.service';
import {CbxBackgroundColor, CbxFitMode, CbxPageSpread, CbxPageSplitOption, CbxPageViewMode, CbxScrollMode, NewPdfReaderSetting} from '../../settings/user-management/user.service';
import {BookReview} from '../components/book-reviews/book-review-service';
import {ZoomType} from 'ngx-extended-pdf-viewer';

@@ -305,6 +305,7 @@ export interface MetadataUpdateWrapper {
export interface PdfViewerSetting {
zoom: ZoomType;
spread: 'off' | 'even' | 'odd';
isDarkTheme?: boolean;
}

export interface EpubViewerSetting {
@@ -339,6 +340,11 @@
fitMode: CbxFitMode;
scrollMode?: CbxScrollMode;
backgroundColor?: CbxBackgroundColor;
pageSplitOption?: CbxPageSplitOption;
brightness?: number;
emulateBook?: boolean;
clickToPaginate?: boolean;
autoCloseMenu?: boolean;
}

export interface BookSetting {