Conversation

@catfromplan9 catfromplan9 commented Aug 16, 2025

Add animated image support to the media thumbnailer.

@catfromplan9 catfromplan9 requested a review from a team as a code owner August 16, 2025 20:19

CLAassistant commented Aug 16, 2025

CLA assistant check
All committers have signed the CLA.

@Copilot Copilot AI left a comment

Pull Request Overview

This PR adds animated image support to the media thumbnailer by enabling generation of animated thumbnails that preserve animation, loop count, and frame durations. The changes extend the existing thumbnail functionality to handle animated images like GIFs while maintaining backward compatibility with static images.

Key changes:

  • Refactored image resizing logic to support both single frames and animated sequences
  • Added animated thumbnail encoding that preserves animation properties (see the sketch after this list)
  • Enhanced both scale and crop operations to handle animated images
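
The encoding half of this work (the _encode_animated helper the diff calls into) is not visible in the reviewed hunks. As a rough sketch of the approach described above, and not the PR's actual code, Pillow can write an animated GIF that keeps per-frame durations, loop count, and transparency along these lines; the function name and signature here are assumptions:

from io import BytesIO
from typing import List, Optional

from PIL import Image


def encode_animated_gif(
    frames: List[Image.Image],
    durations: List[int],
    loop: int = 0,
    transparency: Optional[int] = None,
) -> bytes:
    """Encode RGBA frames as an animated GIF, keeping timing and looping."""
    # GIF is palette-based, so quantise each RGBA frame first.
    paletted = [f.convert("P", palette=Image.Palette.ADAPTIVE) for f in frames]

    save_kwargs: dict = {
        "format": "GIF",
        "save_all": True,              # write every frame, not just the first
        "append_images": paletted[1:],
        "duration": durations,         # per-frame durations in milliseconds
        "loop": loop,                  # 0 means loop forever
        "disposal": 2,                 # restore to background between frames
    }
    if transparency is not None:
        save_kwargs["transparency"] = transparency

    output = BytesIO()
    paletted[0].save(output, **save_kwargs)
    return output.getvalue()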

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

File | Description
synapse/media/thumbnailer.py | Core implementation adding animated image processing to the scale/crop methods and the new encoding functionality
changelog.d/18831.feature | Changelog entry documenting the new animated image support feature
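
As an aside on the second file: Synapse changelog entries under changelog.d are single-line towncrier newsfragments, so 18831.feature would contain one sentence. The wording below is an assumption for illustration, not the PR's actual text:

    Add support for animated thumbnails for animated media such as GIFs.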

Comment on lines +185 to +199
        frames = []
        durations = []
        loop = self.image.info.get("loop", 0)
        transparency = self.image.info.get("transparency", None)
        for frame in ImageSequence.Iterator(self.image):
            # Copy the frame to avoid referencing the original image memory
            f = frame.copy()
            if f.mode != "RGBA":
                f = f.convert("RGBA")
            resized = self._resize_image(f, width, height)
            frames.append(resized)
            durations.append(
                frame.info.get("duration") or self.image.info.get("duration") or 100
            )
        return self._encode_animated(frames, durations, loop, transparency)

Copilot AI Aug 27, 2025

The animated processing logic is duplicated between the scale() and crop() methods. Consider extracting it into a shared helper method to reduce code duplication.

Suggested change
-        frames = []
-        durations = []
-        loop = self.image.info.get("loop", 0)
-        transparency = self.image.info.get("transparency", None)
-        for frame in ImageSequence.Iterator(self.image):
-            # Copy the frame to avoid referencing the original image memory
-            f = frame.copy()
-            if f.mode != "RGBA":
-                f = f.convert("RGBA")
-            resized = self._resize_image(f, width, height)
-            frames.append(resized)
-            durations.append(
-                frame.info.get("duration") or self.image.info.get("duration") or 100
-            )
-        return self._encode_animated(frames, durations, loop, transparency)
+        return self._process_animated(width, height, self._resize_image)

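A shared helper like the one the suggestion references does not exist in the PR as submitted; the name _process_animated comes from the suggested change above, and parameterising it on a per-frame transform is an assumption. A sketch of what such a method inside the Thumbnailer class could look like (assuming Image, ImageSequence and Callable are importable in thumbnailer.py), so that scale() and crop() differ only in the callable they pass:

    def _process_animated(
        self,
        width: int,
        height: int,
        transform: Callable[[Image.Image, int, int], Image.Image],
    ) -> bytes:
        """Apply a per-frame transform to every frame, then encode the result.

        `transform` is whatever differs between scale() and crop(), e.g.
        self._resize_image or a crop-then-resize callable.
        """
        frames = []
        durations = []
        loop = self.image.info.get("loop", 0)
        transparency = self.image.info.get("transparency", None)
        for frame in ImageSequence.Iterator(self.image):
            # Copy the frame to avoid referencing the original image memory
            f = frame.copy()
            if f.mode != "RGBA":
                f = f.convert("RGBA")
            frames.append(transform(f, width, height))
            durations.append(
                frame.info.get("duration") or self.image.info.get("duration") or 100
            )
        return self._encode_animated(frames, durations, loop, transparency)

With that in place, scale() would end with the suggested call self._process_animated(width, height, self._resize_image), and crop() would pass its own transform.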

        if transparency is not None:
            save_kwargs["transparency"] = transparency

        paletted_frames[0].save(output_bytes_io, **cast(dict, save_kwargs))

Copilot AI Aug 27, 2025

The cast(dict, save_kwargs) is unnecessary since save_kwargs is already declared as a dict. This cast adds complexity without benefit.

Suggested change
-        paletted_frames[0].save(output_bytes_io, **cast(dict, save_kwargs))
+        paletted_frames[0].save(output_bytes_io, **save_kwargs)

catfromplan9 (Author)

Doesn't pass the linter
