Releases: pydio/cells
Bugfix Release
As a result of active testing and community reports, we continue to correct issues and improve v4.
Important Bugfixes
- Detection of new files on structured datasources (local FS) was partially broken
- Fixed a possible concurrent map access during massive uploads
- Re-fixed sync task cancellation for long datasource resyncs
- Fixed public links broken by query parameters appended to the URL
Minor improvements
- Correct EXIF rotation support for avatar pictures
- Make sure binaries are not indexed by search engines
- Improve share screen tabs display
- Scheduler: much better log display and debugging level
- Fixed SCP Put action (support for templates)
Feature
- Integrate Uppy.io to directly capture webcam/mic/screencast
Change log
You can find a summary of the change log here.
Bugfix Release
This weekly release fixes glitches on v4 branch and introduces a couple of new features.
Bugfixes
- Fixed an important issue with non-ASCII characters breaking headers
- Issues with structured datasources and events
- Various ETCD glitches when services are stopped or leases are lost (cluster mode)
New
- Interface library updates: pdfjs (better support for signed PDFs), soundmanager (adds FLAC support)
- New parameter on flat datasources to automatically distribute files inside subfolder(s)
- New filter types for log levels.
Change log
You can find a summary of the change log here.
Bugfix Release
v4.0.1
Cells v4 is a major leap forward for clustered deployments. It features a brand-new microservices engine for unparalleled performance, and new dependency management that makes it much easier to delegate core services to cloud standards.
Pydio Cells 4.0 is fully cloud-ready
To get cloud- and multi-node ready, we’ve retired the third-party microservices framework we were using, which was complex and limiting, in favor of our own. Working from a clean slate, we developed a brand new, truly stateless Cells 4.0 that decouples and externalizes the storage and service layers, creating an image that can easily be distributed and replicated.
Cells 4.0’s new simplified core works in single-node as well as complex multi-node configurations – all the way from Raspberry Pi to Kubernetes clusters.
Codebase Major Changes
The V4 branch was a long-term development project that started with our desire to adopt Go Modules. This redesign led to a big reduction in our dependencies, a huge simplification of the architecture and, as a result, a lot of interesting features!
The code now relies on Go 1.18+ and Go Modules, and the master branch is now main. The main dependencies, including Caddy, Hydra and Minio, are updated to more recent versions. We now make better use of resources within a process by sharing servers (http|gRPC) between services. This greatly reduces the number of network ports used by Cells and has a huge impact on performance.
Resources
- Upgrade Instructions: Must Read!
- Support: Forum
- Administration Guide: Quick Start
- Developer Docs: API, Knowledge Base
- Show your love: Star on GitHub, share on Twitter or follow us on LinkedIn!
Contributions
This release comes with a fully translated Brazilian Portuguese version thanks to Claudio Ferreira and a Japanese version thanks to Satoshi Yuza.
If you want to help us by adding a translation in your language, it is really easy: just navigate to the Pydio Cells project in Crowdin, create an account and get started!
Change log
You can find a summary of the change log here.
Cells v4.0.0-rc7
New release candidate for v4. General Availability is really getting closer 🤞!
- Fixing fork deregister
- Scheduler: RPC error handling
- Fix a few forgotten messages + Brazilian Portuguese translation almost 100% complete thanks to @filhocf
- Build ARM binaries from AMD build agent
- Fix keystore permissions on Windows (upgrade v3 => v4)
- Recover websocket handler errors (closed chan)
- Fix missing uuid for hash copy
- Scheduler: clean tasks/subscriber code, simplify ListJobs dao signature
- Capture context cancellation while listing endpoints in sync, allowing long indexation processes to be stopped
- Add mutexes on ACL internals
- Monitor zaps slices / remove hard limit on roles listing (cherry pick from v3)
- Avoid any login issue after upgrade by using a new local token name.
- Improve README for docker/compose/ha
Learn more about v4 here
Change log
You can find a summary of the change log here.
Cells v4 release candidate
New Release Candidate for v4.
See previous release candidate RC5 for information about v4 major changes.
Learn more about the new clustering approach in the updated Administration Guide. Currently a work-in-progress.
This sprint was focused on cleaning the new APIs, cluster deployment glitches and fixing various bugs.
- API: handle new path parameters vs. body parameters constraints implied by OpenAPI updates
- Checks and warnings on "public" address usage: finally use loopback as default for binding gRPC and other internal services
- Fix various user creation glitches (CLI)
- Refactoring: clean use of deprecated ioutil package, refactor sync lib for better context/error handling
- [Cluster] Recheck or implement TLS support on all external dependencies (mysql, etcd, nats, redis, mongo)
- [Cluster] Fix Queue mechanism in Nats broker
- [Cluster] Update docker-compose to use a dedicated minio node for storage
Change log
You can find a summary of the change log here.
Cells v4 release candidate
Release Candidate 5 for Cells v4
Cells v4 is a major leap forward for clustered deployments. It features a brand-new microservices engine for unparalleled performance, and new dependency management that makes it much easier to delegate core services (such as configs, registry, caches, etc.) to cloud standards.
RC3, RC4, RC5
We have released many packages since RC2 to validate and fix many glitches. This is really getting closer to stability!
For developers, one important change occurred during the last weeks: we now rely on Go 1.18 (tested on Go 1.19) to build the code. All dependency versions were bumped for security. This should not impact users or API consumers, though.
We also re-organized some CLI commands to make the hierarchy more consistent.
Moving to Go modules at last
The V4 branch was a long-term development project that started with our desire to adopt Go modules. Cells was written at a time when modules were not yet part of the language (we used the vendor folder...). This made the migration to modules complex: Go's auto-migration tool was not usable for us (it simply crashed), and we ended up recreating the code base from scratch using modules, re-adding our libraries and dependencies one by one.
As expected, this redesign led to a big reduction in our dependencies, a huge simplification of the architecture and, as a result, a lot of interesting features!
To make a long story short:
- We got rid of the microservices framework we were using (Micro), regaining control of the gRPC layer.
- We updated the main dependencies to their latest versions, including Caddy, Hydra and Minio.
- We make better use of resources within a process by sharing "servers" (http|gRPC) between "services". This greatly reduces the number of network ports used by Cells and has a huge impact on performance, typically for single-node deployments.
- These "servers" and "services" are much better managed and can be more easily started/stopped on different nodes, making cluster deployments easier than ever (see below).
MongoDB as a drop-in replacement for all services based on file storage
Historically we've been using BoltDB and BleveSearch, and we like them. They are pure Go key/value stores and indexers, and they allow Cells to provide full indexing functionality without external dependencies. By default, the search engine, activity stream and logs use these JSON document stores to provide rich, out-of-the-box functionality. But these stores are disk- and memory-intensive, and while they are suitable for small and medium-sized deployments, they create bottlenecks for large deployments.
We therefore looked at alternatives for implementing new 'drivers' for the data abstraction layer of each of these services, and chose MongoDB as a feature-rich, scalable and indexed JSON document store. All services using BoltDB/Bleve as storage now offer an alternative MongoDB implementation, a migration path from one to the other, and the ability to scale horizontally.
File-based storage is still a very good option for small/medium instances, avoiding the need to manage another dependency, but the Cells installation steps will now offer to configure a MongoDB connection to avoid the need to migrate data later on. Note that MongoDB does not replace MySQL; that database is still required for Cells.
Cluster Me Please!
Cells was developed from day one as a set of microservices, but we had to face the fact that deploying Cells in a multi-node, highly available environment was extremely complex and almost nobody could really make it work... V4 was the perfect time to tackle this problem!
We took a step back, learned our lesson from v1 to v3, and looked closely at cloud-native DevOps best practices (yes K8s, we're looking at you). The main objective was: how to create a fully stateless instance of Cells (image, container, you name it...) that can be easily distributed and replicated.
Similar to the move from BoltDB to Mongo, we implemented DAOs to decouple and externalize many layers, making Cells V4 finally cloud-ready. To achieve that without re-inventing the wheel, Cells V4 stands on the shoulders of giants:
- ETCD for configs and services registry
- NATS for message broadcasting (pub/sub)
- REDIS for shared cache
- MONGO for JSON documents
- HASHICORP VAULT for secrets and certificates management
Again, all of these are optional, and Cells can still be deployed as a standalone, dependency-free binary on a Raspberry Pi (even the older 32-bit versions)!
Migration and testing
Single-node deployments
The upgrade process is standard and should be straightforward (and we would really love to hear from you on that).
There are a couple of important notes for this upgrade:
- Hydra JWKs will be regenerated in the DB, invalidating all existing authentication tokens. You will be logged out after the upgrade, and if you are using Personal Access Tokens, you will have to generate new ones.
- Cells Sites bind URLs should not use a domain name but should declare a binding port, optionally including the network interface IP. If you have connection issues after the upgrade, make sure to edit your sites (cells configure sites) to bind to e.g. 0.0.0.0:8080 instead of domain.name:8080
Migrating Bolt/Bleve storages to Mongo
Migrate from existing on-file storage to MongoDB using the following steps:
- Install MongoDB (currently tested against version 5.0.X) and prepare a Mongo database for Cells data
- Stop Cells, as the Bolt/Bleve files must not be opened by the application during the migration process
- Use the cells admin config db add command to configure a connection:
  - Set up the connection using a MongoDB connection string like mongodb://user:pass@ip:port/dbname
  - Accept the prompt to use this connection as the default document DSN
  - Accept the prompt to perform the data migration from the existing Bolt/Bleve files to Mongo. This can take some time.
- Restart Cells. You should see "Successfully pinged and connected to MongoDB" in the logs.
- As search engine data are not migrated, relaunch indexation on the pydio.grpc.search service using cells admin resync --service=pydio.grpc.search --path=/

Now you should be good to go. Try searching for * in the Cells search engine; you should get blazing-fast results.
Cluster deployments
We will provide dedicated blog posts on this topic very soon. You can already have a look at this sample Docker Compose file that shows the required dependencies and how to specify their endpoints to Cells using environment variables.
Change log
You can find a summary of the change log here.
Release Candidate for Cells v4
Release Candidate for Cells v4
Cells v4 is a major leap forward for clustered deployments. It features a brand-new microservices engine for unparalleled performance, and new dependency management that makes it much easier to delegate core services (such as configs, registry, caches, etc.) to cloud standards.
Moving to Go modules at last
The V4 branch was a long-term development project that started with our desire to adopt Go modules. Cells was written at a time when modules were not yet part of the language (we used the vendor folder...). This made the migration to modules complex: Go's auto-migration tool was not usable for us (it simply crashed), and we ended up recreating the code base from scratch using modules, re-adding our libraries and dependencies one by one.
As expected, this redesign led to a big reduction in our dependencies, a huge simplification of the architecture and, as a result, a lot of interesting features!
To make a long story short:
- We got rid of the microservices framework we were using (Micro), regaining control of the gRPC layer.
- We updated the main dependencies to their latest versions, including Caddy, Hydra and Minio.
- We make better use of resources within a process by sharing "servers" (http|gRPC) between "services". This greatly reduces the number of network ports used by Cells. This has a huge impact on performance, typically for single-node deployments.
- These "servers" and "services" are much better managed and can be more easily started/stopped on different nodes, making cluster deployments easier than ever (see below).
MongoDB as a drop-in replacement for all services based on file storage
Historically we've been using BoltDB and BleveSearch, and we like them. They are pure Go key/value stores and indexers, and they allow Cells to provide full indexing functionality without external dependencies. By default, the search engine, activity stream and logs use these JSON document stores to provide rich, out-of-the-box functionality. But these stores are disk- and memory-intensive, and while they are suitable for small and medium-sized deployments, they create bottlenecks for large deployments.
We therefore looked at alternatives for implementing new 'drivers' for the data abstraction layer of each of these services, and chose MongoDB as a feature-rich, scalable and indexed JSON document store. All services using BoltDB/Bleve as storage now offer an alternative MongoDB implementation, a migration path from one to the other, and the ability to scale horizontally.
File-based storage is still a very good option for small/medium instances, avoiding the need to manage another dependency, but the Cells installation steps will now offer to configure a MongoDB connection to avoid the need to migrate data later on. Note that MongoDB does not replace MySQL; that database is still required for Cells.
Cluster Me Please!
Cells was developed from day one as a set of microservices, but we had to face the fact that deploying Cells in a multi-node, highly available environment was extremely complex and almost nobody could really make it work... V4 was the perfect time to tackle this problem!
We took a step back, learned our lesson from v1 to v3, and looked closely at cloud-native DevOps best practices (yes K8s, we're looking at you). The main objective was: how to create a fully stateless instance of Cells (image, container, you name it...) that can be easily distributed and replicated.
Similar to the move from BoltDB to Mongo, we implemented DAOs to decouple and externalize many layers, making Cells V4 finally cloud-ready. To achieve that without re-inventing the wheel, Cells V4 stands on the shoulders of giants:
- ETCD for configs and services registry
- NATS for message broadcasting (pub/sub)
- REDIS for shared cache
- MONGO for JSON documents
- HASHICORP VAULT for secrets and certificates management
Again, all of these are optional, and Cells can still be deployed as a standalone, dependency-free binary on a Raspberry Pi (even the older 32-bit versions)!
Migration and testing
Single-node deployments
The upgrade process is standard and should be straightforward (and we would really love to hear from you on that).
There are a couple of important notes for this upgrade:
- Hydra JWKs will be regenerated in the DB, invalidating all existing authentication tokens. You will be logged out after the upgrade, and if you are using Personal Access Tokens, you will have to generate new ones.
- Cells Sites bind URLs should not use a domain name but should declare a binding port, optionally including the network interface IP. If you have connection issues after the upgrade, make sure to edit your sites (cells configure sites) to bind to e.g. 0.0.0.0:8080 instead of domain.name:8080
Migrating Bolt/Bleve storages to Mongo
Migrate from existing on-file storage to MongoDB using the following steps:
- Install MongoDB (currently tested against version 5.0.X) and prepare a Mongo database for Cells data
- Stop Cells, as the Bolt/Bleve files must not be opened by the application during the migration process
- Use the cells admin config db add command to configure a connection:
  - Set up the connection using a MongoDB connection string like mongodb://user:pass@ip:port/dbname
  - Accept the prompt to use this connection as the default document DSN
  - Accept the prompt to perform the data migration from the existing Bolt/Bleve files to Mongo. This can take some time.
- Restart Cells. You should see "Successfully pinged and connected to MongoDB" in the logs.
- As search engine data are not migrated, relaunch indexation on the pydio.grpc.search service using cells admin resync --service=pydio.grpc.search --path=/

Now you should be good to go. Try searching for * in the Cells search engine; you should get blazing-fast results.
Cluster deployments
We will provide dedicated blog posts on this topic very soon. You can already have a look at this sample Docker Compose file that shows the required dependencies and how to specify their endpoints to Cells using environment variables.
Change log
You can find a summary of the change log here.
Bugfix for v3
This is a bugfix release for v3 branch.
- Fix possible UX error when sharing a folder with users
- Fix edge-cases on move from flat to structured datasource
- Fix incorrect root node handling in specific cases (deny on personal files)
- Fix regression for users with too many roles attached
- Add CELLS_BROKER_TRYPUB environment variable for high-load systems
- Backport SQL meta filtering from the v4 branch for better performance
Change log
You can find a summary of the change log here.
Bugfixes for v3
This release fixes small issues for the v3 branch.
Bugs fixed
- CLI full-S3 configuration panic
- Goroutine leak in GRPC gateway (sync)
- Config wrongly saved when updating SMTP password
- Typo in environment variable (tls_key_file)
- Version files not properly removed from storage
Cells Enterprise
- New 2FA plugin configurations to force users to set up 2FA before they can access the platform
Change log
You can find a summary of the change log here.