Releases: percona/percona-server-mongodb-operator
v1.21.1
Release Highlights
This release resolves the MongoDB connection leak issue that occurred during PBM operations. It also addresses issues with the Operator’s retrieval and display of the PBM version.
Changelog
Fixed bugs
- K8SPSMDB-1504 - Fixed a connection leak during PBM operations that could cause the Operator to crash with an out-of-memory error. Connections are now properly closed after a PBM operation completes.
- K8SPSMDB-1506 - Fixed an issue where the Operator could not retrieve the PBM version because recent PBM versions print it to a different output stream. The Operator now combines stderr and stdout to correctly retrieve the PBM version.
Supported software
The Operator was developed and tested with the following software:
- Percona Server for MongoDB 6.0.25-20, 7.0.24-13, and 8.0.12-4
- Percona Backup for MongoDB 2.11.0
- PMM Client: 2.44.1
- PMM3 Client: 3.4.1
- cert-manager: 1.18.2
- LogCollector based on fluent-bit 4.0.1
Other options may also work but have not been tested.
Supported platforms
Percona Operators are designed for compatibility with all CNCF-certified Kubernetes distributions. Our release process includes targeted testing and validation on major cloud provider platforms and OpenShift, as detailed below:
- Google Kubernetes Engine (GKE) 1.31-1.33
- Amazon Elastic Container Service for Kubernetes (EKS) 1.31-1.34
- OpenShift Container Platform 4.16 - 4.19
- Azure Kubernetes Service (AKS) 1.31-1.33
- Minikube 1.37.0 based on Kubernetes 1.34.0
This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on the backward compatibility offered by Kubernetes itself.
v1.21.0
Release Highlights
This release of Percona Operator for MongoDB includes the following new features and improvements:
Percona Server for MongoDB 8.0 is now the default version
For you to enjoy all the features and improvements that come with the latest major version out of the box, the Operator now deploys the cluster with Percona Server for MongoDB 8.0 by default. You can always change to the version you need during installation or update. Check the list of Percona certified images for the database versions available for this release. For previous Operator versions, learn how to query the Version Service and retrieve the available images from it.
PMM3 support
The Operator is natively integrated with PMM 3, enabling you to monitor the health and performance of your Percona Distribution for MongoDB deployment and at the same time enjoy enhanced performance, new features, and improved security that PMM 3 provides.
Note that the Operator supports both PMM2 and PMM3. Which PMM version is used depends on the authentication method you provide in the Operator configuration: PMM2 uses API keys, while PMM3 uses service account tokens. If the Operator configuration contains both authentication methods with non-empty values, PMM3 takes priority.
To use PMM, ensure that the PMM client image is compatible with the PMM Server version. Check Percona certified images for the correct client image.
For how to configure monitoring with PMM, see the documentation.
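Below is a minimal sketch of enabling PMM in the PerconaServerMongoDB Custom Resource. The spec.pmm fields follow the documented subsection; the exact Secret key used for the PMM3 service account token is an assumption in this sketch, so verify it against the monitoring documentation:
spec:
  pmm:
    enabled: true
    image: percona/pmm-client:3.4.1
    serverHost: monitoring-service
The PMM3 service account token (or the PMM2 API key) is stored in the users Secret rather than in the manifest, and its presence determines which PMM version the Operator integrates with.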
Hidden nodes support
In addition to arbiters and non-voting nodes, you can now deploy hidden nodes in your Percona Server for MongoDB cluster. These nodes hold a full copy of the data but remain invisible to client applications. They are good for tasks like backups and reporting, since they access the data without affecting normal traffic.
Hidden nodes are added as voting members and can participate in primary elections. Therefore, the Operator enforces rules to ensure the number of voting members is odd and doesn't exceed seven, which is the maximum allowed number of voting members:
- If the total number of voting members is even, the Operator converts one node to non-voting to maintain an odd number of voters. The node to convert is typically the last Pod in the list.
- If the number of voting members is odd and not more than 7, all nodes participate in elections.
- If the number of voting members exceeds 7, the Operator automatically converts some nodes to non-voting to stay within MongoDB’s limit.
To inspect the current configuration, connect to the cluster with clusterAdmin privileges and run the rs.config().members command.
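A minimal sketch of enabling hidden nodes and inspecting the resulting configuration is shown below. The hidden subsection layout is an assumption modeled on the existing nonvoting options, and the Pod name and credentials in the inspection command are hypothetical:
replsets:
  - name: rs0
    size: 3
    hidden:          # assumed subsection name; check the Custom Resource reference
      enabled: true
      size: 2
After the cluster reconciles, you can verify the member flags from the mongod container:
$ kubectl exec -it my-cluster-name-rs0-0 -c mongod -- \
    mongosh "mongodb://clusterAdmin:<password>@localhost/admin" --eval "rs.config().members"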
Support for Google Cloud Client library in PBM
The Operator comes with the latest PBM version 2.11.0, which includes support for the Google Cloud client library and authentication with service account keys.
To use Google Cloud Storage for backups with service account keys, you need to do the following:
- Create a service account key
- Create a Secrets object with this key
- Configure the storage in the Custom Resource
See the Configure Google Cloud Storage documentation for detailed steps.
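As a hedged illustration of these steps, the commands and fields below use hypothetical names: the Secret name, the data key, and the gcs storage fields are assumptions, not the authoritative schema, so rely on the linked documentation for the exact field names.
$ kubectl create secret generic my-cluster-gcs-key \
    --from-file=service-account-key.json=./service-account-key.json
backup:
  storages:
    gcs-us-central1:
      type: gcs
      gcs:
        bucket: my-backup-bucket
        credentialsSecret: my-cluster-gcs-key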
The configuration of Google Cloud Storage with HMAC keys remains unchanged.
However, PBM has a known issue with using HMAC keys for GCS, reported in PBM-1605: uploads of large files (~512 MB and above) may produce corrupted or incomplete backups when the network is unstable, yet such backups are incorrectly treated as valid and pose a risk of restore failures. Therefore, we recommend migrating to the native GCS connection type with service account (JSON) keys after the upgrade.
Improve operational resilience and observability with persistent cluster-level logging for MongoDB Pods
Debugging distributed systems just got easier. The Percona Operator for MongoDB now supports cluster-level logging, ensuring that logs from your mongod instances are stored persistently, even across Pod restarts.
Cluster-level logging is done with Fluent Bit, running as a sidecar container within each database Pod.
Currently, logs are collected only for the mongod instances. All other logs are ephemeral, meaning they will not persist after a Pod restart. Logs are stored for 7 days and rotated afterwards.
Learn more about cluster-level logging in the documentation.
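Cluster-level logging is switched on in the Custom Resource. The sketch below assumes the subsection is named logcollector, as in other Percona Operators; confirm the exact option names in the documentation:
spec:
  logcollector:
    enabled: true        # assumed option name
    resources:
      requests:
        memory: 100M
        cpu: 200m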
Improved backup retention for streamlined management of scheduled backups in cloud storage
A new backup retention configuration gives you more control over how backups are managed in storage and retained in Kubernetes.
With the deleteFromStorage flag, you can disable automatic deletion from AWS S3 or Azure Blob storage and instead rely on native cloud lifecycle policies. This makes backup cleanup more efficient and better aligned with flexible storage strategies.
The legacy keep option is now deprecated and mapped to the new retention block for compatibility. We encourage you to start using the backup.tasks.retention configuration:
spec:
  backup:
    tasks:
      - name: daily-s3-us-west
        enabled: true
        schedule: "0 0 * * *"
        retention:
          count: 3
          type: count
          deleteFromStorage: true
        storageName: s3-us-west
        compressionType: gzip
        compressionLevel: 6
Improve operational efficiency with the support for concurrent cluster reconciliation
Reconciliation is a Kubernetes mechanism to keep your cluster in sync with its desired state. Previously, the Operator ran only one reconciliation loop at a time. This sequential processing meant that other clusters managed by the same Operator had to wait for the current reconciliation to complete before receiving updates.
With this release, the Operator supports concurrent reconciliation and can process several clusters simultaneously. You can define the maximum number of concurrent reconciles as an environment variable for the Operator Deployment, as shown in the sketch below.
This enhancement significantly improves scalability and responsiveness, especially in multi-cluster environments.
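As an illustration, the Deployment snippet below sets such a variable; the name MAX_CONCURRENT_RECONCILES is an assumption here, so check the Operator documentation for the exact variable name:
spec:
  template:
    spec:
      containers:
        - name: percona-server-mongodb-operator
          env:
            - name: MAX_CONCURRENT_RECONCILES   # assumed variable name
              value: "4"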
Added labels to identify the version of the Operator
The Custom Resource Definition (CRD) is compatible with the last three Operator versions. To identify which Operator version a CRD belongs to, we've added labels to all Custom Resource Definitions. The labels help you identify the current Operator version and decide if you need to update the CRD.
To view the labels, run:
$ kubectl get crd perconaservermongodbs.psmdb.percona.com --show-labels
View backup size
You can now see the size of each backup when viewing the backup list either via the command line or from Everest or other apps integrated with the Operator. This improvement makes it easier to monitor storage usage and manage your backups efficiently.
Delegate PVC resizing to an external autoscaler
You can now configure the Operator to use an external storage autoscaler instead of its own resizing logic. This ability may be useful for organizations needing centralized, advanced, or cross-application scaling policies.
To use an external autoscaler, set the spec.enableExternalVolumeAutoscaling option to true in the Custom Resource manifest.
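For example, a minimal Custom Resource change could look like this (the cluster name is hypothetical):
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: my-cluster-name
spec:
  enableExternalVolumeAutoscaling: true
With this option enabled, the Operator skips its own PVC resizing logic and leaves storage scaling to the external autoscaler.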
Deprecation, rename and removal
- The backup.schedule.keep field is deprecated and will be removed in future releases. We recommend using the backup.schedule.retention option instead, as follows:
  schedule:
    - name: "sat-night-backup"
      schedule: "0 0 * * 6"
      retention:
        count: 3
        type: count
        deleteFromStorage: true
      storageName: s3-us-west
- The S3-compatible implementation of Google Cloud Storage (GCS) using HMAC keys is deprecated in the Operator. We encourage you to switch to the native GCS connection type with service account (JSON) keys after the upgrade.
Changelog
New features
- K8SPSMDB-297: Added cluster-wide logging with the Fluent Bit log collector
- K8SPSMDB-1268 - Added support for PMM v3.
- K8SPSMDB-723 - Added the ability to add hidden members to MongoDB replica sets for specialized purposes.
Improvements
- K8SPSMDB-1072 - Added the ability to configure retention policy for scheduled backups
- K8SPSMDB-1216 - Updated the command that describes the mongod instance role to db.hello(), which is the command currently in use.
- K8SPSMDB-1243 - Added the ability to pass PBM restore configuration options to the Operator.
- K8SPSMDB-1261 - Improved the test suite for physical backups to run on every supported platform individually.
- K8SPSMDB-1262 - Improved the test suite for on-demand backups to run on OpenShift
- K8SPSMDB-1272 - The helm upgrade command now displays warnings to clarify when CRDs ar...
v1.20.1
Release Highlights
This release of Percona Operator for MongoDB fixes failing backups caused by the Operator sending multiple requests to PBM. The issue was fixed by bypassing the cache for the backup controller and enabling direct communication with the API server for sending backup requests.
Changelog
Bugs Fixed
- K8SPSMDB-1395 - Fixed the issue with failing backups caused by the Operator sending multiple backup requests based on stale status data
Supported software
The Operator was developed and tested with the following software:
- Percona Server for MongoDB 8.0.8-3, 7.0.18-11, and 6.0.21-18
- Percona Backup for MongoDB 2.9.1
- PMM Client 2.44.1
- cert-manager 1.17.2
Other options may also work but have not been tested.
Supported platforms
Percona Operators are designed for compatibility with all CNCF-certified Kubernetes distributions. Our release process includes targeted testing and validation on major cloud provider platforms and OpenShift, as detailed below for Operator version 1.20.1:
- Google Kubernetes Engine (GKE) 1.30 - 1.32
- Amazon Elastic Container Service for Kubernetes (EKS) 1.30 - 1.32
- OpenShift Container Platform 4.14 - 4.18
- Azure Kubernetes Service (AKS) 1.30 - 1.32
- Minikube 1.35.0 with Kubernetes 1.32.0
This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on the backward compatibility offered by Kubernetes itself.
v1.20.0
Release Highlights
This release of Percona Operator for MongoDB includes the following new features and improvements:
Point-in-time recovery from any backup storage
The Operator now natively supports multiple backup storages, inheriting this feature from Percona Backup for MongoDB (PBM). This enables you to make a point-in-time recovery from any backup stored on any storage: PBM and the Operator maintain data consistency for you. You also no longer have to wait until the Operator reconfigures a cluster after you select a different storage for a backup or a restore. As a result, the overall performance of your backup flow improves.
Improve RTO with the added support of incremental physical backups (tech preview)
Using incremental physical backups in the Operator, you can now back up only the changes that happened since the previous backup. Since increments are smaller than a full backup, backups complete faster, and you also save on storage and data transfer costs. Using incremental backups together with point-in-time recovery improves your recovery time objective (RTO).
You do need a base backup to start the incremental backup chain, and you must make the whole chain from the same storage. Also, note that the percona.com/delete-backup finalizer and the .spec.backup.tasks.[].keep option apply to the incremental base backup but are ignored for subsequent incremental backups.
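As an illustration, an on-demand incremental base backup could be requested with a manifest like the one below; the type values (incremental-base for the first backup in the chain, incremental for subsequent ones) and the resource names are assumptions based on PBM terminology, so verify them against the backup documentation:
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBBackup
metadata:
  name: backup-incremental-base
spec:
  clusterName: my-cluster-name
  storageName: s3-us-west
  type: incremental-base   # assumed value; later backups in the chain would use incremental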
Improved monitoring for clusters in multi-region or multi-namespace deployments in PMM
Now you can define a custom name for your clusters deployed in different data centers. This name helps Percona Monitoring and Management (PMM) Server to correctly recognize clusters as connected and monitor them as one deployment. Similarly, PMM Server identifies clusters deployed with the same names in different namespaces as separate ones and correctly displays performance metrics for you on dashboards.
To assign a custom name, define this configuration in the Custom Resource manifest for your cluster:
spec:
  pmm:
    customClusterName: mongo-cluster
Changelog
New Features
- K8SPSMDB-1237 - Added support for incremental physical backups
- K8SPSMDB-1329 - Allowed setting the loadBalancerClass service type and using a custom implementation of a load balancer rather than the cloud provider's default one
Improvements
- K8SPSMDB-621 - Set the PBM_MONGODB_URI environment variable in the PBM container to avoid defining it for every shell session and to improve setup automation (Thank you Damiano Albani for reporting this issue)
- K8SPSMDB-1219 - Improved the support of multiple storages for backups by using the multi-storage support functionality in PBM. This enables users to make a point-in-time recovery from any storage
- K8SPSMDB-1223 - Improved the MONGODB_PBM_URI connection string construction by enabling every pbm-agent to connect to its local MongoDB directly
- K8SPSMDB-1226 - Documented how to pass custom configuration for PBM
- K8SPSMDB-1234 - Added the ability to use non-default ports (other than 27017) for MongoDB cluster components: mongod, mongos, and configsvrReplSet Pods
- K8SPSMDB-1236 - Added a check that a username is unique when defining it via the Custom Resource manifest
- K8SPSMDB-1253 - Made SmartUpdate the default update strategy
- K8SPSMDB-1276 - Added logic to the getMongoUri function to compare the content of the existing TLS and CA certificate files with the secret data. Files are only overwritten if the data has changed, preventing redundant writes and ensuring smoother operations during backup checks. (Thank you Anton Averianov for reporting and contributing to this issue)
- K8SPSMDB-1316 - Added the ability to define a custom cluster name for the pmm-admin component
- K8SPSMDB-1325 - Added the directShardOperations role for the mongo user used for monitoring MongoDB 8 and above
- K8SPSMDB-1337 - Added imagePullSecrets for PMM and backup images
Bugs Fixed
- K8SPSMDB-1197 - Fixed the healthcheck log rotation routine to delete log files created one day before
- K8SPSMDB-1231 - Fixed the issue with a single-node cluster temporarily reporting the Error state during initial provisioning by ignoring the No mongod containers in running state error
- K8SPSMDB-1239 - Fixed the issue with cron jobs running simultaneously
- K8SPSMDB-1245 - Improved telemetry for cluster-wide deployments to handle both an empty value and a comma-separated list of namespaces
- K8SPSMDB-1256 - Fixed the issue with PBM failing with the length of read message too large error by verifying the existence of TLS files when constructing the PBM_MONGODB_URI connection string
- K8SPSMDB-1263 - Fixed the issue with the Operator losing connection to mongod Pods during backup and throwing an error; the Operator now retries the connection and proceeds with the backup
- K8SPSMDB-1274 - Disabled the balancer before a logical restore to meet the PBM restore requirements
- K8SPSMDB-1275 - Fixed the issue with the Operator failing when the getLastErrorModes write concern value is set for a replica set by using a data type for the value that matches MongoDB behavior (Thank you user clrxbl for reporting and contributing to this issue)
- K8SPSMDB-1294 - Fixed the API mismatch error with multi-cluster Services (MCS) enabled in the Operator by using the DiscoveryClient.ServerPreferredResources method to align with the kubectl behavior
- K8SPSMDB-1302 - Fixed the issue with the Operator being stuck during a physical restore when the update strategy is set to SmartUpdate
- K8SPSMDB-1306 - Fixed the Operator panicking if a user configures PBM priorities without timeouts
- K8SPSMDB-1347 - Fixed the issue with the Operator throwing errors when auto-generating passwords for multiple users by properly updating the Secret after password generation
Upgrade considerations
The added support for multiple backup storages requires specifying the main storage. If you use a single storage, it will automatically be marked as main in the Custom Resource manifest during the upgrade. If you use multiple storages, you must define one of them as the main storage when you upgrade to version 1.20.0. The following command shows how to set the s3-us-west storage as the main one:
$ kubectl patch psmdb my-cluster-name --type=merge --patch '{
"spec": {
"crVersion": "1.20.0",
"image": "percona/percona-server-mongodb:7.0.18-11",
"backup": {
"image": "percona/percona-backup-mongodb:2.9.1",
"storages": {
"s3-us-west": {
"main": true
}
}
},
"pmm": {
"image": "percona/pmm-client:2.44.1"
}
}
}'
Supported software
The Operator was developed and tested with the following software:
- Percona Server for MongoDB 6.0.21-18, 7.0.18-11, and 8.0.8-3.
- Percona Backup for MongoDB 2.9.1.
Other options may also work but have not been tested.
Supported platforms
Percona Operators are designed for compatibility with all CNCF-certified Kubernetes distributions. Our release process includes targeted testing and validation on major cloud provider platforms and OpenShift, as detailed below for Operator version 1.20.0:
- Google Kubernetes Engine (GKE) 1.30-1.32
- Amazon Elastic Container Service for Kubernetes (EKS) 1.30-1.32
- OpenShift Container Platform 4.14 - 4.18
- Azure Kubernetes Service (AKS) 1.30-1.32
- Minikube 1.35.0 based on Kubernetes 1.32.0
This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on the backward compatibility offered by Kubernetes itself.
v1.19.1
Bugs Fixed
- K8SPSMDB-1274: Revert to disabling MongoDB balancer during restores to follow requirements of Percona Backup for MongoDB 2.8.0.
Known limitations
- PBM-1493: Operator versions 1.19.0 and 1.19.1 have a recommended MongoDB version set to 7.0 because point-in-time recovery may fail on MongoDB 8.0 if sharding is enabled and the Operator version is 1.19.x. Therefore, upgrading to Operator 1.19.0/1.19.1 is not recommended for sharded MongoDB 8.0 clusters.
Supported Platforms
The Operator was developed and tested with Percona Server for MongoDB 6.0.19-16, 7.0.15-9, and 8.0.4-1. Other options may also work but have not been tested. The Operator also uses Percona Backup for MongoDB 2.8.0.
Percona Operators are designed for compatibility with all CNCF-certified
Kubernetes distributions. Our release process includes targeted testing and validation on major cloud provider platforms and OpenShift, as detailed below for Operator version 1.19.1:
- Google Kubernetes Engine (GKE) 1.28-1.30
- Amazon Elastic Container Service for Kubernetes (EKS) 1.29-1.31
- OpenShift Container Platform 4.14.44 - 4.17.11
- Azure Kubernetes Service (AKS) 1.28-1.31
- Minikube 1.34.0 based on Kubernetes 1.31.0
This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on the backward compatibility offered by Kubernetes itself.
v1.19.0
Release Highlights
Using remote file server for backups (tech preview)
The new filesystem backup storage type was added in this release in addition to the already existing s3 and azure types.
It allows users to mount a remote file server to a local directory and have Percona Backup for MongoDB use this directory as a storage for backups. The approach is based on the common Network File System (NFS) protocol and should be useful in network-restricted environments without S3-compatible storage or in cases with a non-standard storage service supporting NFS access.
To use an NFS-capable remote file server as a backup storage, the user needs to mount the remote storage as a sidecar volume in the replsets section of the Custom Resource (and also configsvrReplSet in case of a sharded cluster):
replsets:
  ...
  sidecarVolumes:
    - name: backup-nfs
      nfs:
        server: "nfs-service.storage.svc.cluster.local"
        path: "/psmdb-some-name-rs0"
  ...
Finally, this new storage needs to be configured in the same Custom Resource as a normal storage for backups:
backup:
  ...
  storages:
    backup-nfs:
      filesystem:
        path: /mnt/nfs/
      type: filesystem
  ...
  volumeMounts:
    - mountPath: /mnt/nfs/
      name: backup-nfs
See more in our documentation about this storage type.
Generated passwords for custom MongoDB users
A new improvement for the declarative management of custom MongoDB users brings the possibility to automatically generate user passwords. When you specify a new user in the deploy/cr.yaml configuration file, you can omit specifying a reference to an already existing Secret with the user's password, and the Operator will generate it automatically:
...
users:
  - name: my-user
    db: admin
    roles:
      - name: clusterAdmin
        db: admin
      - name: userAdminAnyDatabase
        db: admin
Find more details on this automatically created Secret in our documentation.
Percona Server for MongoDB 8.0 support
Percona Server for MongoDB 8.0 is now supported by the Operator in addition to the 6.0 and 7.0 versions. The appropriate images are now included in the list of Percona-certified images. See this blogpost for details about the latest MongoDB 8.0 features with the added reliability and performance improvements.
New Features
- K8SPSMDB-1109: Backups can now be stored on a remote file server
- K8SPSMDB-921: IAM Roles for Service Accounts (IRSA) allow automating access to AWS S3 buckets based on Identity Access Management with no need to specify the S3 credentials explicitly
- K8SPSMDB-1133: Manual change of the replica set member priority in Percona Server MongoDB Operator is now possible with the new replsetOverrides.MEMBER-NAME.priority Custom Resource option (see the sketch after this list)
- K8SPSMDB-1164: Add the possibility to create users in the $external database for external authentication purposes
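A minimal sketch of the replsetOverrides usage mentioned above is shown below; the member (Pod) name is hypothetical and must match an actual replica set member in your cluster:
replsets:
  - name: rs0
    size: 3
    replsetOverrides:
      my-cluster-name-rs0-0:   # hypothetical member name
        priority: 3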
Improvements
- K8SPSMDB-1123: Percona Server for MongoDB 8.0 is now supported
- K8SPSMDB-1171: The declarative user management was enhanced with the possibility to automatically generate passwords
- K8SPSMDB-1174: Telemetry was improved to track whether the custom users and roles management, automatic volume expansion, and multi-cluster services features are enabled
- K8SPSMDB-1179: It is now possible to configure externalTrafficPolicy for mongod, configsvr and mongos instances
- K8SPSMDB-1205: Backups in unmanaged clusters are now supported, removing a long-standing limitation of cross-site replication that didn’t allow backups on replica clusters
Bugs Fixed
- K8SPSMDB-1215: Fix a bug where ExternalTrafficPolicy was incorrectly set for LoadBalancer and NodePort services (Thanks to Anton Averianov for contributing)
- K8SPSMDB-675: Fix a bug where disabling sharding failed on a running cluster with enabled backups
- K8SPSMDB-754: Fix a bug where some error messages had “INFO” log level and therefore were not seen in logs with the “ERROR” log level turned on
- K8SPSMDB-1088: Fix a bug which caused the Operator to start two backup operations if the user patches the backup object while its state is empty or Waiting
- K8SPSMDB-1156: Fix a bug that prevented the Operator with enabled backups from recovering from invalid TLS configurations (Thanks to KOS for reporting)
- K8SPSMDB-1172: Fix a bug where a backup user's password with special characters caused Percona Backup for MongoDB to fail
- K8SPSMDB-1212: Stop disabling balancer during restores, because it is not required for Percona Backup for MongoDB 2.x
Deprecation, Rename and Removal
- The psmdbCluster option from the deploy/backup/backup.yaml manifest used for on-demand backups, which had been deprecated since the Operator version 1.12.0 in favor of the clusterName option, has been removed and is no longer supported.
- Percona Server for MongoDB 5.0 has reached its end of life and is no longer supported by the Operator
Supported Platforms
The Operator was developed and tested with Percona Server for MongoDB 6.0.19-16, 7.0.15-9, and 8.0.4-1. Other options may also work but have not been tested. The Operator also uses Percona Backup for MongoDB 2.8.0.
Percona Operators are designed for compatibility with all CNCF-certified
Kubernetes distributions. Our release process includes targeted testing and validation on major cloud provider platforms and OpenShift, as detailed below for Operator version 1.19.0:
- Google Kubernetes Engine (GKE) 1.28-1.30
- Amazon Elastic Container Service for Kubernetes (EKS) 1.29-1.31
- OpenShift Container Platform 4.14.44 - 4.17.11
- Azure Kubernetes Service (AKS) 1.28-1.31
- Minikube 1.34.0 based on Kubernetes 1.31.0
This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on the backward compatibility offered by Kubernetes itself.
v1.18.0
Release Highlights
Enhancements of the declarative user management
The declarative management of custom MongoDB users was improved compared to its initial implementation in the previous release, where the Operator did not track and sync user-related changes in the Custom Resource and the database. Also, starting from now, you can create custom MongoDB roles on various databases, just like users, in the deploy/cr.yaml manifest:
...
roles:
  - name: clusterAdmin
    db: admin
  - name: userAdminAnyDatabase
    db: admin
See the documentation to find more details about this feature.
Support for selective restores
Percona Backup for MongoDB 2.0.0 has introduced new functionality that allows partial restores, which means selectively restoring only the desired subset of data. Now the Operator also supports this feature, allowing you to restore a specific database or a collection from a backup. You can achieve this by using an additional selective section in the PerconaServerMongoDBRestore Custom Resource:
spec:
  selective:
    withUsersAndRoles: true
    namespaces:
      - "db.collection"
You can find more on selective restores and their limitations in our documentation.
Splitting the replica set of the database cluster over multiple Kubernetes clusters
Recent improvements in cross-site replication made it possible to keep the replica set of the database cluster in different data centers. The Operator itself cannot deploy MongoDB replicas to other data centers, but this still can be achieved with a number of Operator deployments, equal to the size of your replica set: one Operator to control the replica set via cross-site replication, and at least two Operators to bootstrap the unmanaged clusters with other MongoDB replica set instances. Splitting the replica set of the database cluster over multiple Kubernetes clusters can be useful to get a fault-tolerant system in which all replicas are in different data centers.
You can find more about configuring such a multi-datacenter MongoDB cluster and the limitations of this solution on the dedicated documentation page.
New Features
- K8SPSMDB-894: It is now possible to restore a subset of data (a specific database or a collection) from a backup, which is useful to reduce time on restore operations when fixing a corrupted data fragment
- K8SPSMDB-1113: The new percona.com/delete-pitr-chunks finalizer allows the deletion of PITR log files from the backup storage when deleting a cluster, so that leftover data does not continue to take up space in the cloud
- K8SPSMDB-1124 and K8SPSMDB-1146: Declarative user management now covers creating and managing user roles, and syncs user-related changes between the Custom Resource and the database
- K8SPSMDB-1140 and K8SPSMDB-1141: Multi-datacenter cluster deployment is now possible
Improvements
- K8SPSMDB-739: A number of Service exposure options in the replsets, sharding.configsvrReplSet, and sharding.mongos subsections were renamed for unification with other Percona Operators
- K8SPSMDB-1002: New Custom Resource options under the replsets.primaryPreferTagSelector subsection allow providing Primary instance selection preferences based on specific zone and region, which may be especially useful within the planned zone switchover process (Thanks to sergelogvinov for contribution)
- K8SPSMDB-1096: Restore logs were improved to contain pbm-agent logs in mongod containers, useful to debug failures in the backup restoration process
- K8SPSMDB-1135: Split-horizon DNS for external (unmanaged) nodes is now configurable via the replsets.externalNodes subsection in the Custom Resource
- K8SPSMDB-1152: Starting from now, the Operator uses multi-architecture images of Percona Server for MongoDB and Percona Backup for MongoDB, making it easier to deploy a cluster on ARM
- K8SPSMDB-1160: The PVC resize feature introduced in the previous release can now be enabled or disabled via the enableVolumeExpansion Custom Resource option (false by default), which protects the cluster from a storage resize triggered by mistake
- K8SPSMDB-1132: A new secrets.keyFile Custom Resource option allows configuring a custom name for the Secret with the MongoDB internal auth key file
Bugs Fixed
- K8SPSMDB-912: Fix a bug where the full backup connection string, including the password, was visible in logs in case of Percona Backup for MongoDB errors
- K8SPSMDB-1047: Fix a bug where the Operator was changing writeConcernMajorityJournalDefault to "true" during the replica set reconfiguring, ignoring the value set by the user
- K8SPSMDB-1168: Fix a bug where successful backups could obtain a failed state in case of the Operator configured with watchAllNamespaces: true and having the same name for MongoDB clusters across multiple namespaces (Thanks to Markus Küffner for contribution)
- K8SPSMDB-1170: Fix a bug that prevented deletion of a cluster with the active percona.com/delete-psmdb-pods-in-order finalizer in case of the cluster error state (e.g. when the mongo replset failed to reconcile)
- K8SPSMDB-1184: Fix a bug where the Operator failed to reconcile when using the container security context with readOnlyRootFilesystem set to true (Thanks to applejag for contribution)
Deprecation, Rename and Removal
- The new enableVolumeExpansion Custom Resource option allows users to disable the automated storage scaling with the Volume Expansion capability. The default value of this option is false, which means that automated scaling is turned off by default.
- A number of Service exposure Custom Resource options in the replsets, sharding.configsvrReplSet, and sharding.mongos subsections were renamed to provide a unified experience with other Percona Operators (see the sketch after this list):
  - expose.serviceAnnotations option renamed to expose.annotations
  - expose.serviceLabels option renamed to expose.labels
  - expose.exposeType option renamed to expose.type
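A hedged sketch using the new option names might look as follows (the annotation and label values are illustrative):
replsets:
  - name: rs0
    expose:
      enabled: true
      type: ClusterIP              # formerly exposeType
      annotations:                 # formerly serviceAnnotations
        some.example/annotation: "value"
      labels:                      # formerly serviceLabels
        rack: rack-22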
Supported Platforms
The Operator was developed and tested with Percona Server for MongoDB 5.0.29-25,
6.0.18-15, and 7.0.14-8. Other options may also work but have not been tested. The
Operator also uses Percona Backup for MongoDB 2.7.0.
The following platforms were tested and are officially supported by the Operator
1.18.0:
- Google Kubernetes Engine (GKE) 1.28-1.30
- Amazon Elastic Container Service for Kubernetes (EKS) 1.28-1.31
- OpenShift Container Platform 4.13.52 - 4.17.3
- Azure Kubernetes Service (AKS) 1.28-1.31
- Minikube 1.34.0 based on Kubernetes 1.31.0
This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on the backward compatibility offered by Kubernetes itself.
v1.17.0
Release Highlights
Declarative user management (technical preview)
Before the Operator version 1.17.0, custom MongoDB users had to be created manually. Now the declarative creation of custom MongoDB users is supported via the users subsection in the Custom Resource. You can specify a new user in the deploy/cr.yaml manifest, setting the user's login name and database, passwordSecretRef (a reference to a key in a Secret resource containing the user's password), as well as MongoDB roles on various databases which should be assigned to this user:
...
users:
  - name: my-user
    db: admin
    passwordSecretRef:
      name: my-user-password
      key: my-user-password-key
    roles:
      - name: clusterAdmin
        db: admin
      - name: userAdminAnyDatabase
        db: admin
See documentation to find more details about this feature with additional explanations and the list of current limitations.
Liveness check improvements
Several logging improvements were made related to the liveness checks, to provide more information for debugging and to make these logs persist on failures for further examination.
Liveness check logs are stored in the /data/db/mongod-data/logs/mongodb-healthcheck.log file, which can be accessed in the corresponding Pod if needed. Starting from now, the liveness check generates more log messages, and the default log level is set to DEBUG.
Each time the health check fails, the current log is saved to a gzip compressed file named mongodb-healthcheck-<timestamp>.log.gz, and the mongodb-healthcheck.log log file is reset.
Logs older than 24 hours are automatically deleted.
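To inspect these logs, you can read the files directly from the mongod container; the Pod name below is hypothetical:
$ kubectl exec my-cluster-name-rs0-0 -c mongod -- \
    ls /data/db/mongod-data/logs/
$ kubectl exec my-cluster-name-rs0-0 -c mongod -- \
    cat /data/db/mongod-data/logs/mongodb-healthcheck.log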
New Features
- K8SPSMDB-253: It is now possible to create and manage users via the Custom Resource
Improvements
- K8SPSMDB-899: Add Labels for all Kubernetes objects created by Operator (backups/restores, Secrets, Volumes, etc.) to make them clearly distinguishable
- K8SPSMDB-919: The Operator now checks if the needed Secrets exist and connects to the storage to check the validity of credentials and the existence of a backup before starting the restore process
- K8SPSMDB-934: Liveness checks are providing more debug information and keeping separate log archives for each failure with the 24 hours retention
- K8SPSMDB-1057: Finalizers were renamed to contain fully qualified domain names (FQDNs), avoiding potential conflicts with other finalizer names in the same Kubernetes environment
- K8SPSMDB-1108: The new Custom Resource option allows setting custom containerSecurityContext for PMM containers
- K8SPSMDB-994: Remove a limitation where it wasn't possible to create a new cluster with splitHorizon enabled; previously, the only way was to enable it later on the running cluster
Bugs Fixed
- K8SPSMDB-925: Fix a bug where the Operator generated “failed to start balancer” and “failed to get mongos connection” log messages when using Mongos with servicePerPod and LoadBalancer services, while the cluster was operating properly
- K8SPSMDB-1105: The memory requests and limits for backups were increased in the deploy/cr.yaml configuration file example to reflect the Percona Backup for MongoDB minimal pbm-agents requirement of 1 Gb RAM needed for stable operation
- K8SPSMDB-1074: Fix a bug where MongoDB Cluster could not failover in case of all Pods downtime and exposeType Custom Resource option set to either NodePort or LoadBalancer
- K8SPSMDB-1089: Fix a bug where it was impossible to delete a cluster in error state with finalizers present
- K8SPSMDB-1092: Fix a bug where Percona Backup for MongoDB log messages during physical restore were not accessible with the kubectl logs command
- K8SPSMDB-1094: Fix a bug where it wasn’t possible to create a new cluster with upgradeOptions.setFCV Custom Resource option set to true
- K8SPSMDB-1110: Fix a bug where nil Custom Resource annotations were causing the Operator panic
Deprecation, Rename and Removal
Finalizers were renamed to contain fully qualified domain names to comply with the Kubernetes standards.
PerconaServerMongoDB Custom Resource:
- delete-psmdb-pods-in-order finalizer renamed to percona.com/delete-psmdb-pods-in-order
- delete-psmdb-pvc finalizer renamed to percona.com/delete-psmdb-pvc
PerconaServerMongoDBBackup Custom Resource:
- delete-backup finalizer renamed to percona.com/delete-backup
Supported Platforms
The Operator was developed and tested with Percona Server for MongoDB 5.0.28-24,
6.0.16-13, and 7.0.12-7. Other options may also work but have not been tested. The
Operator also uses Percona Backup for MongoDB 2.5.0.
The following platforms were tested and are officially supported by the Operator
1.17.0:
- Google Kubernetes Engine (GKE) 1.27-1.30
- Amazon Elastic Container Service for Kubernetes (EKS) 1.28-1.30
- OpenShift Container Platform 4.13.48 - 4.16.9
- Azure Kubernetes Service (AKS) 1.28-1.30
- Minikube 1.33.1
This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on the backward compatibility offered by Kubernetes itself.