PMM-14242: Implement automatic SSL certificate configuration for OpenShift clusters #3504
Merged
Conversation
- Add Let's Encrypt support via cert-manager for OpenShift routes
- Add AWS ACM certificate management for LoadBalancer services
- Implement DNS automation with Route53
- Auto-switch PMM to LoadBalancer when SSL is enabled
- Add ACM certificate creation with DNS validation

Changes:
- New vars/openshiftSSL.groovy library for Let's Encrypt management
- New vars/awsCertificates.groovy library for ACM and Route53
- Enhanced openshift_cluster_create pipeline with SSL parameters
- Modified deployPMM() to support ACM certificates automatically

Related to: PMM-14242
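As a rough illustration of the ACM flow mentioned above, a library step might request a certificate with DNS validation through the AWS CLI. This is a minimal sketch; the function name and wiring are assumptions, not necessarily what `vars/awsCertificates.groovy` actually does:

```groovy
// Hypothetical sketch of requesting an ACM certificate with DNS validation;
// the real vars/awsCertificates.groovy implementation may differ.
def requestAcmCertificate(String domain) {
    def certArn = sh(
        script: """
            aws acm request-certificate \
                --domain-name '${domain}' \
                --validation-method DNS \
                --query CertificateArn --output text
        """,
        returnStdout: true
    ).trim()
    echo "Requested ACM certificate for ${domain}: ${certArn}"
    return certArn
}
```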
Force-pushed from 53b1487 to 0c5b1eb
- Add newline to vars/openshiftSSL.groovy
- Add newline to vars/awsCertificates.groovy

Per code style guidelines, all files should end with a newline character.
EvgeniyPatlan approved these changes on Aug 29, 2025
ademidoff reviewed on Sep 1, 2025
nogueiraanderson added a commit that referenced this pull request on Sep 1, 2025
- Moved all OpenShift cluster management files from cloud/jenkins to pmm/openshift
- Updated script paths in YAML files to reflect new location
- Aligns with organizational structure changes from PR #3504
nogueiraanderson added a commit that referenced this pull request on Sep 1, 2025
…e job (#3517)

* Add PMM_IP environment variable to openshift-cluster-create job
  - Modified deployPMM function to retrieve PMM service IP address
  - Added logic to get IP from monitoring-service LoadBalancer or ClusterIP
  - Export PMM_IP as environment variable in the pipeline
  - Display PMM IP address in post-creation output
  - Store PMM IP in cluster metadata for reference

* Add PMM IP to job description and fix parameter references
  - Display PMM IP in the Jenkins job description HTML
  - Fix incorrect parameter references (MASTER_NODES -> 3, WORKER_NODES -> WORKER_COUNT, etc.)
  - Fix password reference to use env.PMM_PASSWORD instead of env.PMM_ADMIN_PASSWORD

* Move OpenShift cluster management jobs to pmm/openshift directory
  - Moved all OpenShift cluster management files from cloud/jenkins to pmm/openshift
  - Updated script paths in YAML files to reflect new location
  - Aligns with organizational structure changes from PR #3504

* Update Jenkins job configurations to use master branch and official repository
  - Changed branch from 'feature/openshift-shared-libraries' to 'master'
  - Updated repository URL from personal fork to official Percona-Lab repository

* Fix PMM IP retrieval to use public ingress IP instead of internal ClusterIP

  The monitoring-service in the PMM namespace is a ClusterIP service (internal only), not a LoadBalancer. The actual public access goes through the OpenShift ingress controller (router-default service in the openshift-ingress namespace).

  This fix:
  - Gets the public IP from the ingress controller LoadBalancer
  - Resolves the hostname to an IP address for external access
  - Removes the incorrect fallback to ClusterIP, which returns internal IPs

  Testing confirmed:
  - Old approach returned 172.30.x.x (internal, not accessible)
  - New approach returns the actual public IP (e.g., 3.129.202.84)
  - External tools can now connect to PMM using the correct IP

* Remove trailing whitespace

* Fix fallback logic: only resolve hostname if no direct IP available

  The previous logic was flawed: it would try the hostname first and only fall back to the IP if the hostname wasn't available. But if a provider gives us a direct IP, we should just use it without DNS resolution.

  New logic:
  1. First check if the LoadBalancer provides a direct IP (GCP, some bare metal)
  2. If no IP, check for a hostname (AWS, Azure) and resolve it
  3. If neither, try to get an external IP from a worker node (on-prem setups)

  This ensures we get a working IP address across all cloud providers and deployment scenarios.

* Simplify PMM IP retrieval for AWS-only environment

  Since this is entirely AWS, remove unnecessary fallback logic:
  - Remove check for direct IP (AWS always provides a hostname)
  - Remove worker node external IP fallback (not applicable to AWS)
  - Keep only the AWS ELB hostname resolution logic

  This makes the code cleaner and more maintainable by focusing only on the AWS use case that actually applies.

* Use getent for DNS resolution with nslookup fallback

  - getent is available by default on Oracle Linux 9 Jenkins agents
  - nslookup requires the bind-utils package, which may not be installed
  - Added fallback to nslookup if getent fails for any reason
  - Tested with real AWS ELB hostnames and confirmed working

  This ensures DNS resolution works reliably on Jenkins agents without requiring additional packages.
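The lookup described in that last set of changes could be sketched roughly as follows. Step and variable names are illustrative, not the exact code merged in #3517:

```groovy
// Illustrative sketch of resolving the PMM public IP via the ingress ELB;
// names and exact commands may differ from the merged pipeline code.
def getPmmPublicIP() {
    // The public entry point is the OpenShift ingress controller's AWS ELB,
    // not the internal monitoring-service ClusterIP.
    def elbHostname = sh(
        script: "oc get svc router-default -n openshift-ingress " +
                "-o jsonpath='{.status.loadBalancer.ingress[0].hostname}'",
        returnStdout: true
    ).trim()

    // getent ships with Oracle Linux 9 agents; nslookup (bind-utils) is only a fallback.
    def ip = sh(
        script: """
            getent hosts ${elbHostname} | awk '{print \$1; exit}' || \
            nslookup ${elbHostname} | awk '/^Address/ {ip=\$2} END {print ip}'
        """,
        returnStdout: true
    ).trim()
    return ip
}
```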
Force-pushed from 3d5da75 to 458c52d
hors approved these changes on Sep 2, 2025
Automatic SSL Certificate Configuration for OpenShift Clusters
Jira Ticket: PMM-14242 - Add SSL certificate automation for OpenShift clusters
Problem
OpenShift clusters created through Jenkins require manual SSL certificate configuration for both the console and PMM services: for every new cluster, certificates have to be requested, validated, and wired up to DNS by hand.
Solution
Implemented SSL automation that:

- Adds Let's Encrypt support via cert-manager for OpenShift routes
- Adds AWS ACM certificate management for LoadBalancer services
- Automates DNS record creation with Route53
- Switches PMM to a LoadBalancer service automatically when SSL is enabled

A minimal sketch of how the new SSL stage could tie these pieces together follows; the parameter names (ENABLE_SSL, SSL_PROVIDER, PMM_DOMAIN) and library method signatures are assumptions, not the merged implementation.
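```groovy
// Hypothetical sketch of the SSL configuration stage; parameter names and
// library method signatures are illustrative, not the merged implementation.
stage('Configure SSL') {
    if (params.ENABLE_SSL) {
        if (params.SSL_PROVIDER == 'letsencrypt') {
            // Let's Encrypt certificates for OpenShift routes via cert-manager.
            openshiftSSL.setup(clusterDomain: params.CLUSTER_DOMAIN)
        } else {
            // ACM certificate with DNS validation, plus Route53 records.
            def certArn = awsCertificates.requestCertificate(params.PMM_DOMAIN)
            awsCertificates.createDnsRecord(params.PMM_DOMAIN)
            // PMM is exposed through a LoadBalancer service so the ACM
            // certificate can terminate TLS at the ELB.
            openshiftCluster.deployPMM(serviceType: 'LoadBalancer', certificateArn: certArn)
        }
    }
}
```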
Changes
New Libraries:

- vars/openshiftSSL.groovy - Let's Encrypt certificate management via cert-manager
- vars/awsCertificates.groovy - AWS ACM certificate and Route53 DNS automation

Modified Files:

- cloud/jenkins/openshift-cluster-create.yml - Added SSL configuration parameters section
- cloud/jenkins/openshift_cluster_create.groovy - Added SSL configuration stage
- vars/openshiftCluster.groovy - Enhanced deployPMM() with automatic SSL support

Example
Before:
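A hypothetical illustration of the pre-SSL flow, where the pipeline only deploys PMM and certificate handling is a manual follow-up:

```groovy
// Before (hypothetical illustration): PMM is deployed without certificates
// and TLS has to be configured by hand afterwards.
openshiftCluster.deployPMM()
// Manual follow-up: request a certificate, validate it, create DNS records,
// and re-expose PMM behind the new certificate.
```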
After:
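A hypothetical invocation after this change; the parameter names shown (ENABLE_SSL, SSL_PROVIDER, PMM_DOMAIN) are assumptions, not the exact job parameters:

```groovy
// After (hypothetical illustration): the job configures SSL automatically when
// the (assumed) parameters below are set; deployPMM() picks up the certificate.
build job: 'openshift-cluster-create', parameters: [
    string(name: 'CLUSTER_NAME', value: 'pmm-ssl-test'),
    booleanParam(name: 'ENABLE_SSL', value: true),
    string(name: 'SSL_PROVIDER', value: 'acm'),
    string(name: 'PMM_DOMAIN', value: 'pmm.example.com'),
]
```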
Notes
Validation