Commit 3b7bc24

PMM-14287: Add PMM_IP environment variable to openshift-cluster-create job (#3517)
* Add PMM_IP environment variable to openshift-cluster-create job
  - Modified the deployPMM function to retrieve the PMM service IP address
  - Added logic to get the IP from the monitoring-service LoadBalancer or ClusterIP
  - Export PMM_IP as an environment variable in the pipeline
  - Display the PMM IP address in the post-creation output
  - Store the PMM IP in the cluster metadata for reference

* Add PMM IP to job description and fix parameter references
  - Display the PMM IP in the Jenkins job description HTML
  - Fix incorrect parameter references (MASTER_NODES -> 3, WORKER_NODES -> WORKER_COUNT, etc.)
  - Fix the password reference to use env.PMM_PASSWORD instead of env.PMM_ADMIN_PASSWORD

* Move OpenShift cluster management jobs to pmm/openshift directory
  - Moved all OpenShift cluster management files from cloud/jenkins to pmm/openshift
  - Updated script paths in the YAML files to reflect the new location
  - Aligns with the organizational structure changes from PR #3504

* Update Jenkins job configurations to use master branch and official repository
  - Changed the branch from 'feature/openshift-shared-libraries' to 'master'
  - Updated the repository URL from a personal fork to the official Percona-Lab repository

* Fix PMM IP retrieval to use public ingress IP instead of internal ClusterIP

  The monitoring-service in the PMM namespace is a ClusterIP service (internal only), not a LoadBalancer. Actual public access goes through the OpenShift ingress controller (the router-default service in the openshift-ingress namespace).

  This fix:
  - Gets the public IP from the ingress controller LoadBalancer
  - Resolves the hostname to an IP address for external access
  - Removes the incorrect fallback to ClusterIP, which returns internal IPs

  Testing confirmed:
  - The old approach returned 172.30.x.x (internal, not accessible)
  - The new approach returns the actual public IP (e.g., 3.129.202.84)
  - External tools can now connect to PMM using the correct IP

* Remove trailing whitespace

* Fix fallback logic: only resolve the hostname if no direct IP is available

  The previous logic was flawed: it tried the hostname first and only fell back to the IP if no hostname was available. But if a provider gives us a direct IP, we should use it without DNS resolution.

  New logic:
  1. First check whether the LoadBalancer provides a direct IP (GCP, some bare metal)
  2. If there is no IP, check for a hostname (AWS, Azure) and resolve it
  3. If neither is present, try to get the external IP from a worker node (on-prem setups)

  This ensures we get a working IP address across all cloud providers and deployment scenarios.

* Simplify PMM IP retrieval for AWS-only environment

  Since this is entirely AWS, remove the unnecessary fallback logic:
  - Remove the check for a direct IP (AWS always provides a hostname)
  - Remove the worker-node external IP fallback (not applicable to AWS)
  - Keep only the AWS ELB hostname resolution logic

  This makes the code cleaner and more maintainable by focusing only on the AWS use case that actually applies.

* Use getent for DNS resolution with nslookup fallback
  - getent is available by default on Oracle Linux 9 Jenkins agents
  - nslookup requires the bind-utils package, which may not be installed
  - Added a fallback to nslookup in case getent fails for any reason
  - Tested with real AWS ELB hostnames and confirmed working

  This ensures DNS resolution works reliably on Jenkins agents without requiring additional packages.
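The getent-first, nslookup-fallback resolution described in the last commit can be sketched as a standalone shell function (a minimal sketch; the function name `resolve_ip` is illustrative and not part of the pipeline):

```shell
#!/usr/bin/env bash
# Resolve a hostname (e.g. an AWS ELB DNS name) to an IP address.
# Prefer getent, which ships with glibc and is present on Oracle Linux 9
# Jenkins agents; fall back to nslookup (bind-utils) if getent finds nothing.
resolve_ip() {
    local host="$1" ip
    # getent prints "ADDRESS hostname [aliases...]"; take the first address found
    ip=$(getent hosts "$host" 2>/dev/null | awk '{print $1; exit}')
    if [[ -z "$ip" ]]; then
        # nslookup prints "Address: X" lines; the last one is the answer,
        # earlier ones describe the DNS server
        ip=$(nslookup "$host" 2>/dev/null | awk '/^Address/ {a=$2} END {print a}')
    fi
    printf '%s\n' "$ip"
}

resolve_ip "localhost"
```

Either path degrades gracefully: if the hostname does not resolve at all, the function prints an empty line, which the pipeline can then map to 'N/A'.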
1 parent b0f504d commit 3b7bc24

7 files changed: 42 additions, 14 deletions

cloud/jenkins/openshift-cluster-create.yml renamed to pmm/openshift/openshift-cluster-create.yml

Lines changed: 3 additions & 3 deletions

@@ -159,12 +159,12 @@
     pipeline-scm:
       scm:
         - git:
-            url: https://github.com/nogueiraanderson/jenkins-pipelines.git
+            url: https://github.com/Percona-Lab/jenkins-pipelines.git
             branches:
-              - "feature/openshift-shared-libraries"
+              - "master"
             wipe-workspace: false
       lightweight-checkout: true
-      script-path: cloud/jenkins/openshift_cluster_create.groovy
+      script-path: pmm/openshift/openshift_cluster_create.groovy
 
     concurrent: true
 
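Putting those three hunks together, the pipeline-scm block of the renamed job should end up roughly like this (indentation reconstructed, since the diff above only shows changed lines):

```yaml
pipeline-scm:
  scm:
    - git:
        url: https://github.com/Percona-Lab/jenkins-pipelines.git
        branches:
          - "master"
        wipe-workspace: false
  lightweight-checkout: true
  script-path: pmm/openshift/openshift_cluster_create.groovy

concurrent: true
```

Note that `lightweight-checkout: true` and `script-path` are options of `pipeline-scm` itself, while `wipe-workspace` belongs to the nested `git` SCM definition.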

cloud/jenkins/openshift-cluster-destroy.yml renamed to pmm/openshift/openshift-cluster-destroy.yml

Lines changed: 3 additions & 3 deletions

@@ -41,12 +41,12 @@
     pipeline-scm:
       scm:
         - git:
-            url: https://github.com/nogueiraanderson/jenkins-pipelines.git
+            url: https://github.com/Percona-Lab/jenkins-pipelines.git
             branches:
-              - 'feature/openshift-shared-libraries'
+              - 'master'
             wipe-workspace: false
       lightweight-checkout: true
-      script-path: cloud/jenkins/openshift_cluster_destroy.groovy
+      script-path: pmm/openshift/openshift_cluster_destroy.groovy
 
     concurrent: true
 

cloud/jenkins/openshift-cluster-list.yml renamed to pmm/openshift/openshift-cluster-list.yml

Lines changed: 3 additions & 3 deletions

@@ -20,10 +20,10 @@
     pipeline-scm:
       scm:
         - git:
-            url: https://github.com/nogueiraanderson/jenkins-pipelines.git
+            url: https://github.com/Percona-Lab/jenkins-pipelines.git
             branches:
-              - feature/openshift-shared-libraries
-      script-path: cloud/jenkins/openshift_cluster_list.groovy
+              - master
+      script-path: pmm/openshift/openshift_cluster_list.groovy
       lightweight-checkout: true
 
     concurrent: true

cloud/jenkins/openshift_cluster_create.groovy renamed to pmm/openshift/openshift_cluster_create.groovy

Lines changed: 8 additions & 5 deletions

@@ -388,6 +388,7 @@ Starting cluster creation process...
 
     if (clusterInfo.pmm) {
         env.PMM_URL = clusterInfo.pmm.url
+        env.PMM_IP = clusterInfo.pmm.ip ?: 'N/A'
         env.PMM_PASSWORD = clusterInfo.pmm.password
         env.PMM_PASSWORD_GENERATED = clusterInfo.pmm.passwordGenerated.toString()
     }

@@ -458,6 +459,7 @@ Starting cluster creation process...
     echo "Helm Chart: ${params.PMM_HELM_CHART_VERSION}"
     echo "Namespace: pmm-monitoring"
     echo "Access URL: ${env.PMM_URL}"
+    echo "IP Address: ${env.PMM_IP}"
     echo "Username: admin"
     echo "Password: ${passwordInfo}"
     echo ""

@@ -517,22 +519,23 @@ Starting cluster creation process...
     if (env.PMM_URL) {
         descriptionHtml.append("<b>PMM Monitoring:</b><br/>")
         descriptionHtml.append("• URL: <a href='${env.PMM_URL}'>${env.PMM_URL}</a><br/>")
+        descriptionHtml.append("• IP: <code>${env.PMM_IP}</code><br/>")
         descriptionHtml.append("• User: <code>admin</code><br/>")
-        descriptionHtml.append("• Password: <code>${env.PMM_ADMIN_PASSWORD ?: 'Check deployment logs'}</code><br/>")
+        descriptionHtml.append("• Password: <code>${env.PMM_PASSWORD ?: 'Check deployment logs'}</code><br/>")
         descriptionHtml.append("• Version: ${params.PMM_IMAGE_TAG}<br/>")
         descriptionHtml.append("<br/>")
     }
 
     // Cluster resources
     descriptionHtml.append("<b>Resources:</b><br/>")
-    descriptionHtml.append("• Masters: ${params.MASTER_NODES} × ${params.MASTER_INSTANCE_TYPE}<br/>")
-    descriptionHtml.append("• Workers: ${params.WORKER_NODES} × ${params.WORKER_INSTANCE_TYPE}<br/>")
+    descriptionHtml.append("• Masters: 3 × ${params.MASTER_INSTANCE_TYPE}<br/>")
+    descriptionHtml.append("• Workers: ${params.WORKER_COUNT} × ${params.WORKER_INSTANCE_TYPE}<br/>")
     descriptionHtml.append("<br/>")
 
     // Lifecycle details
     descriptionHtml.append("<b>Lifecycle:</b><br/>")
-    descriptionHtml.append("• Auto-delete: ${params.AUTO_DELETE_HOURS} hours<br/>")
-    descriptionHtml.append("• Team: ${params.TEAM}<br/>")
+    descriptionHtml.append("• Auto-delete: ${params.DELETE_AFTER_HOURS} hours<br/>")
+    descriptionHtml.append("• Team: ${params.TEAM_NAME}<br/>")
     descriptionHtml.append("• S3 Backup: <code>s3://${env.S3_BUCKET}/${env.FINAL_CLUSTER_NAME}/</code><br/>")
 
     currentBuild.description = descriptionHtml.toString()
File renamed without changes.
File renamed without changes.

vars/openshiftCluster.groovy

Lines changed: 25 additions & 0 deletions

@@ -217,6 +217,7 @@ def create(Map config) {
     metadata.pmmDeployed = true
     metadata.pmmImageTag = params.pmmImageTag
     metadata.pmmUrl = pmmInfo.url
+    metadata.pmmIp = pmmInfo.ip
     metadata.pmmNamespace = pmmInfo.namespace
 
     // Update metadata in S3 with PMM information

@@ -706,6 +707,29 @@ def deployPMM(Map params) {
         returnStdout: true
     ).trim()
 
+    // Get the public IP address from the OpenShift ingress controller
+    // The monitoring-service is a ClusterIP (internal only), so we need the ingress IP
+    def pmmIp = sh(
+        script: """
+            export PATH="\$HOME/.local/bin:\$PATH"
+
+            # AWS provides hostname in LoadBalancer status, not direct IP
+            INGRESS_HOSTNAME=\$(oc get service -n openshift-ingress router-default \\
+                -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' 2>/dev/null)
+
+            if [[ -n "\$INGRESS_HOSTNAME" ]]; then
+                # Resolve AWS ELB hostname to IP address
+                # Use getent (available by default on Oracle Linux) with nslookup fallback
+                getent hosts "\$INGRESS_HOSTNAME" 2>/dev/null | awk '{print \$1; exit}' || \\
+                    nslookup "\$INGRESS_HOSTNAME" 2>/dev/null | grep -A 1 "^Name:" | grep "Address" | head -1 | awk '{print \$2}'
+            else
+                # No hostname found - ingress might not be ready yet
+                echo ''
+            fi
+        """,
+        returnStdout: true
+    ).trim()
+
     // Get the actual password from the secret (either set or generated)
     def actualPassword = sh(
         script: """

@@ -717,6 +741,7 @@ def deployPMM(Map params) {
 
     return [
         url: "https://${pmmUrl}",
+        ip: pmmIp ?: 'N/A', // Return 'N/A' if we couldn't determine the IP
         username: 'admin',
         password: actualPassword,
         namespace: params.pmmNamespace,
