Thursday, April 18, 2024

Monitoring WebLogic Server for Oracle Container Engine for Kubernetes


Use open source tools to keep tabs on enterprise applications

 

Everyone should monitor their production system to understand how the system is behaving. Monitoring helps you understand the workloads and ensures you get notifications when something fails, or is about to fail.

In Java EE applications, you can choose to monitor many metrics on your servers that identify workloads and issues with applications. For example, you could monitor the Java heap, active threads, open sockets, CPU utilization, and memory usage.

If you have a Java EE application deployed to Oracle WebLogic Server for Oracle Cloud Infrastructure Container Engine for Kubernetes, this article is for you.

Oracle WebLogic Server for Oracle Cloud Infrastructure Container Engine for Kubernetes can help you quickly create Oracle WebLogic configurations on Oracle Cloud, for example, to allocate network resources, reuse existing virtual cloud networks or subnets, configure the load balancer, integrate with Identity Cloud Manager, or configure Oracle Database.

In this article, I'll show you how to use two open source tools, Grafana and Prometheus, to monitor an Oracle WebLogic domain deployed in Oracle WebLogic Server for Oracle Cloud Infrastructure Container Engine for Kubernetes.
By the way, this procedure uses several Helm charts to walk through the individual steps required to install and configure Prometheus and Grafana. In your own deployment, it's up to you whether to create a single Helm chart to deploy Prometheus and Grafana.
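If you do choose to consolidate, one common approach is an umbrella chart. The sketch below is purely illustrative (the chart name and version ranges are my own assumptions, not part of this procedure): a single Chart.yaml declaring Prometheus and Grafana as dependencies, so one `helm install` deploys both.

```yaml
# Hypothetical umbrella chart: Chart.yaml declaring both monitoring
# stacks as dependencies. Version ranges are illustrative only.
apiVersion: v2
name: wls-monitoring
description: Umbrella chart bundling Prometheus and Grafana
version: 0.1.0
dependencies:
  - name: prometheus
    version: ">=13.0.0"
    repository: https://prometheus-community.github.io/helm-charts
  - name: grafana
    version: ">=6.0.0"
    repository: https://grafana.github.io/helm-charts
```

With this layout, `helm dependency update` pulls both charts and per-chart overrides move under `prometheus:` and `grafana:` keys in the umbrella chart's values.yaml.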

Prerequisites

Before you get started, you should have installed at least one of these Oracle Cloud Marketplace applications. (UCM refers to the Universal Credits model; BYOL stands for bring your own license.)

Deploy WebLogic Monitoring Exporter to your Oracle WebLogic domain

Here are the step-by-step instructions.

1. Open a terminal window and access the administration instance that's created with Oracle WebLogic Server for Oracle Cloud Infrastructure Container Engine for Kubernetes. You can see detailed instructions here.

2. Go to the root Oracle Cloud Infrastructure File Storage Service folder, which is /u01/shared.

cd /u01/shared

3. Download the WebLogic Monitoring Exporter WAR file from GitHub into the wlsdeploy folder.

wget https://github.com/oracle/weblogic-monitoring-exporter/releases/download/v2.0.0/wls-exporter.war -P wlsdeploy/applications

4. Include the sample exporter configuration file.

wget https://raw.githubusercontent.com/oracle/weblogic-monitoring-exporter/master/samples/kubernetes/end2end/dashboard/exporter-config.yaml -O config.yml

zip wlsdeploy/applications/wls-exporter.war -m config.yml

5. Create a WebLogic Server Deploy Tooling archive (weblogic-exporter-archive.zip) in which you'll place the wls-exporter.war file.

zip -r weblogic-exporter-archive.zip wlsdeploy/

6. Create a WebLogic Server Deploy Tooling model to deploy the WebLogic Monitoring Exporter application to your domain.

ADMIN_SERVER_NAME=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.wls_admin_server_name')

DOMAIN_CLUSTER_NAME=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.wls_cluster_name')

cat > deploy-monitoring-exporter.yaml << EOF
appDeployments:
  Application:
    'wls-exporter' :
      SourcePath: 'wlsdeploy/applications/wls-exporter.war'
      Target: '$DOMAIN_CLUSTER_NAME,$ADMIN_SERVER_NAME'
      ModuleType: war
      StagingMode: nostage
EOF

7. Deploy the WebLogic Monitoring Exporter application to your domain using the Pipeline update-domain screen.

8. From the Jenkins dashboard, open the Pipeline update-domain screen and specify the parameters, as follows (and see Figure 1):

◉ For Archive_Source, select Shared File System.

◉ For Archive_File_Location, enter /u01/shared/weblogic-exporter-archive.zip.

◉ For Domain_Model_Source, select Shared File System.

◉ For Model_File_Location, enter /u01/shared/deploy-monitoring-exporter.yaml.

Figure 1. The Pipeline update-domain parameters screen

Then click the build button. To verify that the deployment is working, run the following commands:

INGRESS_NS=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.ingress_namespace')

SERVICE_NAME=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.service_name')

WLS_CLUSTER_URL=$(kubectl get svc "$SERVICE_NAME-external" -n $INGRESS_NS -ojsonpath="{.status.loadBalancer.ingress[0].ip}")

The output should look something like the following:

[opc@wlsoke-admin ~]$ curl -k https://$WLS_CLUSTER_URL/wls-exporter

<!DOCTYPE html>

<html lang="en">

<head>

    <meta charset="UTF-8">

    <title>Weblogic Monitoring Exporter</title>

</head>
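Beyond the landing page, the exporter also serves the actual Prometheus metrics under /wls-exporter/metrics. The helper below is a small sketch of my own (not part of the original steps) that builds that URL from the load balancer address; the example IP is a placeholder.

```shell
# Hypothetical helper: build the WebLogic Monitoring Exporter metrics URL
# from the load balancer address captured in WLS_CLUSTER_URL above.
wls_exporter_metrics_url() {
  printf 'https://%s/wls-exporter/metrics' "$1"
}

# Example usage against your cluster (requires WebLogic credentials):
#   curl -k -u myadminuser:myadminpwd "$(wls_exporter_metrics_url "$WLS_CLUSTER_URL")"
wls_exporter_metrics_url 192.0.2.10   # prints https://192.0.2.10/wls-exporter/metrics
```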

Create PersistentVolume and PersistentVolumeClaim for Grafana, Prometheus Server, and Prometheus Alertmanager

Oracle WebLogic Server for Oracle Cloud Infrastructure Container Engine for Kubernetes creates a shared file system using Oracle Cloud Infrastructure File Storage Service, which is mounted across the different pods running in the Oracle Container Engine for Kubernetes cluster and the administration host. The next step is to create subpaths on that shared file system where Grafana and Prometheus can store their data.

This procedure creates a Helm chart with PersistentVolume (PV) and PersistentVolumeClaim (PVC) resources for Grafana, Prometheus Server, and Prometheus Alertmanager. This step doesn't use the Prometheus and Grafana charts to create the PVCs because those charts don't yet support Oracle Cloud Infrastructure Container Engine for Kubernetes with Oracle Cloud Infrastructure File Storage Service.

1. Open a terminal window and access the administration instance that's created with Oracle WebLogic Server for Oracle Cloud Infrastructure Container Engine for Kubernetes.

2. Create folders for monitoringpv and templates. You'll place the Helm chart here.

mkdir -p monitoringpv/templates

3. Create the Chart.yaml file in the monitoringpv folder.

cat > monitoringpv/Chart.yaml << EOF
apiVersion: v1
appVersion: "1.0"
description: A Helm chart for creating pv and pvc for Grafana, Prometheus and Alertmanager
name: monitoringpv
version: 0.1.0
EOF

4. Similarly, create the values.yaml file required for the chart using the administration instance metadata.

cat > monitoringpv/values.yaml << EOF
exportpath: $(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.fss_export_path')
classname: $(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.fss_chart_name')
serverip: $(kubectl get pv jenkins-oke-pv -o jsonpath="{.spec.nfs.server}")
EOF

5. Create the target folders on the shared file system.

mkdir /u01/shared/alertmanager

mkdir /u01/shared/prometheus

mkdir /u01/shared/grafana

6. Create template files for the PV and PVC for Grafana, Prometheus Server, and Prometheus Alertmanager.

cat > monitoringpv/templates/grafanapv.yaml << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-grafana
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 10Gi
  mountOptions:
  - nosuid
  nfs:
    path: {{ .Values.exportpath }}{{"/grafana"}}
    server: "{{ .Values.serverip }}"
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "{{ .Values.classname }}"
  volumeMode: Filesystem
EOF

cat > monitoringpv/templates/grafanapvc.yaml << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-grafana
  namespace: monitoring
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: "{{ .Values.classname }}"
  volumeMode: Filesystem
  volumeName: pv-grafana
EOF

cat > monitoringpv/templates/prometheuspv.yaml << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-prometheus
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 10Gi
  mountOptions:
  - nosuid
  nfs:
    path: {{ .Values.exportpath }}{{"/prometheus"}}
    server: "{{ .Values.serverip }}"
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "{{ .Values.classname }}"
  volumeMode: Filesystem
EOF

cat > monitoringpv/templates/prometheuspvc.yaml << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-prometheus
  namespace: monitoring
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: "{{ .Values.classname }}"
  volumeMode: Filesystem
  volumeName: pv-prometheus
EOF

cat > monitoringpv/templates/alertmanagerpv.yaml << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-alertmanager
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 10Gi
  mountOptions:
  - nosuid
  nfs:
    path: {{ .Values.exportpath }}{{"/alertmanager"}}
    server: "{{ .Values.serverip }}"
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "{{ .Values.classname }}"
  volumeMode: Filesystem
EOF

cat > monitoringpv/templates/alertmanagerpvc.yaml << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-alertmanager
  namespace: monitoring
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: "{{ .Values.classname }}"
  volumeName: pv-alertmanager
EOF

7. Install the monitoringpv Helm chart you created.

helm install monitoringpv monitoringpv --create-namespace --namespace monitoring --wait

8. Verify that the output looks something like the following:

[opc@wlsoke-admin ~]$ helm install monitoringpv monitoringpv --namespace monitoring --wait
NAME: monitoringpv
LAST DEPLOYED: Wed Apr 15 16:43:41 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
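You can also confirm the claims bound correctly. Against the cluster you would run `kubectl get pvc -n monitoring --no-headers` and check that each claim reports a Bound status; the sketch below parses a sample output line the same way (the sample values are illustrative, since the real command needs cluster access).

```shell
# Sample line standing in for `kubectl get pvc -n monitoring --no-headers`
# output; awk pulls out the STATUS column, which should read "Bound".
sample='pvc-grafana Bound pv-grafana 10Gi RWX oci-fss 1m'
printf '%s\n' "$sample" | awk '{print $2}'   # prints Bound
```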

Install the Prometheus Helm chart

These instructions are a subset of those in the Prometheus Community Kubernetes Helm Charts GitHub project. Do these steps in the same terminal window where you accessed the administration instance created with Oracle WebLogic Server for Oracle Cloud Infrastructure Container Engine for Kubernetes:

1. Add the required Helm repositories.

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

helm repo add kube-state-metrics https://kubernetes.github.io/kube-state-metrics

helm repo update

At this point, you can optionally check all of Helm's available configurable options by showing Prometheus' values.yaml file.

helm show values prometheus-community/prometheus

2. Copy the needed values from the WebLogic Monitoring Exporter GitHub project to the Prometheus directory.

wget https://raw.githubusercontent.com/oracle/weblogic-monitoring-exporter/master/samples/kubernetes/end2end/prometheus/values.yaml -P prometheus

3. To customize your Prometheus deployment with your own domain information, create a custom-values.yaml file to override some of the values from the previous step.

DOMAIN_NS=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.wls_domain_namespace')

DOMAIN_NAME=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.wls_domain_uid')

DOMAIN_CLUSTER_NAME=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.wls_cluster_name')

cat > prometheus/custom-values.yaml << EOF
alertmanager:
  prefixURL: '/alertmanager'
  baseURL: http://localhost:9093/alertmanager
nodeExporter:
  hostRootfs: false
server:
  prefixURL: '/prometheus'
  baseURL: "http://localhost:9090/prometheus"
extraScrapeConfigs: |
    - job_name: '$DOMAIN_NAME'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_pod_label_weblogic_domainUID, __meta_kubernetes_pod_label_weblogic_clusterName]
        action: keep
        regex: $DOMAIN_NS;$DOMAIN_NAME;$DOMAIN_CLUSTER_NAME
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: \$1:\$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod_name
      basic_auth:
        username: --FIX ME--
        password: --FIX ME--
EOF

(Note that the `\$1:\$2` escaping keeps the shell from expanding those positional parameters inside the unquoted heredoc, so Prometheus receives the literal replacement string $1:$2.)
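The address-rewriting rule above joins the pod address with the value of the prometheus.io/port annotation. As a rough local illustration (Prometheus applies the RE2 regex itself at scrape time; the sed pattern and sample address below only mimic it):

```shell
# Mimic of the __address__ relabeling rule: Prometheus concatenates the
# source labels as "host[:port];annotation_port" and the rule rewrites
# that to "host:annotation_port".
join_address() {
  printf '%s;%s' "$1" "$2" | sed -E 's/([^:]+)(:[0-9]+)?;([0-9]+)/\1:\3/'
}

join_address 10.244.0.12 8001   # prints 10.244.0.12:8001
```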

4. Open the custom-values.yaml file and update the username and password. Use the credentials you use to log in to the administration console.

basic_auth:
        username: myadminuser
        password: myadminpwd

5. Install the Prometheus chart.

helm install --wait prometheus prometheus-community/prometheus --namespace monitoring -f prometheus/values.yaml -f prometheus/custom-values.yaml

6. Verify that the output looks something like the following:

[opc@wlsoke-admin ~]$ helm install --wait prometheus prometheus-community/prometheus --namespace monitoring -f prometheus/values.yaml -f prometheus/custom-values.yaml
NAME: prometheus
LAST DEPLOYED: Wed Apr 15 22:35:15 2021
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
. . .

7. Create an ingress resource to expose Prometheus through the internal load balancer.

cat << EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: prometheus
  namespace: monitoring
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: prometheus-server
          servicePort: 80
        path: /prometheus
EOF

8. The Prometheus dashboard should now be available at the same IP address used to access the Oracle WebLogic Server Administration Console or the Jenkins console, but on the /prometheus path (see Figure 2).

Figure 2. The Prometheus dashboard

Install the Grafana Helm chart

The instructions described here are a subset of those in the Grafana Community Kubernetes Helm Charts GitHub project. As before, do these steps within the same terminal window where you accessed the administration instance created with Oracle WebLogic Server for Oracle Cloud Infrastructure Container Engine for Kubernetes.

1. Add the Grafana charts repository.

helm repo add grafana https://grafana.github.io/helm-charts

helm repo update

2. Create a values.yaml file to customize the Grafana installation.

INGRESS_NS=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.ingress_namespace')

SERVICE_NAME=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.service_name')

INTERNAL_LB_IP=$(kubectl get svc "$SERVICE_NAME-internal" -n $INGRESS_NS -ojsonpath="{.status.loadBalancer.ingress[0].ip}")

mkdir grafana

cat > grafana/values.yaml << EOF
persistence:
  enabled: true
  existingClaim: pvc-grafana
admin:
  existingSecret: "grafana-secret"
  userKey: username
  passwordKey: password
grafana.ini:
  server:
    domain: "$INTERNAL_LB_IP"
    root_url: "%(protocol)s://%(domain)s:%(http_port)s/grafana/"
    serve_from_sub_path: true
EOF
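To see what Grafana's root_url template resolves to, here is a quick sketch with placeholder values (Grafana performs the %(...)s substitution itself from grafana.ini; the address below is illustrative, standing in for your INTERNAL_LB_IP):

```shell
# Placeholder values standing in for what Grafana reads from grafana.ini.
protocol=http
domain=10.0.10.3   # would be the internal load balancer IP in values.yaml
http_port=80

# Shell interpolation mirroring Grafana's %(protocol)s://%(domain)s:... template.
root_url="${protocol}://${domain}:${http_port}/grafana/"
echo "$root_url"   # prints http://10.0.10.3:80/grafana/
```

This is why serve_from_sub_path is enabled: Grafana must generate links under the /grafana/ prefix that the ingress routes to it.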

3. Create a grafana-secret Kubernetes secret containing the admin credentials for the Grafana server (with your own credentials, of course).

kubectl --namespace monitoring create secret generic grafana-secret --from-literal=username=yourusername --from-literal=password=yourpassword
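Kubernetes stores secret values base64-encoded, which is what you would see in `kubectl get secret grafana-secret -o yaml`. A small local illustration of that round trip (the literal password is a placeholder):

```shell
# Encode a value the way Kubernetes stores it in a Secret, then decode it
# back, as you would when inspecting `kubectl get secret ... -o yaml`.
encoded=$(printf '%s' 'yourpassword' | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"   # prints yourpassword
```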

4. Install the Grafana Helm chart.

helm install --wait grafana grafana/grafana --namespace monitoring -f grafana/values.yaml

5. Verify that the output looks something like the following:

[opc@wlsoke-admin ~]$ helm install --wait grafana grafana/grafana --namespace monitoring -f grafana/values.yaml
NAME: grafana
LAST DEPLOYED: Fri Apr 16 16:40:21 2021
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
NOTES:
. . .

6. Expose the Grafana dashboard using the ingress controller.

cat << EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: grafana
  namespace: monitoring
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: grafana
          servicePort: 80
        path: /grafana
EOF

7. The Grafana dashboard should now be available at the same IP address used to access the Oracle WebLogic Server Administration Console, the Jenkins console, and Prometheus, but on the /grafana path (see Figure 3). Log in with the credentials you configured in the secret.

Figure 3. The Grafana login screen

Create the Grafana data source

1. Once you log in to the Grafana dashboard (as shown in Figure 3), go to Configuration > Data Sources (see Figure 4) and click Add data source to go to the screen where you add the new data source (see Figure 5).

Figure 4. The Configuration menu with the Data Sources option

Figure 5. The screen where you add a new data source

2. Select Prometheus as the data source type (see Figure 6).

Figure 6. Select Prometheus as the data source type.

3. Set the URL to http://<INTERNAL_LB_IP>/prometheus and click the Save & Test button (see Figure 7).

Important note: INTERNAL_LB_IP is the same IP address you use to access Grafana, Prometheus, Jenkins, and the Oracle WebLogic Server Administration Console. You can see how to get that address in this document.

Figure 7. Set the URL for the data source; be sure to use your own IP address.

Import the Oracle WebLogic Server dashboard into Grafana

1. Log in to the Grafana dashboard. Navigate to Dashboards > Manage and click Import (see Figure 8).

Figure 8. The screen for importing a new dashboard

2. Open this JSON code file in a browser. Copy the contents into the Import via panel json section of the dashboard screen and click Load (see Figure 9).

Figure 9. This is where you'll paste the JSON code.

3. Click the Import button and verify you can see the Oracle WebLogic Server dashboard in Grafana (see Figure 10). That's it! You're done!

Figure 10. The Oracle WebLogic Server dashboard running within Grafana

Source: oracle.com
