OpenShift Logging
This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.
A guide to shipping logs and metrics on OpenShift
Prerequisites
- OpenShift CLI (oc)
- Rights to install operators on the cluster
Set up OpenShift Logging
This guide sets up centralized logging on OpenShift using the OSS edition of Elasticsearch. It largely follows the process outlined in the OpenShift documentation. Retention and storage considerations are covered in Red Hat's primary source documentation.
This setup is primarily concerned with simplicity and basic log searching. Consequently, it is not suitable for long-term retention or advanced visualization of logs. For more advanced observability setups, look at Forwarding Logs to Third Party Systems.
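For reference, forwarding is configured with a ClusterLogForwarder resource. The sketch below is illustrative only: the output URL and secret name are placeholders, and the output type and pipeline would need to match your target system.

```bash
# Illustrative sketch only: forwards application logs to an external
# Elasticsearch endpoint. The URL and secret name are placeholders.
oc create -f - <<EOF
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    - name: external-es
      type: elasticsearch
      url: https://elasticsearch.example.com:9200
      secret:
        name: external-es-credentials
  pipelines:
    - name: forward-app-logs
      inputRefs:
        - application
      outputRefs:
        - external-es
EOF
```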
Create a namespace for the OpenShift Elasticsearch Operator.
This is necessary to avoid potential conflicts with community operators that could send similarly named metrics/logs into the stack.
```bash
oc create -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operators-redhat
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true"
EOF
```

Create a namespace for the OpenShift Logging Operator
```bash
oc create -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-logging
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true"
EOF
```

Install the OpenShift Elasticsearch Operator by creating the following objects:
Operator Group for OpenShift Elasticsearch Operator
```bash
oc create -f - <<EOF
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-operators-redhat
  namespace: openshift-operators-redhat
spec: {}
EOF
```

Subscription object to subscribe a Namespace to the OpenShift Elasticsearch Operator
oc create -f - <<EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: "elasticsearch-operator" namespace: "openshift-operators-redhat" spec: channel: "stable" installPlanApproval: "Automatic" source: "redhat-operators" sourceNamespace: "openshift-marketplace" name: "elasticsearch-operator" EOFVerify Operator Installation
```bash
oc get csv --all-namespaces
```

Example Output
```
NAMESPACE                           NAME                                            DISPLAY                            VERSION                 REPLACES   PHASE
default                             elasticsearch-operator.5.0.0-202007012112.p0   OpenShift Elasticsearch Operator   5.0.0-202007012112.p0              Succeeded
kube-node-lease                     elasticsearch-operator.5.0.0-202007012112.p0   OpenShift Elasticsearch Operator   5.0.0-202007012112.p0              Succeeded
kube-public                         elasticsearch-operator.5.0.0-202007012112.p0   OpenShift Elasticsearch Operator   5.0.0-202007012112.p0              Succeeded
kube-system                         elasticsearch-operator.5.0.0-202007012112.p0   OpenShift Elasticsearch Operator   5.0.0-202007012112.p0              Succeeded
openshift-apiserver-operator        elasticsearch-operator.5.0.0-202007012112.p0   OpenShift Elasticsearch Operator   5.0.0-202007012112.p0              Succeeded
openshift-apiserver                 elasticsearch-operator.5.0.0-202007012112.p0   OpenShift Elasticsearch Operator   5.0.0-202007012112.p0              Succeeded
openshift-authentication-operator   elasticsearch-operator.5.0.0-202007012112.p0   OpenShift Elasticsearch Operator   5.0.0-202007012112.p0              Succeeded
openshift-authentication            elasticsearch-operator.5.0.0-202007012112.p0   OpenShift Elasticsearch Operator   5.0.0-202007012112.p0              Succeeded
...
```
Install the Red Hat OpenShift Logging Operator by creating the following objects:
The Cluster Logging OperatorGroup
```bash
oc create -f - <<EOF
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  targetNamespaces:
    - openshift-logging
EOF
```

Subscription object to subscribe a Namespace to the Red Hat OpenShift Logging Operator
```bash
oc create -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: "stable"
  name: cluster-logging
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
```

Verify the Operator installation; the PHASE should be Succeeded
```bash
oc get csv -n openshift-logging
```

Example Output
```
NAME                              DISPLAY                            VERSION    REPLACES   PHASE
cluster-logging.5.0.5-11          Red Hat OpenShift Logging          5.0.5-11              Succeeded
elasticsearch-operator.5.0.5-11   OpenShift Elasticsearch Operator   5.0.5-11              Succeeded
```

Create an OpenShift Logging instance:
NOTE: For the storageClassName below, you will need to adjust for the platform on which you're running OpenShift. The managed-premium class listed below is for Azure Red Hat OpenShift (ARO). You can verify your available storage classes with oc get storageclasses.

```bash
oc create -f - <<EOF
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    retentionPolicy:
      application:
        maxAge: 1d
      infra:
        maxAge: 7d
      audit:
        maxAge: 7d
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: "managed-premium"
        size: 200G
      resources:
        requests:
          memory: "8Gi"
      proxy:
        resources:
          limits:
            memory: 256Mi
          requests:
            memory: 256Mi
      redundancyPolicy: "SingleRedundancy"
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
  curation:
    type: "curator"
    curator:
      schedule: "30 3 * * *"
  collection:
    logs:
      type: "fluentd"
      fluentd: {}
EOF
```

It will take a few minutes for everything to start up. You can monitor this progress by watching the pods.
```bash
watch oc get pods -n openshift-logging
```

Your logging instance is now configured and receiving logs. To view them, you will need to log in to your Kibana instance and create the appropriate index patterns. For more information on index patterns, see the Kibana documentation.
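To find the Kibana URL, you can read the route that the logging Operator creates in the openshift-logging namespace. The route name kibana below is the usual default, but verify it on your cluster:

```bash
# Print the Kibana hostname from its route.
# The route name "kibana" is assumed here; confirm with
# `oc get routes -n openshift-logging` if it differs.
oc get route kibana -n openshift-logging -o jsonpath='{.spec.host}'
```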
NOTE: The following restrictions and notes apply to index patterns:
- All users can view the app- logs for namespaces they have access to
- Only cluster-admins can view the infra- and audit- logs
- For best accuracy, use the @timestamp field for determining chronology
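As a rough guide (index names can vary with the logging version in use), the index patterns created in Kibana for this stack typically look like the following, each using @timestamp as the time filter field:

```
app-*     # application logs; visible to users with access to the namespace
infra-*   # infrastructure logs; cluster-admins only
audit-*   # audit logs; cluster-admins only
```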