Name:         rook-ceph
Namespace:    rook-ceph
Labels:
Annotations:
API Version:  ceph.rook.io/v1
Kind:         CephCluster
Metadata:
  Creation Timestamp:  2021-03-02T03:23:34Z
  Finalizers:
    cephcluster.ceph.rook.io
  Generation:  2
  Managed Fields:
    API Version:  ceph.rook.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        .:
        f:cephVersion:
          .:
          f:image:
        f:cleanupPolicy:
          .:
          f:sanitizeDisks:
            .:
            f:dataSource:
            f:iteration:
            f:method:
        f:crashCollector:
          .:
          f:disable:
        f:dashboard:
          .:
          f:enabled:
          f:ssl:
        f:dataDirHostPath:
        f:disruptionManagement:
          .:
          f:machineDisruptionBudgetNamespace:
          f:osdMaintenanceTimeout:
        f:healthCheck:
          .:
          f:daemonHealth:
          f:livenessProbe:
        f:mgr:
          .:
          f:modules:
        f:mon:
          .:
          f:count:
        f:monitoring:
          .:
          f:rulesNamespace:
        f:network:
          .:
          f:hostNetwork:
          f:provider:
        f:removeOSDsIfOutAndSafeToRemove:
        f:storage:
          .:
          f:config:
          f:useAllDevices:
        f:waitTimeoutForHealthyOSDInMinutes:
    Manager:      kubectl-create
    Operation:    Update
    Time:         2021-03-02T03:23:34Z
    API Version:  ceph.rook.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"cephcluster.ceph.rook.io":
      f:spec:
        f:external:
          .:
          f:enable:
        f:healthCheck:
          f:daemonHealth:
          f:livenessProbe:
        f:logCollector:
        f:network:
          f:hostNetwork:
          f:provider:
          f:selectors:
        f:security:
          .:
          f:kms:
        f:storage:
          f:nodes:
          f:storageClassDeviceSets:
      f:status:
        .:
        f:ceph:
        f:conditions:
        f:message:
        f:phase:
        f:state:
        f:version:
    Manager:         rook
    Operation:       Update
    Time:            2021-03-02T12:20:27Z
  Resource Version:  495715
  UID:               08c453ef-27b2-457b-8292-ec1304df7a92
Spec:
  Ceph Version:
    Image:  ceph/ceph:v15.2.8
  Cleanup Policy:
    Sanitize Disks:
      Data Source:  zero
      Iteration:    1
      Method:       quick
  Crash Collector:
    Disable:  false
  Dashboard:
    Enabled:  true
    Ssl:      true
  Data Dir Host Path:  /var/lib/rook
  Disruption Management:
    Machine Disruption Budget Namespace:  openshift-machine-api
    Osd Maintenance Timeout:              30
  External:
    Enable:  false
  Health Check:
    Daemon Health:
      Mon:
        Interval:  45s
      Osd:
        Interval:  60s
      Status:
        Interval:  60s
    Liveness Probe:
      Mgr:
      Mon:
      Osd:
  Log Collector:
  Mgr:
    Modules:
      Enabled:  true
      Name:     pg_autoscaler
  Mon:
    Count:  3
  Monitoring:
    Rules Namespace:  rook-ceph
  Network:
    Host Network:  false
    Provider:
    Selectors:
  Remove OS Ds If Out And Safe To Remove:  false
  Security:
    Kms:
  Storage:
    Config:
    Nodes:
      Config:
      Devices:
        Config:
        Name:  vdb
      Name:       kube1
      Resources:
      Config:
      Devices:
        Config:
        Name:  vdb
      Name:       kube2
      Resources:
      Config:
      Devices:
        Config:
        Name:  vdb
      Name:       kube3
      Resources:
      Config:
      Devices:
        Config:
        Name:  vdb
      Name:       kube4
      Resources:
      Config:
      Devices:
        Config:
        Name:  vdb
      Name:       kube5
      Resources:
      Config:
      Devices:
        Config:
        Name:  vdb
      Name:       kube6
      Resources:
    Storage Class Device Sets:
    Use All Devices:  false
  Wait Timeout For Healthy OSD In Minutes:  10
Status:
  Ceph:
    Capacity:
    Details:
      MON_DISK_LOW:
        Message:   mons b,d are low on available space
        Severity:  HEALTH_WARN
      PG_AVAILABILITY:
        Message:   Reduced data availability: 1 pg inactive
        Severity:  HEALTH_WARN
      TOO_FEW_OSDS:
        Message:   OSD count 0 < osd_pool_default_size 3
        Severity:  HEALTH_WARN
    Health:        HEALTH_WARN
    Last Checked:  2021-03-03T17:55:13Z
  Conditions:
    Last Heartbeat Time:   2021-03-03T17:08:37Z
    Last Transition Time:  2021-03-02T03:23:39Z
    Message:               Cluster progression is completed
    Reason:                ProgressingCompleted
    Status:                False
    Type:                  Progressing
    Last Heartbeat Time:   2021-03-02T03:33:52Z
    Last Transition Time:  2021-03-02T03:33:52Z
    Message:               Failed to create cluster
    Reason:                ClusterFailure
    Status:                True
    Type:                  Failure
    Last Heartbeat Time:   2021-03-02T12:20:26Z
    Last Transition Time:  2021-03-02T12:20:26Z
    Message:               Cluster created successfully
    Reason:                ClusterCreated
    Status:                True
    Type:                  Ready
  Message:  Cluster created successfully
  Phase:    Ready
  State:    Created
  Version:
    Image:    ceph/ceph:v15.2.8
    Version:  15.2.8-0
Events:
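Output in this form is what `kubectl -n rook-ceph describe cephcluster rook-ceph` produces. For reference, a CephCluster manifest consistent with the Spec block above would look roughly like the sketch below. All values are copied from the describe output, but this is a reconstruction, not necessarily the exact manifest that was applied (defaulted fields such as disruptionManagement or healthCheck need not appear in the original YAML):

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v15.2.8
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
  dashboard:
    enabled: true
    ssl: true
  mgr:
    modules:
      - name: pg_autoscaler
        enabled: true
  monitoring:
    rulesNamespace: rook-ceph
  network:
    hostNetwork: false
  crashCollector:
    disable: false
  cleanupPolicy:
    sanitizeDisks:
      method: quick
      dataSource: zero
      iteration: 1
  removeOSDsIfOutAndSafeToRemove: false
  waitTimeoutForHealthyOSDInMinutes: 10
  storage:
    useAllDevices: false
    nodes:
      - name: kube1
        devices:
          - name: vdb
      - name: kube2
        devices:
          - name: vdb
      - name: kube3
        devices:
          - name: vdb
      - name: kube4
        devices:
          - name: vdb
      - name: kube5
        devices:
          - name: vdb
      - name: kube6
        devices:
          - name: vdb

Note that the Status block still reports HEALTH_WARN with "OSD count 0", i.e. the OSDs on the vdb devices had not yet been created when this snapshot was taken, even though the cluster itself reports Phase Ready.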