-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: Red Hat Ceph Storage 3.0 security and bug fix update
Advisory ID:       RHSA-2018:2177-01
Product:           Red Hat Ceph Storage
Advisory URL:      https://access.redhat.com/errata/RHSA-2018:2177
Issue date:        2018-07-11
CVE Names:         CVE-2018-1128 CVE-2018-1129 CVE-2018-10861 
====================================================================
1. Summary:

An update for ceph is now available for Red Hat Ceph Storage 3.0 for Red
Hat Enterprise Linux 7.

Red Hat Product Security has rated this update as having a security impact
of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which
gives a detailed severity rating, is available for each vulnerability from
the CVE link(s) in the References section.

2. Relevant releases/architectures:

Red Hat Ceph Storage 3.0 MON - x86_64
Red Hat Ceph Storage 3.0 OSD - x86_64
Red Hat Ceph Storage 3.0 Tools - noarch, x86_64

3. Description:

Red Hat Ceph Storage is a scalable, open, software-defined storage platform
that combines the most stable version of the Ceph storage system with a
Ceph management platform, deployment utilities, and support services.

Security Fix(es):

* ceph: cephx protocol is vulnerable to replay attack (CVE-2018-1128)

* ceph: cephx uses weak signatures (CVE-2018-1129)

* ceph: ceph-mon does not perform authorization on OSD pool ops
(CVE-2018-10861)

For more details about the security issue(s), including the impact, a CVSS
score, and other related information, refer to the CVE page(s) listed in
the References section.

Bug Fix(es):

* Previously, Ceph RADOS Gateway (RGW) instances in zones configured for
multi-site replication would crash if configured to disable sync
("rgw_run_sync_thread = false"). Therefor, multi-site replication
environments could not start dedicated non-replication RGW instances. With
this update, the "rgw_run_sync_thread" option can be used to configure RGW
instances that will not participate in replication even if their zone is
replicated. (BZ#1552202)
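
For example, a dedicated non-replicating RGW instance in a replicated
zone might carry a setting like the following in its ceph.conf section
(the instance name here is illustrative):

  [client.rgw.gateway-node1]
  rgw_run_sync_thread = false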

* Previously, when increasing "max_mds" from "1" to "2", if the Metadata
Server (MDS) daemon was in the starting/resolve state for a long period of
time, then restarting the MDS daemon led to an assertion failure. This
caused the Ceph File System (CephFS) to be in a degraded state. With this
update, increasing "max_mds" no longer causes CephFS to be in a degraded
state. (BZ#1566016)

* Previously, the transition to containerized Ceph left some "ceph-disk"
unit files in place. The files were harmless, but appeared as failed
units. With this
update, executing the
"switch-from-non-containerized-to-containerized-ceph-daemons.yml" playbook
disables the "ceph-disk" unit files too. (BZ#1577846)

* Previously, the "entries_behind_master" metric output from the "rbd
mirror image status" CLI tool did not always reduce to zero under synthetic
workloads. This could cause a false alarm that there was an issue with RBD
mirroring replication. With this update, the metric is now updated
periodically without the need for an explicit I/O flush in the workload.
(BZ#1578509)
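
The metric in question is reported per mirrored image by the same CLI
tool, for example (pool and image names are illustrative):

  rbd mirror image status data/image1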

* Previously, when using the "pool create" command with
"expected_num_objects", placement group (PG) directories were not
pre-created at pool creation time as expected, resulting in performance
drops when filestore splitting occurred. With this update, the
"expected_num_objects" parameter is now passed through to filestore
correctly, and PG directories for the expected number of objects are
pre-created at pool creation time. (BZ#1579039)
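
The parameter is supplied when creating the pool, for example (pool name,
PG counts, CRUSH rule name, and object count are illustrative):

  ceph osd pool create mypool 128 128 replicated replicated_rule 1000000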

* Previously, internal RADOS Gateway (RGW) multi-site sync logic behaved
incorrectly when attempting to sync containers with S3 object versioning
enabled. Objects in versioning-enabled containers would fail to sync in
some scenarios—for example, when using "s3cmd sync" to mirror a filesystem
directory. With this update, RGW multi-site replication logic has been
corrected for the known failure cases. (BZ#1580497)
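
A typical client-side workload that exercised this code path is an s3cmd
mirror of a local directory into a versioning-enabled bucket, for example
(directory and bucket names are illustrative):

  s3cmd sync ./backup/ s3://versioned-bucket/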

* Previously, when restarting OSD daemons, the "ceph-ansible" restart
script iterated over all the daemons by listing the units with "systemctl
list-units". Under certain circumstances, the output of that command
contained extra spaces, which caused parsing to fail and the restart to
abort. With this update, the underlying code handles the extra spaces.
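
The listing that the restart script parses can be reproduced manually with
a command along these lines (the script's exact invocation may differ):

  systemctl list-units 'ceph-osd@*' --type=service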

* Previously, the Ceph RADOS Gateway (RGW) server treated negative
byte-range object requests ("bytes=0--1") as invalid. Applications that
expected the AWS behavior for negative or other invalid range requests saw
unexpected errors and could fail. With this update, a new option
"rgw_ignore_get_invalid_range" has been added to RGW. When
"rgw_ignore_get_invalid_range" is set to "true", the RGW behavior for
invalid range requests is backwards compatible with AWS.
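
For AWS-compatible handling of such requests, the option is set in the RGW
section of ceph.conf, for example (the instance name is illustrative):

  [client.rgw.gateway-node1]
  rgw_ignore_get_invalid_range = true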

4. Solution:

For details on how to apply this update, which includes the changes
described in this advisory, refer to:

https://access.redhat.com/articles/11258
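
On a Red Hat Enterprise Linux 7 host the updated packages can be applied
with yum, for example; note that the article above and the Red Hat Ceph
Storage documentation describe the supported order in which daemons should
be updated and restarted:

  yum update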

5. Bugs fixed (https://bugzilla.redhat.com/):

1532645 - cephmetrics-collectd fails to start due to SELinux errors
1534657 - [cephmetrics] Installation of cephmetrics on ceph3.0 fails
1549004 - [Ceph-ansible] Failure on TASK [igw_purge | purging the gateway configuration]
1552202 - RGW multi-site segfault received when 'rgw_run_sync_thread = False' is set in ceph.conf
1552509 - Ubuntu ansible version should be same as RHEL ansible version
1566016 - [cephfs]: MDS asserted while in Starting/resolve state
1569694 - RGW:  when using bucket request payer with boto3, NotImplemented error is seen.
1570597 - [CephFS]: MDS assert, ceph-12.2.1/src/mds/MDCache.cc: 5080: FAILED assert(isolated_inodes.empty())
1575024 - prevent ESTALE errors on clean shutdown in nfs-ganesha
1575866 - CVE-2018-1128 ceph: cephx protocol is vulnerable to replay attack
1576057 - CVE-2018-1129 ceph: cephx uses weak signatures
1576861 - CephFS mount ceph-fuse hang during unmount after eviction
1576908 - [CephFS]: Client IO's hung Fuse service asserted with error FAILED assert(oset.objects.empty()
1577846 - After latest environment update all ceph-disk@dev-sdXX.service are in failed state
1578509 - entries_behind_master metric output but "rbd mirror image status " never reduces to zero.
1578572 - [RFE]  Ceph-Ansible main.yml places restart scripts in /tmp  - causing failures running restart scripts
1579039 - Pool create cmd's expected_num_objects is not properly interpreted
1581403 - OSDs are restarted twice during rolling update
1581573 - ceph-radosgw: disable NSS PKI db when SSL is disabled
1585748 - objects in cache never refresh after rgw_cache_expiry_interval
1593308 - CVE-2018-10861 ceph: ceph-mon does not perform authorization on OSD pool ops
1594974 - [ceph-ansible 3.0.33-2redhat1] mgr repo not setup with ceph-ansible when using rhcs downstream settings
1598185 - [ceph-ansible] - RHEL and Ubuntu CDN based installation failing trying to enable/include mon repository

6. Package List:

Red Hat Ceph Storage 3.0 MON:

Source:
ceph-12.2.4-30.el7cp.src.rpm
cephmetrics-1.0.1-1.el7cp.src.rpm

x86_64:
ceph-base-12.2.4-30.el7cp.x86_64.rpm
ceph-common-12.2.4-30.el7cp.x86_64.rpm
ceph-debuginfo-12.2.4-30.el7cp.x86_64.rpm
ceph-mgr-12.2.4-30.el7cp.x86_64.rpm
ceph-mon-12.2.4-30.el7cp.x86_64.rpm
ceph-selinux-12.2.4-30.el7cp.x86_64.rpm
ceph-test-12.2.4-30.el7cp.x86_64.rpm
cephmetrics-collectors-1.0.1-1.el7cp.x86_64.rpm
libcephfs-devel-12.2.4-30.el7cp.x86_64.rpm
libcephfs2-12.2.4-30.el7cp.x86_64.rpm
librados-devel-12.2.4-30.el7cp.x86_64.rpm
librados2-12.2.4-30.el7cp.x86_64.rpm
libradosstriper1-12.2.4-30.el7cp.x86_64.rpm
librbd-devel-12.2.4-30.el7cp.x86_64.rpm
librbd1-12.2.4-30.el7cp.x86_64.rpm
librgw-devel-12.2.4-30.el7cp.x86_64.rpm
librgw2-12.2.4-30.el7cp.x86_64.rpm
python-cephfs-12.2.4-30.el7cp.x86_64.rpm
python-rados-12.2.4-30.el7cp.x86_64.rpm
python-rbd-12.2.4-30.el7cp.x86_64.rpm
python-rgw-12.2.4-30.el7cp.x86_64.rpm

Red Hat Ceph Storage 3.0 OSD:

Source:
ceph-12.2.4-30.el7cp.src.rpm
cephmetrics-1.0.1-1.el7cp.src.rpm

x86_64:
ceph-base-12.2.4-30.el7cp.x86_64.rpm
ceph-common-12.2.4-30.el7cp.x86_64.rpm
ceph-debuginfo-12.2.4-30.el7cp.x86_64.rpm
ceph-osd-12.2.4-30.el7cp.x86_64.rpm
ceph-selinux-12.2.4-30.el7cp.x86_64.rpm
ceph-test-12.2.4-30.el7cp.x86_64.rpm
cephmetrics-collectors-1.0.1-1.el7cp.x86_64.rpm
libcephfs-devel-12.2.4-30.el7cp.x86_64.rpm
libcephfs2-12.2.4-30.el7cp.x86_64.rpm
librados-devel-12.2.4-30.el7cp.x86_64.rpm
librados2-12.2.4-30.el7cp.x86_64.rpm
libradosstriper1-12.2.4-30.el7cp.x86_64.rpm
librbd-devel-12.2.4-30.el7cp.x86_64.rpm
librbd1-12.2.4-30.el7cp.x86_64.rpm
librgw-devel-12.2.4-30.el7cp.x86_64.rpm
librgw2-12.2.4-30.el7cp.x86_64.rpm
python-cephfs-12.2.4-30.el7cp.x86_64.rpm
python-rados-12.2.4-30.el7cp.x86_64.rpm
python-rbd-12.2.4-30.el7cp.x86_64.rpm
python-rgw-12.2.4-30.el7cp.x86_64.rpm

Red Hat Ceph Storage 3.0 Tools:

Source:
ceph-12.2.4-30.el7cp.src.rpm
ceph-ansible-3.0.39-1.el7cp.src.rpm
cephmetrics-1.0.1-1.el7cp.src.rpm
nfs-ganesha-2.5.5-6.el7cp.src.rpm

noarch:
ceph-ansible-3.0.39-1.el7cp.noarch.rpm

x86_64:
ceph-base-12.2.4-30.el7cp.x86_64.rpm
ceph-common-12.2.4-30.el7cp.x86_64.rpm
ceph-debuginfo-12.2.4-30.el7cp.x86_64.rpm
ceph-fuse-12.2.4-30.el7cp.x86_64.rpm
ceph-mds-12.2.4-30.el7cp.x86_64.rpm
ceph-radosgw-12.2.4-30.el7cp.x86_64.rpm
ceph-selinux-12.2.4-30.el7cp.x86_64.rpm
cephmetrics-1.0.1-1.el7cp.x86_64.rpm
cephmetrics-ansible-1.0.1-1.el7cp.x86_64.rpm
cephmetrics-collectors-1.0.1-1.el7cp.x86_64.rpm
cephmetrics-grafana-plugins-1.0.1-1.el7cp.x86_64.rpm
libcephfs-devel-12.2.4-30.el7cp.x86_64.rpm
libcephfs2-12.2.4-30.el7cp.x86_64.rpm
librados-devel-12.2.4-30.el7cp.x86_64.rpm
librados2-12.2.4-30.el7cp.x86_64.rpm
libradosstriper1-12.2.4-30.el7cp.x86_64.rpm
librbd-devel-12.2.4-30.el7cp.x86_64.rpm
librbd1-12.2.4-30.el7cp.x86_64.rpm
librgw-devel-12.2.4-30.el7cp.x86_64.rpm
librgw2-12.2.4-30.el7cp.x86_64.rpm
nfs-ganesha-2.5.5-6.el7cp.x86_64.rpm
nfs-ganesha-ceph-2.5.5-6.el7cp.x86_64.rpm
nfs-ganesha-debuginfo-2.5.5-6.el7cp.x86_64.rpm
nfs-ganesha-rgw-2.5.5-6.el7cp.x86_64.rpm
python-cephfs-12.2.4-30.el7cp.x86_64.rpm
python-rados-12.2.4-30.el7cp.x86_64.rpm
python-rbd-12.2.4-30.el7cp.x86_64.rpm
python-rgw-12.2.4-30.el7cp.x86_64.rpm
rbd-mirror-12.2.4-30.el7cp.x86_64.rpm

These packages are GPG signed by Red Hat for security.  Our key and
details on how to verify the signature are available from
https://access.redhat.com/security/team/key/
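
Verification of a downloaded package can be performed with rpm after
importing the Red Hat release key, for example (key path as typically
found on Red Hat Enterprise Linux 7; package name from the list above):

  rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
  rpm -K ceph-common-12.2.4-30.el7cp.x86_64.rpm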

7. References:

https://access.redhat.com/security/cve/CVE-2018-1128
https://access.redhat.com/security/cve/CVE-2018-1129
https://access.redhat.com/security/cve/CVE-2018-10861
https://access.redhat.com/security/updates/classification/#moderate

8. Contact:

The Red Hat security contact is secalert@redhat.com. More contact
details at https://access.redhat.com/security/team/contact/

Copyright 2018 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBW0ZIfdzjgjWX9erEAQgZaA//ZfvHqeatevIDC2vAR8R5xubsydScmrv8
l7rrj3KA/WfuWf5bB5WbWKQRgKXMBr3gnalJnTaxaGxCvSiJODvtmavnp1qRmx2r
1l2WmxsJ6sVD+FeQ8bA5ubSrTkXo23HHTAoutZmXSTDg68f+iMlXs96j9dXsL3wE
pVeitOdbyhzzbY7jGcqBgNKyvPDR6DcAbOpbVwxzAur5XqNwpZ9ghF/oJ4RMHXCB
yGNUpayER+l2vFTG5hYIHWvRJaYh+iwITWHpknGKVVN+XZ6Ru+tFQt6mRpvZ4sr3
8MGqj3Egc8amXwFwU37NkkplW+/0NGdqp/mCAZpGQS0++o8ZmioKs7ZG6iPf9YcU
B8LNqGHawUOtwy71dwhzB0Mb4J4//ZF1Drqp0d1Evc/f5LcudOAtigwYGMuJZLLg
1IY9M8n2LcSYXLreh2K8/9ghkhtZMeYl4hSgxx3aRlNk63gOYVGDorWO7Ap6VL0l
7KDRLPu7jytbUXZG0PtajUvlWds7GZfgInxoJ3Chh+w407NDyjIqoehusq32+66q
itV68wmqMJANUDgCtwW8d6XOXQndolcV87pA2wP51NreInxZOvVFMq1NVrTqcFSV
eXGMMTUK508z16N5PZ1TkSX6KIExhJSFsjrqUSP+2y12YQk66ErQEzaWZNKmzJQs
ZVNkXMqi768=
=h1yq
-----END PGP SIGNATURE-----

--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://www.redhat.com/mailman/listinfo/rhsa-announce
