Rook - open, cloud-native, and universal distributed storage
Rook v1.1.4 is a patch release limited in scope and focusing on bug fixes. Note that release v1.1.3 was skipped.
Improvements
Ceph
- OSD config overrides were ignored for some upgraded OSDs (#4161, @leseb)
- Enable restoring a cluster after disaster recovery (#4021, @travisn)
- Enable upgrade of OSDs configured on PVCs (#3996, @sp98)
- Automatically removing OSDs requires setting `removeOSDsIfOutAndSafeToRemove` (#4116, @rohgupta); see the sketch after this list
- Rework CSI keys and secrets to use minimal privileges (#4086, @leseb)
- Expose OSD prepare pod resource limits (#4083, @leseb)
- Minimum K8s version for running OSDs on PVCs is 1.13 (#4009, @sp98)
- Add 'rgw.buckets.non-ec' to list of RGW metadataPools (#4087, @OwenTuz)
- Hide wrong error for clusterdisruption controller (#4094, @leseb)
- Multiple integration test fixes to improve CI stability (#4098, @travisn)
- Detect mount fstype more accurately in the flex driver (#4109, @leseb)
- Do not override mgr annotations (#4110, @leseb)
- Add OSDs to proper buckets in crush hierarchy with topology awareness (#4099, @mateuszlos)
- More robust removal of cluster finalizer (#4090, @travisn)
- Take activeStandby into account for the CephFileSystem disruption budget (#4075, @rohantmp)
- Update the CSI CephFS registration directory name (#4070, @Madhu-1)
- Fix incorrect Ceph CSI doc links (#4081, @binoue)
- Remove decimal places for osdMemoryTargetValue monitoring setting (#4046, @eknudtson)
- Relax pre-requisites for external cluster to allow connections to Luminous (#4025, @leseb)
- Avoid nodes getting stuck in OrchestrationStatusStarting during OSD config (#3817, @mantissahz)
- Make metrics and liveness port configurable (#4005, @Madhu-1)
- Correct system namespace for CSI driver settings during upgrade (#4040, @Madhu-1)
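The OSD removal behavior above is controlled from the CephCluster CR. A minimal sketch, assuming the setting sits at the top level of the spec (cluster name and namespace are illustrative):
```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  # Only remove OSDs that Ceph reports as "out" and that are safe to destroy.
  removeOSDsIfOutAndSafeToRemove: true
```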
Other
Rook v1.1.2 is a patch release limited in scope and focusing on bug fixes.
Improvements
Ceph
- Ceph config overrides were ignored for OSDs (#3926, @leseb)
- Resolved topologyAware error for clusterrole issue in helm chart (#3993, @rohan47)
- Fix encrypted osd startup (#3846, @mvollman)
- Various osd fixes and improvements with lvm and ceph-volume (#3955, @leseb)
- Reset OSD 'run dir' to default location (#3966, @leseb)
- Affinity on the ceph version job will use the affinity for mons (#4001, @travisn)
- Add check for csi volumes before cluster delete (#3967, @Madhu-1)
- Add Toleration and NodeAffinity to CSI provisioners (#3942, @Madhu-1)
- Use v1.2.1 cephcsi release (#3982, @Madhu-1)
- Update CSI service if already present (#3981, @Madhu-1)
- Fix umount issue when rbd-nodeplugin restarts (#3923, @Madhu-1)
- Set default filesystem to `ext4` on RBD devices (#3971, @humblec); see the StorageClass sketch after this list
- Allow default Ceph CSI images to be configurable in the build (#4018, @BlaineEXE)
- Remove finalizer even if flex is disabled (#3912, @krig)
- Disable Flexdriver in helm chart and operator-openshift by default (#3945, @Madhu-1)
- Added validation for CephBlockPool, CephFS, and CephNFS (#3480, @rajatsingh25aug)
- Set default dashboard SSL to true (#4016, @jtlayton)
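For reference, the filesystem used on RBD-backed volumes can still be chosen explicitly in the CSI StorageClass. A minimal sketch, assuming the operator runs in the `rook-ceph` namespace (the provisioner name is prefixed with that namespace, and secret-related parameters are omitted for brevity):
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
# Provisioner name is assumed to be prefixed with the operator namespace.
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  # Filesystem created on the RBD image; ext4 is the default described above.
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
```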
YugabyteDB
Rook v1.1.1 is a patch release limited in scope and focusing on bug fixes.
Improvements
Ceph
- Disable the flex driver by default in new clusters (#3724, @leseb)
- MDB controller to use namespace for checking ceph status (#3879, @ashishranjan738)
- CSI liveness container socket file (#3886, @Madhu-1)
- Add a list of unusable directory paths (#3569, @galexrt)
- Remove helm incompatible chars from values.yaml (#3863, @k0da)
- Fail nfs ganesha if CephFS is not configured (#3835, @leseb)
- Make lifecycle hook chown less verbose for OSDs (#3848, @leseb)
- Configure LVM settings for rhel8 base image (#3933, @travisn)
- Make kubelet path configurable in operator for csi (#3927, @Madhu-1)
- OSD pods should always use hostname for node selector (#3924, @travisn)
- Deactivate device from lvm when OSD pods are shutting down (#3755, @sp98)
- Add CephNFS to OLM's CSV (#3826, @rohantmp)
- Tolerations for drain detection canaries (#3813, @rohantmp)
- Enable ceph-volume debug logs (#3907, @leseb)
- Add documentation for CSI upgrades from v1.0 (#3868, @Madhu-1)
- Add a new skipUpgradeChecks property to allow forcing upgrades (#3872, @leseb); see the snippet after this list
- Include CSI image in helm chart values (#3855, @Madhu-1)
- Use HTTP port if SSL is disabled (#3810, @krig)
- Enable SSL for dashboard by default (#3626, @krig)
- Enable msgr2 properly during upgrades (#3870, @leseb)
- Nautilus v14.2.4 is the default Ceph image (#3889, @leseb)
- Ensure the ceph-csi secret exists on upgrade (#3874, @travisn)
- Disable the min PG warning if the pg_autoscaler is enabled (#3847, @leseb)
- Disable the warning for bluestore warn on legacy statfs (#3847, @leseb)
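A minimal sketch of forcing an upgrade with the new property, assuming it sits at the top level of the CephCluster spec (cluster name, namespace, and image tag are illustrative; v14.2.4 is the default image mentioned above):
```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.4
  # Skip the pre-upgrade daemon health checks (use with caution).
  skipUpgradeChecks: true
```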
NFS
v1.1.0
Major Themes
- Ceph: CSI driver is ready for production, use PVCs for mon and OSD storage, connect to an external Ceph cluster, ability to enable Ceph manager modules, and much more!
- EdgeFS: Operator CRDs graduated to v1 stable.
- YugabyteDB: This high-performance, distributed SQL database was added as a storage backend.
Action Required
If you are running a previous Rook version, please see the corresponding storage provider upgrade guide.
Notable Features
- The minimum version of Kubernetes supported by Rook changed from 1.10 to 1.11.
- Kubernetes version 1.15 is now supported.
- OwnerReferences are created with the fully qualified `apiVersion`.
- YugabyteDB is now supported by Rook with a new operator. You can deploy, configure, and manage instances of this high-performance distributed SQL database. Create an instance of the new `ybcluster.yugabytedb.rook.io` custom resource to easily deploy a YugabyteDB cluster. Check out its user guide to get started with YugabyteDB.
- Rook now supports multi-homed networking. This feature significantly improves performance and security by isolating the backend network as a separate network. Read more about multi-homed networking with Rook EdgeFS in this presentation from KubeCon China 2019.
Ceph
- The Ceph CSI driver is enabled by default and preferred over the flex driver
- The flex driver can be disabled in operator.yaml by setting ROOK_ENABLE_FLEX_DRIVER=false
- The CSI drivers can be disabled by setting ROOK_CSI_ENABLE_CEPHFS=false and ROOK_CSI_ENABLE_RBD=false
- FlexVolume plugin now supports dynamic PVC expansion.
- Rook can now connect to an external cluster. For more info about external clusters, read the design as well as the Ceph external cluster documentation.
- The device discovery daemon can be disabled in operator.yaml by setting ROOK_ENABLE_DISCOVERY_DAEMON=false
- The Rook Agent pods are now started when the CephCluster is created rather than immediately when the operator is started.
- The Cluster CRD now provides option to enable Prometheus-based monitoring with a prerequisite that Prometheus is already installed.
- During upgrades, Rook intelligently checks for each daemon state before and after upgrading. To learn more about the upgrade workflow see Ceph Upgrades
- The Rook operator now supports two new environment variables: `AGENT_TOLERATIONS` and `DISCOVER_TOLERATIONS`. Each accepts a list of tolerations for the agent and discover pods, respectively.
- Ceph daemons now run as the 'ceph' user and not 'root' anymore (monitor or OSD stores already owned by 'root' will keep running under 'root'). Containers are still initialized as the `root` user, however.
- `NodeAffinity` can be applied to the `rook-ceph-agent` DaemonSet with the `AGENT_NODE_AFFINITY` environment variable.
- `NodeAffinity` can be applied to the `rook-discover` DaemonSet with the `DISCOVER_AGENT_NODE_AFFINITY` environment variable.
- Rook can now manage PodDisruptionBudgets for the following daemons: OSD, mon, RGW, MDS. OSD budgets are dynamically managed as documented in the design. This can be enabled with the `managePodBudgets` flag in the cluster CR. When this is enabled, drains on OSDs will be blocked by default and dynamically unblocked in a safe manner, one failure domain at a time. While a failure domain is draining, it will be marked as "noout" for longer than the default DOWN/OUT interval.
- Rook can now manage OpenShift MachineDisruptionBudgets for the OSDs. MachineDisruptionBudgets for OSDs are dynamically managed as documented in the `disruptionManagement` section of the CephCluster CR. This can be enabled with the `manageMachineDisruptionBudgets` flag in the cluster CR.
- Creation of storage pools through the custom resource definitions (CRDs) now allows users to optionally specify the `deviceClass` property to restrict data distribution to the specified device class. See the Ceph Block Pool CRD documentation, and the sketch below, for an example usage.
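A minimal sketch of the `deviceClass` pool setting, assuming OSDs with an `ssd` CRUSH device class exist (pool name and namespace are illustrative):
```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: ssd-replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
  # Restrict this pool's data to OSDs carrying the "ssd" device class.
  deviceClass: ssd
```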
Mons
- Ceph monitor placement will now take failure zones into account; see the documentation for more information.
- The cluster CRD option to allow multiple monitors to be scheduled on the same node (`spec.Mon.AllowMultiplePerNode`) is now active when a cluster is first created. Previously, it was ignored when a cluster was first installed.
- Ceph monitors have initial support for running on PVC storage. See the docs on monitor settings for more detail, and the sketch after this list.
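A sketch of mons on PVC storage, shown as an excerpt from a CephCluster spec and assuming Rook v1.1 accepts a `volumeClaimTemplate` under `spec.mon` (storage class name and size are illustrative):
```yaml
spec:
  mon:
    count: 3
    allowMultiplePerNode: false
    # Each mon gets its own PVC from this template instead of using dataDirHostPath.
    volumeClaimTemplate:
      spec:
        storageClassName: gp2
        resources:
          requests:
            storage: 10Gi
```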
Mgr
- Rook now has a new config CRD `mgr` to enable Ceph manager modules; see the sketch below.
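A minimal sketch of enabling a manager module, assuming the modules are listed under `spec.mgr.modules` in the CephCluster CR (the module shown is illustrative):
```yaml
spec:
  mgr:
    modules:
      # Enable the pg_autoscaler mgr module on this cluster.
      - name: pg_autoscaler
        enabled: true
```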
OSDs
- Linear disk device can now be used for Ceph OSDs.
- Allow `metadataDevice` to be set per OSD device in the device-specific `config` section.
- Added `deviceClass` to the per-OSD device-specific `config` section for setting a custom CRUSH device class per OSD.
- Use `--db-devices` with Ceph 14.2.1 and newer clusters to explicitly set `metadataDevice` per OSD.
- Ceph OSDs can be created by using a StorageClassDeviceSet. See the docs on Storage Class Device Sets.
- The Rook-enforced minimum memory for OSD pods has been reduced from 4096M to 2048M
- Provisioning will fail if the user specifies a `metadataDevice` but that device is not used as a metadata device by Ceph.
- Rook can now be configured to read "region" and "zone" labels on Kubernetes nodes and use that information as part of the CRUSH location for the OSDs.
- Added a new property in `storageClassDeviceSets` named `portable` (see the sketch after this list):
  - If `true`, the OSDs will be allowed to move between nodes during failover. This requires a storage class that supports portability (e.g. `aws-ebs`, but not the local storage provisioner).
  - If `false`, the OSDs will be assigned to a node permanently. Rook will configure Ceph's CRUSH map to support the portability.
- Rook no longer creates an initial CRUSH map and lets Ceph do it normally.
- Ceph CRUSH tunables are no longer enforced to "firefly"; Ceph picks the right tunables for its own version. To read more about tunables, see the Ceph documentation.
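A sketch of a `storageClassDeviceSets` entry with the new `portable` property, shown as an excerpt from the CephCluster `spec.storage` section (set name, size, and storage class are illustrative):
```yaml
spec:
  storage:
    storageClassDeviceSets:
      - name: set1
        count: 3
        # Allow OSDs to follow their PVCs to other nodes on failover
        # (requires a portable storage class such as aws-ebs).
        portable: true
        volumeClaimTemplates:
          - metadata:
              name: data
            spec:
              resources:
                requests:
                  storage: 100Gi
              storageClassName: gp2
              volumeMode: Block
              accessModes:
                - ReadWriteOnce
```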
Object / RGW
- Buckets can be provisioned by defining an object bucket claim. This follows the standard Kubernetes pattern for PVCs, except now it's available for object storage as well! The admin just needs to create a storage class for object storage to make this available to consumers in the cluster. See the sketch after this list.
- RGW pods have a liveness probe enabled
- RGW instances have their own key and thus are properly reflected in the Ceph status
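A minimal sketch of an object bucket claim, assuming the admin has already created a bucket StorageClass named `rook-ceph-bucket` (both names are illustrative):
```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-bucket
spec:
  # A bucket name is generated with this prefix; the admin-provided
  # storage class points the claim at the RGW object store.
  generateBucketName: ceph-bkt
  storageClassName: rook-ceph-bucket
```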
EdgeFS
- The minimum version supported by Rook is now EdgeFS v1.2.64.
- Graduate CRDs to stable v1 #3702
- Added support for useHostLocalTime option to synchronize time in service pods to host #3627
- Added support for Multi-homing networking to provide better storage backend security isolation #3576
- Allow users to define ServiceType and NodePort via the service CRD spec #3516
- Added mgr pod liveness probes #3492
- Ability to add/remove nodes via EdgeFS cluster CRD #3462
- Support for device full name path spec i.e. /dev/disk/by-id/NAME #3374
- Rolling Upgrade support #2990
- Prevent deployment of multiple targets on the same node #3181
- Enhance S3 compatibility support for S3X pods #3169
- Add K8S_NAMESPACE env to EdgeFS containers #3097
- Improved support for ISGW dynamicFetch configuration #3070
- OLM integration #3017
- Flexible Metadata Offload page size setting support #3776
YugabyteDB
- Rook now supports YugabyteDB as a storage provider. YugabyteDB is a high-performance, cloud-native distributed SQL database which can tolerate disk, node, zone, and region failures automatically. You can find more information about YugabyteDB here
- Newly added Rook operator for YugabyteDB lets you easily create a YugabyteDB cluster.
- Please follow Rook YugabyteDB operator quickstart guide to create a simple YugabyteDB cluster.
Breaking Changes
Ceph
- The minimum version supported by Rook is Ceph Mimic v13.2.4. Before upgrading to v1.1 it is required to update the version of Ceph to at least this version.
- The CSI driver is enabled by default. Documentation has been changed significantly for block and filesystem to use the CSI driver instead of flex. While the flex driver is still supported, it is anticipated to be deprecated soon.
- The `Mon.PreferredCount` setting has been removed.
- The `imagePullSecrets` option was added to the Helm chart.
EdgeFS
- With the Rook EdgeFS operator CRDs graduated to v1, please follow the upgrade procedure to convert your CRDs and running setups.
- EdgeFS versions greater than v1.2.62 require full cluster restart.
Deprecations
Ceph
- For RGW, when deploying an object store with `object.yaml`, using `allNodes` is not supported anymore, and a transition path has been implemented in the code to migrate automatically without user intervention. If you were using `allNodes: true`, Rook will gradually replace each DaemonSet with a Deployment (a one-for-one replacement). This operation will be triggered on an update or when a new version of the operator is deployed. Once complete, it is expected that you edit your object store CR with `kubectl -n rook-ceph edit cephobjectstore.ceph.rook.io/my-store` and set `allNodes: false` and `instances` to the current number of RGW instances, as shown in the sketch below.
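A sketch of the resulting gateway settings after the migration described above, shown as an excerpt from the CephObjectStore spec (the port and instance count are illustrative):
```yaml
spec:
  gateway:
    type: s3
    port: 80
    # allNodes is deprecated; run a fixed number of RGW instances instead.
    allNodes: false
    instances: 2
```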
Rook v1.0.6 is a patch release limited in scope and focusing on bug fixes.
Improvements
Ceph
Rook v1.0.5 is a patch release limited in scope and focusing on bug fixes.
Improvements
Ceph
- Set owner references properly to avoid unexpected K8s garbage collection (#3575, @renekalff)
- Add prometheus annotations properly to mgr deployment (#3204, @k0da)
Rook v1.0.4 is a patch release limited in scope and focusing on bug fixes.
Improvements
Ceph
Rook v1.0.3 is a patch release limited in scope and focusing on bug fixes.
Improvements
Ceph
- OSD startup on SDN for error "Cannot assign requested address" (#3140, @leseb)
- Change default frontend on nautilus to beast (#2707, @leseb)
- RGW daemon updates: (#3245 #2474 #2957, @leseb)
- Remove support for AllNodes where we would deploy one rgw per node on all the nodes
- Each rgw deployed has its own cephx key
- Upgrades will automatically transition these changes to the rgw daemons
- Correct `--ms-learn-addr-from-peer=false` argument for ceph-osd (@leseb)
- When updating the CephCluster CR to run unsupported Octopus, fix operator panic (#3137, @leseb)
- Add metrics for the flexvolume driver (#1659, @nabokihms)
- Set the fully qualified apiVersion on the OwnerReferences for cleanup on OpenShift (#2944, @travisn)
- Stop enforcing crush tunables for octopus warning (#3138, @leseb)
- Apply the osd nautilus flag for upgrade (#2960, @leseb)
EdgeFS
Rook v1.0.2 is a patch release limited in scope and focusing on bug fixes.
Improvements
Ceph
- Handle false positives for triggering orchestration after hotplugging devices (#3185 #3131 #3059, @noahdesu)
- Set fsgroup only on the top level of the cephfs mount (#2254, @travisn)
- Improved diff checking when the CR changes (#3166, @d-luu)
- Retry checking the ceph version if failed (#3227, @leseb)
- Resource limits: Document the minimum limits, add limits for rbd mirror daemons, and fix the memory check (@leseb)
- Deserialization of pg dump for nautilus when removing OSDs (#3178, @rohan47)
- Start OSDs only for correct ceph cluster when there are multiple ceph clusters on the same nodes (#2696, @sp98)
- Clarify documentation for creating OSDs (#3242 #3243, @travisn)
- Update the operator base image to v14.2.1 (#3120, @rohan47)
- Add separator to scc yaml example (#3096, @umangachapagain)
- Check before copying binaries in osd pods (#3099, @kshlm)
EdgeFS
Rook v1.0.1 is a patch release limited in scope and focusing on bug fixes.
Improvements
Ceph
- Support for `metadataDevice` for configuring OSDs (#3108, @mvollman)
- Upgrades with host networking will be supported from v0.9 (#3111, @travisn)
- Upgrade documentation to enable msgr2 protocol, which requires failing over the mons to the new default port (#3104, @travisn)
- Teardown documentation updated with the new common.yaml and related changes (#3148, @galexrt)
EdgeFS
v1.0.0
Major Themes
- Ceph: Nautilus is supported, improved automation for Ceph upgrades, experimental CSI driver, NFS, and much more!
- EdgeFS: CRDs declared beta, upgrade guide, new storage protocols, a new management experience, and much more!
- Minio: Responsive operator reconciliation loop and added health checks
- NFS: Dynamic provisioning of volumes
Action Required
If you are running a previous Rook version, please see the corresponding storage provider upgrade guide.
Notable Features
- The minimum version of Kubernetes supported by Rook changed from 1.8 to 1.10.
- K8s client packages updated from version 1.11.3 to 1.14.0
- Rook Operator switches from Extensions v1beta1 to use Apps v1 API for DaemonSet and Deployment.
Ceph
- Ceph Nautilus (`v14`) is now supported by Rook and is the default version deployed by the examples.
- The Ceph-CSI driver is available in experimental mode.
- An operator restart is no longer needed for applying changes to the cluster in the following scenarios:
- When a node is added to the cluster, OSDs will be automatically configured as needed.
- When a device is attached to a storage node, OSDs will be automatically configured as needed.
- Any change to the CephCluster CR will trigger updates to the cluster.
- Upgrading the Ceph version will update all Ceph daemons (in v0.9, mds and rgw daemons were skipped)
- Ceph status is surfaced in the CephCluster CR and periodically updated by the operator (default is 60s). The interval can be configured with the `ROOK_CEPH_STATUS_CHECK_INTERVAL` env var.
- A `CephNFS` CRD will start NFS daemon(s) for exporting CephFS volumes or RGW buckets. See the NFS documentation, and the sketch after this list.
- The flex driver can be configured to properly disable SELinux relabeling and FSGroup with the settings in operator.yaml.
- The number of mons can be increased automatically when new nodes come online. See the preferredCount setting in the cluster CRD documentation.
- New Kubernetes nodes, or nodes which are no longer tainted `NoSchedule`, get added automatically to the existing Rook cluster if useAllNodes is set.
- Pod logs can be written to the filesystem on demand as of Ceph Nautilus 14.2.1 (see common issues).
- `rook-version` and `ceph-version` labels are now applied to Ceph daemon Deployments, DaemonSets, Jobs, and StatefulSets. These identify the Rook version which last modified the resource and the Ceph version which Rook has detected in the pod(s) being run by the resource.
- OSDs provisioned by `ceph-volume` now support the `metadataDevice` and `databaseSizeMB` options.
- The operator will no longer remove OSDs from specified nodes when the node is tainted with automatic Kubernetes taints. OSDs can still be removed by more explicit methods. See the "Node Settings" section of the Ceph Cluster CRD documentation for full details.
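A minimal sketch of the new `CephNFS` resource, assuming an existing pool holds the NFS recovery state (names and counts are illustrative):
```yaml
apiVersion: ceph.rook.io/v1
kind: CephNFS
metadata:
  name: my-nfs
  namespace: rook-ceph
spec:
  rados:
    # Pool and RADOS namespace where the ganesha recovery state is stored.
    pool: myfs-data0
    namespace: nfs-ns
  server:
    # Number of active ganesha servers to run.
    active: 1
```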
EdgeFS
- Declare all EdgeFS CRDs to be Beta v1. All users are recommended to use the documented migration procedure
- Automatic host validation and preparation of sysctl settings
- Support for OpenStack/SWIFT CRD
- Support for S3 bucket as DNS subdomain
- Support for Block (iSCSI) CSI Provisioner
- Support for Prometheus Dashboard and REST APIs
- Support for Management GUI with automated CRD wizards
- Support for Failure domains and zoning provisioning support
- Support for Multi-Namespace clusters with single operator instance
- Support for embedded mode and low-resource deployments with a minimum of 1GB of memory and 2 CPU cores
- Many bug fixes and usability improvements
Breaking Changes
- Rook no longer supports Kubernetes `1.8` and `1.9`.
- The build process no longer publishes the alpha, beta, and stable channels. The only channels published are `master` and `release`.
- The stability of storage providers is determined by the CRD versions rather than the overall product build, thus the channels were renamed to match this expectation.
Ceph
- Rook no longer supports running more than one monitor on the same node when `hostNetwork` and `allowMultiplePerNode` are `true`.
- The example operator and CRD yaml files have been refactored to simplify configuration. See the examples help topic for more details.
  - The common resources are now factored into `common.yaml` from `operator.yaml` and `cluster.yaml`.
    - `common.yaml`: Creates the namespace, RBAC, CRD definitions, and other common operator and cluster resources
    - `operator.yaml`: Only contains the operator deployment
    - `cluster.yaml`: Only contains the cluster CRD
  - Multiple examples of the operator and CRDs are provided for common usage of the operator and CRDs.
  - By default, a single namespace (`rook-ceph`) is configured instead of two namespaces (`rook-ceph-system` and `rook-ceph`). New and upgraded clusters can still be configured with the operator and cluster in two separate namespaces. Existing clusters will maintain their namespaces on upgrade.
- Rook will no longer create a directory-based OSD in the `dataDirHostPath` if no directories or devices are specified or if there are no disks on the host.
- Containers in `mon`, `mgr`, `mds`, `rgw`, and `rbd-mirror` pods have been removed and/or changed names.
- Config paths in `mon`, `mgr`, `mds`, and `rgw` containers are now always under `/etc/ceph` or `/var/lib/ceph`, and as close to Ceph's default path as possible regardless of the `dataDirHostPath` setting.
- The `rbd-mirror` pod labels now read `rbd-mirror` instead of `rbdmirror` for consistency.
Known Issues
Ceph
- Creating an object store from Rook v1.0 will be configured incorrectly when running Ceph Luminous or Mimic. For users who are upgrading from v0.9 it is recommended to either create the object store before upgrading, or update to Nautilus before creating an object store.
Rook v0.9.3 is a patch release limited in scope and focusing on bug fixes.
Improvements
Cassandra
- Fix the mount point for the PVs (#2443, @yanniszark)
Ceph
Rook v0.9.2 is a patch release limited in scope and focusing on bug fixes.
Improvements
Ceph
- Correctly capture and log the stderr output from child processes (#2479 #2536, @noahdesu)
- Allow disabling setting fsgroup when mounting a volume (#2254, @travisn)
- Allow configuration of SELinux relabeling (#2417, @allen13)
- Correctly set the secretKey used for cephfs mounts (#2484, @galexrt)
- Set ceph-mgr privileges to prevent the dashboard from failing on rbd mirroring settings (#2404, @travisn)
- Correctly configure the ssl certificate for the RGW service (#2435, @syncroswitch)
- Allow configuration of the dashboard port (#2412, @noahdesu)
- Allow disabling of ssl on the dashboard (#2433, @noahdesu)
General
Rook v0.9.1 is a patch release limited in scope and focusing on bug fixes.
Improvements
Ceph
- Build with arch-specific Ceph base image to fix the arm64 build (#2406, @travisn)
- Detect the correct version of the Ceph image when the crd is edited (#2353, @travisn)
- Correct the name of the `dataBlockPool` parameter for the storage class of an erasure-coded pool (#2370, @galexrt)
- Retry generating the self-signed cert if the dashboard module is not ready (#2298, @travisn)
- Set the `server_addr` on the prometheus and dashboard modules to avoid health errors (#2335, @travisn)
- Documentation: Add the missing mon count to a cluster CRD example (@multi-io) and add the stable channel to the Helm chart docs (@jbw976)
EdgeFS
v0.9.0
Major Themes
- Storage Providers for Cassandra, EdgeFS, and NFS were added
- Ceph CRDs have been declared stable V1.
- Ceph versioning is decoupled from the Rook version. Luminous and Mimic can be run in production, or Nautilus in experimental mode.
- Ceph upgrades are greatly simplified
Action Required
- Existing clusters that are running previous versions of Rook will need to be migrated to be compatible with the v0.9 operator and to begin using the new `ceph.rook.io/v1` CRD types. Please follow the instructions in the upgrade user guide to successfully migrate your existing Rook cluster to the new release.
Notable Features
- The minimum version of Kubernetes supported by Rook changed from `1.7` to `1.8`.
- K8s client-go updated from version 1.8.2 to 1.11.3
Ceph
- The Ceph CRDs are now v1. The operator will automatically convert the CRDs from v1beta1 to v1.
- Different versions of Ceph can be orchestrated by Rook. Both Luminous and Mimic are now supported, with Nautilus coming soon. The version of Ceph is specified in the cluster CRD with the cephVersion.image property. For example, to run Mimic you could use the image `ceph/ceph:v13.2.2-20181023` or any other image found on the Ceph DockerHub; see the sketch after this list.
- The `fsType` default for StorageClass examples now uses XFS to bring it in line with Ceph recommendations.
- The Rook Ceph block storage provisioner can now correctly create erasure-coded block images. See Advanced Example: Erasure Coded Block Storage for an example usage.
- A service account (`rook-ceph-mgr`) was added for the mgr daemon to grant the mgr orchestrator modules access to the K8s APIs.
- The `reclaimPolicy` parameter of the `StorageClass` definition is now supported.
- The toolbox manifest now creates a deployment based on the `rook/ceph` image instead of creating a pod on a specialized `rook/ceph-toolbox` image.
- The frequency of discovering devices on a node is reduced to 60 minutes by default, and is configurable with the setting `ROOK_DISCOVER_DEVICES_INTERVAL` in operator.yaml.
- The number of mons can be changed by updating `mon.count` in the cluster CRD.
- RBD mirroring is enabled by Rook. By setting the number of rbd mirroring workers, the daemon(s) will be started by Rook. To configure the pools or images to be mirrored, use the Rook toolbox to run the rbd mirror configuration tool.
- Object Store User creation via CRD for Ceph clusters.
- Ceph MON, OSD, MGR, MDS, and RGW deployments (or DaemonSets) will be updated/upgraded automatically with updates to the Rook operator.
- Ceph OSDs are created with the `ceph-volume` tool when configuring devices, adding support for multiple OSDs per device. See the OSD configuration settings.
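A sketch of the two settings called out above, shown as an excerpt from a v0.9 CephCluster manifest (the Mimic image tag comes from the text; other values are illustrative):
```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    # Any Ceph image from DockerHub may be used; this tag runs Mimic.
    image: ceph/ceph:v13.2.2-20181023
  mon:
    # Changing this count adds or removes mons.
    count: 3
  dataDirHostPath: /var/lib/rook
```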
NFS
- Network File System (NFS) is now supported by Rook with a new operator to deploy and manage this widely used server. NFS servers can be automatically deployed by creating an instance of the new `nfsservers.nfs.rook.io` custom resource. See the NFS server user guide to get started with NFS.
Cassandra
- Cassandra and Scylla are now supported by Rook with the rook-cassandra operator. Users can now deploy, configure, and manage Cassandra or Scylla clusters by creating an instance of the `clusters.cassandra.rook.io` custom resource. See the user guide to get started.
EdgeFS Geo-Transparent Storage
- EdgeFS is supported by a Rook operator, providing a high-performance and low-latency object storage system with geo-transparent data access via standard protocols. See the user guide to get started.
Breaking Changes
- The Rook container images are no longer published to quay.io, they are published only to Docker Hub. All manifests have referenced Docker Hub for multiple releases now, so we do not expect any directly affected users from this change.
- Rook no longer supports Kubernetes `1.7`. Users running Kubernetes `1.7` on their clusters are recommended to upgrade to Kubernetes `1.8` or higher. If you are using `kubeadm`, you can follow this guide to upgrade from Kubernetes `1.7` to `1.8`. If you are using `kops` or `kubespray` for managing your Kubernetes cluster, just follow the respective project's upgrade guide.
Ceph
- The Ceph CRDs are now v1. With the version change, the `kind` has been renamed for the following Ceph CRDs:
  - `Cluster` --> `CephCluster`
  - `Pool` --> `CephBlockPool`
  - `Filesystem` --> `CephFilesystem`
  - `ObjectStore` --> `CephObjectStore`
  - `ObjectStoreUser` --> `CephObjectStoreUser`
- The `rook-ceph-cluster` service account was renamed to `rook-ceph-osd` as this service account only applies to OSDs.
  - On upgrade from v0.8, the `rook-ceph-osd` service account must be created before starting the operator on v0.9.
  - The `serviceAccount` property has been removed from the cluster CRD.
- Ceph mons are named consistently with other daemons with the letters a, b, c, etc.
- Ceph mons are now created with Deployments instead of ReplicaSets to improve the upgrade implementation.
- Ceph mon, osd, mgr, mds, and rgw container names in pods have changed with the refactors to initialize the daemon environments via pod InitContainers and run the Ceph daemons directly from the container entrypoint.
Minio
- Minio no longer exposes a configurable port for each distributed server instance to use. This was an internal only port that should not need to be configured by the user. All connections from users and clients are expected to come in through the configurable Service instance.
Known Issues
Ceph
- Upgrades to Nautilus are not supported. Specifically, OSDs configured before the upgrade (without ceph-volume) will fail to start on Nautilus. Nautilus is not officially supported until its release, but otherwise is expected to be working in test clusters.
Rook v0.8.1 is a patch release limited in scope and focusing on bug fixes.
Improvements
- An upgrade guide has been authored for upgrading from the v0.8.0 release to this v0.8.1 patch release. Please refer to this new guide when upgrading to v0.8.1. (@travisn)
- Ceph is updated to Luminous 12.2.7. (@travisn)
- Ceph OSDs will be automatically updated by the operator when there is a change to the operator version or when the OSD configuration changes. See the OSD upgrade notes. (@travisn)
- Ceph erasure-coded pools have the `min_size` set to the number of data chunks. (@galexrt)
- Ceph OSDs will refresh their config at each startup with an init container. (@travisn)
- Ceph OSDs will respect the `placement` specified in the cluster CRD. (@rootfs)
- Ceph OSDs will use the update strategy of `recreate` to avoid resource contention at restart. (@galexrt)
- Pod names for Ceph OSDs are truncated in environments with long host names (@galexrt)
- The documentation for Rook flexvolume configuration was improved to reduce confusion and address all known scenarios and environments (@galexrt)
v0.8.0
Major Themes
- Framework and architecture to support general cloud-native storage orchestration, with new support for CockroachDB and Minio. More storage providers will be integrated in the near future.
- Ceph support has been graduated to Beta maturity
- Full project status details can be found in the project status section of the main README.
- The security model has been improved; the cluster admin now has full control over the permissions granted to Rook, and the privileges required to run the operator(s) are now much more restricted.
- OpenShift is now a supported environment.
Action Required
- Existing clusters that are running previous versions of Rook will need to be upgraded/migrated to be compatible with the `v0.8` operator and to begin using the new `rook.io/v1alpha2` and `ceph.rook.io/v1beta1` CRD types. Please follow the instructions in the upgrade user guide to successfully migrate your existing Rook cluster to the new release, as it has been updated with specific steps to help you upgrade to `v0.8`.
Notable Features
- Rook is now architected to be a general cloud-native storage orchestrator, and can now support multiple types of storage and providers beyond Ceph.
  - CockroachDB is now supported by Rook with a new operator to deploy, configure, and manage instances of this popular and resilient SQL database. Databases can be automatically deployed by creating an instance of the new `cluster.cockroachdb.rook.io` custom resource. See the CockroachDB user guide to get started with CockroachDB.
  - Minio is also supported now with an operator to deploy and manage this popular high-performance distributed object storage server. To get started with Minio using the new `objectstore.minio.rook.io` custom resource, follow the steps in the Minio user guide.
- The status of Rook is no longer published for the project as a whole. Going forward, status will be published per storage provider or API group. Full details can be found in the project status section of the README.
- Ceph support has graduated to Beta.
- The `rook/ceph` image is now based on the ceph-container project's 'daemon-base' image so that Rook no longer has to manage installs of Ceph in the image. This image is based on CentOS 7.
- One OSD will run per pod to increase the reliability and maintainability of the OSDs. No longer will restarting an OSD pod mean that all OSDs on that node will go down. See the design doc.
- Ceph tools can be run from any rook pod.
- Output from stderr will be included in error messages returned from the `exec` of external tools.
- The Rook operator no longer creates the CRDs or TPRs at runtime. Instead, those resources are provisioned during deployment via `helm` or `kubectl`.
- IPv6 environments are now supported.
- Rook CRD code generation is now working with BSD (Mac) and GNU sed.
- The Ceph dashboard can be enabled by the cluster CRD.
- `monCount` has been renamed to `count`, which has been moved into the `mon` spec. Additionally, the default if unspecified or `0` is now `3`.
- You can now toggle whether multiple Ceph mons may be placed on one node with the `allowMultiplePerNode` option (default `false`) in the `mon` spec; see the sketch after this list.
- Added `nodeSelector` to the Rook Ceph operator Helm chart.
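A sketch of the renamed mon settings, shown as an excerpt from a v0.8 cluster CRD (assuming the `ceph.rook.io/v1beta1` `Cluster` kind introduced in this release; names and paths are illustrative):
```yaml
apiVersion: ceph.rook.io/v1beta1
kind: Cluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  mon:
    # Replaces the old top-level monCount; defaults to 3 when 0 or unset.
    count: 3
    # Keep mons on separate nodes (the default).
    allowMultiplePerNode: false
  dataDirHostPath: /var/lib/rook
```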
Breaking Changes
- It is recommended to only use official releases of Rook, as unreleased versions from the master branch are subject to changes and incompatibilities that will not be supported in the official releases.
- Removed support for Kubernetes 1.6, including the legacy Third Party Resources (TPRs).
- Various paths and resources have changed to accommodate multiple storage providers:
  - Examples: The yaml files for creating a Ceph cluster can be found in `cluster/examples/kubernetes/ceph`. The yaml files that are provider-independent will still be found in the `cluster/examples/kubernetes` folder.
  - CRDs: The `apiVersion` of the Rook CRDs is now provider-specific, such as `ceph.rook.io/v1beta1` instead of `rook.io/v1alpha1`.
  - Cluster CRD: The Ceph cluster CRD has had several properties restructured for consistency with other storage provider CRDs. Rook will automatically upgrade the previous Ceph CRD versions to the new versions with all the compatible properties. When creating the cluster CRD based on the new `ceph.rook.io` apiVersion you will need to take note of the new settings structure.
  - Container images: The container images for Ceph and the toolbox are now `rook/ceph` and `rook/ceph-toolbox`. The steps in the upgrade user guide will automatically start using these new images for your cluster.
  - Namespaces: The example namespaces are now provider-specific. Instead of `rook-system` and `rook`, you will see `rook-ceph-system` and `rook-ceph`.
  - Volume plugins: The dynamic provisioner and flex driver are now based on `ceph.rook.io` instead of `rook.io`.
- Minimal privileges are configured with a new cluster role for the operator and Ceph daemons, following the new security design.
- A role binding must be defined for each cluster to be managed by the operator.
- OSD pods are started by a deployment, instead of a daemonset or a replicaset. The new OSD pods will crash loop until the old daemonset or replicasets are removed.
Removal of the API service and rookctl tool
The REST API service has been removed. All cluster configuration is now accomplished through the CRDs or with the Ceph tools in the toolbox.
The tool `rookctl` has been removed from the toolbox pod. Cluster status and configuration can be queried and changed with the Ceph tools.
Here are some sample commands to help with your transition.
| rookctl Command | Replaced by | Description |
| --- | --- | --- |
| `rookctl status` | `ceph status` | Query the status of the storage components |
| `rookctl block` | See the Block storage and direct Block config | Create, configure, mount, or unmount a block image |
| `rookctl filesystem` | See the Filesystem and direct File config | Create, configure, mount, or unmount a file system |
| `rookctl object` | See the Object storage config | Create and configure object stores and object users |
Deprecations
- Legacy CRD types in the `rook.io/v1alpha1` API group have been deprecated. The types from `rook.io/v1alpha2` should now be used instead.
- The legacy command flag `public-ipv4` in the Ceph components has been deprecated; `public-ip` should now be used instead.
- The legacy command flag `private-ipv4` in the Ceph components has been deprecated; `private-ip` should now be used instead.
Rook v0.7.1 is a patch release limited in scope and focusing on bug fixes.
Improvements
- The version of Ceph has been updated to Luminous 12.2.4 (@bassam)
- When a Ceph monitor is failed over, it will be assigned an appropriate IP address when host networking is being used (@galexrt)
- The upgrade user guide has been updated to include steps for upgrading from v0.6.x to the v0.7 releases (@travisn)
- An issue was fixed that prevented the Helm charts from being correctly published to https://charts.rook.io/ (@bassam)
- In environments where the Kubernetes cluster does not have a version set, the Helm charts will now appropriately proceed (@TimJones)
v0.7.0
Notable Features
- The Cluster CRD can now be edited/updated to add and remove storage nodes. Note that only adding/removing entire nodes is currently supported, but adding individual disks/directories will also be supported soon.
- The `rook/rook` image now uses the official Ceph packages instead of compiling from source. This ensures that Rook always ships the latest stable and supported Ceph version and reduces the developer burden for maintenance and building.
- Monitoring is now done through the Ceph MGR service for Ceph storage.
- The CRUSH root can be specified for pools with the `crushRoot` property, rather than always using the `default` root. Configuration of the CRUSH hierarchy is necessary with the `ceph osd crush` commands in the toolbox.
- A full client API has been generated for all Kubernetes resource types defined in Rook. This allows you to programmatically interact with Rook deployments using golang.
- The full list of resolved issues can be found in the 0.7 milestone page
Operator Settings
- `AGENT_TOLERATION`: A toleration can be added to the Rook agent, such as to run on the master node.
- `FLEXVOLUME_DIR_PATH`: The flex volume directory can be overridden on the Rook agent. See the sketch after this list.
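A sketch of how these settings would be supplied, assuming they are environment variables on the rook-operator Deployment (the values shown are illustrative):
```yaml
# Excerpt from the rook-operator Deployment's container spec.
env:
  # Let the agent tolerate master nodes carrying a NoSchedule taint.
  - name: AGENT_TOLERATION
    value: "NoSchedule"
  # Override the kubelet flexvolume plugin directory used by the agent.
  - name: FLEXVOLUME_DIR_PATH
    value: "/var/lib/kubelet/volumeplugins"
```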
Breaking Changes
- The `armhf` build of Rook has been removed. Ceph is not supported or tested on `armhf`. arm64 support continues.
Cluster CRD
- Removed the `versionTag` property. The container version to launch in all pods will be the same as the version of the operator container.
- Added the `cluster.rook.io` finalizer. When a cluster is deleted, the operator will clean up resources and remove the finalizer, which then allows K8s to delete the CRD.
Operator
- Removed the `ROOK_REPO_PREFIX` env var. All containers will be launched with the same image as the operator.
Deprecations
- Monitoring through rook-api is deprecated. The Ceph MGR service named `rook-ceph-mgr` (port `9283`, path `/`) should be used instead: https://rook.io/docs/rook/master/monitoring.html
Rook v1.1.7 is a patch release limited in scope and focusing on bug fixes.
Improvements
Ceph