@@ -22,9 +22,7 @@ In contrast to cluster filesystems like GFS, OCFS2, and GPFS that rely
 on symmetric access by all clients to shared block devices, Ceph
 separates data and metadata management into independent server
 clusters, similar to Lustre. Unlike Lustre, however, metadata and
-storage nodes run entirely as user space daemons. Storage nodes
-utilize btrfs to store data objects, leveraging its advanced features
-(checksumming, metadata replication, etc.). File data is striped
+storage nodes run entirely as user space daemons. File data is striped
 across storage nodes in large chunks to distribute workload and
 facilitate high throughputs. When storage nodes fail, data is
 re-replicated in a distributed fashion by the storage nodes themselves
@@ -164,11 +162,11 @@ More Information
 ================
 
 For more information on Ceph, see the home page at
-	http://ceph.newdream.net/
+	https://ceph.com/
 
 The Linux kernel client source tree is available at
-	git://ceph.newdream.net/git/ceph-client.git
+	https://github.com/ceph/ceph-client.git
 	git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git
 
 and the source for the full system is at
-	git://ceph.newdream.net/git/ceph.git
+	https://github.com/ceph/ceph.git