################################################################################

Author: NetApp and Open Grid Computing
- Date: April 15 , 2008
+ Date: May 29, 2008

Table of Contents
~~~~~~~~~~~~~~~~~
@@ -60,39 +60,52 @@ Installation
The procedures described in this document have been tested with
distributions from Red Hat's Fedora Project (http://fedora.redhat.com/).

- - Install nfs-utils-1.1.1 or greater on the client
+ - Install nfs-utils-1.1.2 or greater on the client
- An NFS/RDMA mount point can only be obtained by using the mount.nfs
- command in nfs-utils-1.1.1 or greater. To see which version of mount.nfs
- you are using, type:
+ An NFS/RDMA mount point can be obtained by using the mount.nfs command in
+ nfs-utils-1.1.2 or greater (nfs-utils-1.1.1 was the first nfs-utils
+ version with support for NFS/RDMA mounts, but for various reasons we
+ recommend using nfs-utils-1.1.2 or greater). To see which version of
+ mount.nfs you are using, type:

- > /sbin/mount.nfs -V
+ $ /sbin/mount.nfs -V

- If the version is less than 1.1.1 or the command does not exist,
- then you will need to install the latest version of nfs-utils.
+ If the version is less than 1.1.2 or the command does not exist,
+ you should install the latest version of nfs-utils.
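The version check above can be scripted. A minimal sketch, assuming `mount.nfs -V` reports a dotted version string and using `sort -V` (GNU coreutils) for the comparison; the helper name is ours:

```shell
# Return success if the given nfs-utils version meets the 1.1.2
# minimum recommended above. sort -V orders version strings, so if
# the required version sorts first (or ties), the given one is new enough.
nfs_utils_version_ok() {
    required=1.1.2
    [ "$(printf '%s\n' "$required" "$1" | sort -V | head -n1)" = "$required" ]
}

# Typical use, feeding in the version printed by /sbin/mount.nfs -V:
nfs_utils_version_ok 1.1.1 || echo "too old: upgrade nfs-utils"
```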
Download the latest package from:

http://www.kernel.org/pub/linux/utils/nfs

Uncompress the package and follow the installation instructions.
- If you will not be using GSS and NFSv4, the installation process
- can be simplified by disabling these features when running configure:
+ If you will not need the idmapper and gssd executables (you do not need
+ these to create an NFS/RDMA enabled mount command), the installation
+ process can be simplified by disabling these features when running
+ configure:

- > ./configure --disable-gss --disable-nfsv4
+ $ ./configure --disable-gss --disable-nfsv4

- For more information on this see the package's README and INSTALL files.
+ To build nfs-utils you will need the tcp_wrappers package installed. For
+ more information on this see the package's README and INSTALL files.
After building the nfs-utils package, there will be a mount.nfs binary in
the utils/mount directory. This binary can be used to initiate NFS v2, v3,
- or v4 mounts. To initiate a v4 mount, the binary must be called mount.nfs4.
- The standard technique is to create a symlink called mount.nfs4 to mount.nfs.
+ or v4 mounts. To initiate a v4 mount, the binary must be called
+ mount.nfs4. The standard technique is to create a symlink called
+ mount.nfs4 to mount.nfs.
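The symlink technique above is a one-liner; a sketch, demonstrated in a scratch directory rather than the real install location (where it would be done as root):

```shell
# Create the mount.nfs4 -> mount.nfs symlink described above. A scratch
# directory stands in for the directory holding the built binary.
dir=$(mktemp -d)
touch "$dir/mount.nfs"              # stand-in for the built mount.nfs binary
ln -s mount.nfs "$dir/mount.nfs4"   # calls via mount.nfs4 now reach mount.nfs
ls -l "$dir/mount.nfs4"
```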
- NOTE: mount.nfs and therefore nfs-utils-1.1.1 or greater is only needed
+ This mount.nfs binary should be installed at /sbin/mount.nfs as follows:
+
+ $ sudo cp utils/mount/mount.nfs /sbin/mount.nfs
+
+ In this location, mount.nfs will be invoked automatically for NFS mounts
+ by the system mount command.
+
+ NOTE: mount.nfs and therefore nfs-utils-1.1.2 or greater is only needed
on the NFS client machine. You do not need this specific version of
nfs-utils on the server. Furthermore, only the mount.nfs command from
- nfs-utils-1.1.1 is needed on the client.
+ nfs-utils-1.1.2 is needed on the client.

- Install a Linux kernel with NFS/RDMA
@@ -156,8 +169,8 @@ Check RDMA and NFS Setup
this time. For example, if you are using a Mellanox Tavor/Sinai/Arbel
card:

- > modprobe ib_mthca
- > modprobe ib_ipoib
+ $ modprobe ib_mthca
+ $ modprobe ib_ipoib

If you are using InfiniBand, make sure there is a Subnet Manager (SM)
running on the network. If your IB switch has an embedded SM, you can
@@ -166,18 +179,18 @@ Check RDMA and NFS Setup

If an SM is running on your network, you should see the following:

- > cat /sys/class/infiniband/driverX/ports/1/state
+ $ cat /sys/class/infiniband/driverX/ports/1/state
4: ACTIVE

where driverX is mthca0, ipath5, ehca3, etc.

To further test the InfiniBand software stack, use IPoIB (this
assumes you have two IB hosts named host1 and host2):

- host1> ifconfig ib0 a.b.c.x
- host2> ifconfig ib0 a.b.c.y
- host1> ping a.b.c.y
- host2> ping a.b.c.x
+ host1$ ifconfig ib0 a.b.c.x
+ host2$ ifconfig ib0 a.b.c.y
+ host1$ ping a.b.c.y
+ host2$ ping a.b.c.x

For other device types, follow the appropriate procedures.
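The state check above can be wrapped in a small helper that works for any adapter name. A sketch; the sysfs path layout is as shown above, and the helper takes the state file path as an argument so the parsing can be exercised anywhere:

```shell
# Succeed if the given IB port state file (e.g.
# /sys/class/infiniband/mthca0/ports/1/state) reports ACTIVE,
# i.e. an SM has brought the port up.
ib_port_active() {
    grep -q 'ACTIVE' "$1"
}

# Example against a sample state file containing the output shown above:
state=$(mktemp)
echo '4: ACTIVE' > "$state"
ib_port_active "$state" && echo "port is up"
```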
@@ -202,55 +215,57 @@ NFS/RDMA Setup
/vol0 192.168.0.47(fsid=0,rw,async,insecure,no_root_squash)
/vol0 192.168.0.0/255.255.255.0(fsid=0,rw,async,insecure,no_root_squash)

- The IP address(es) is(are) the client's IPoIB address for an InfiniBand HCA or the
- cleint's iWARP address(es) for an RNIC.
+ The IP address(es) is(are) the client's IPoIB address for an InfiniBand
+ HCA or the client's iWARP address(es) for an RNIC.

- NOTE: The "insecure" option must be used because the NFS/RDMA client does not
- use a reserved port.
+ NOTE: The "insecure" option must be used because the NFS/RDMA client does
+ not use a reserved port.
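Since a missing "insecure" option is an easy mistake at this step, here is a sketch of a check for it (the pattern match is ours; it only inspects the option list in parentheses):

```shell
# Succeed if an /etc/exports entry carries the "insecure" option,
# which is required above because the NFS/RDMA client does not
# use a reserved port.
exports_line_ok() {
    case $1 in
        *\(*insecure*\)*) return 0 ;;
        *)                return 1 ;;
    esac
}

# The first export line from the example above passes the check:
exports_line_ok '/vol0 192.168.0.47(fsid=0,rw,async,insecure,no_root_squash)' \
    && echo "insecure option present"
```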
Each time a machine boots:

- Load and configure the RDMA drivers

For InfiniBand using a Mellanox adapter:

- > modprobe ib_mthca
- > modprobe ib_ipoib
- > ifconfig ib0 a.b.c.d
+ $ modprobe ib_mthca
+ $ modprobe ib_ipoib
+ $ ifconfig ib0 a.b.c.d

NOTE: use unique addresses for the client and server
- Start the NFS server

- If the NFS/RDMA server was built as a module (CONFIG_SUNRPC_XPRT_RDMA=m in kernel config),
- load the RDMA transport module:
+ If the NFS/RDMA server was built as a module (CONFIG_SUNRPC_XPRT_RDMA=m in
+ kernel config), load the RDMA transport module:

- > modprobe svcrdma
+ $ modprobe svcrdma

- Regardless of how the server was built (module or built-in), start the server:
+ Regardless of how the server was built (module or built-in), start the
+ server:

- > /etc/init.d/nfs start
+ $ /etc/init.d/nfs start

or

- > service nfs start
+ $ service nfs start

Instruct the server to listen on the RDMA transport:

- > echo rdma 2050 > /proc/fs/nfsd/portlist
+ $ echo rdma 2050 > /proc/fs/nfsd/portlist
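The server-side steps above can be collected into one helper. A sketch, not the document's own tooling: with DRY_RUN=1 (the default here) it only prints each command, since the real ones need root and an NFS/RDMA-capable kernel.

```shell
# Print (DRY_RUN=1, the default) or execute each server setup command.
run() {
    if [ "${DRY_RUN:-1}" = 1 ]; then echo "$*"; else "$@"; fi
}

server_rdma_setup() {
    run modprobe svcrdma    # only needed when svcrdma was built as a module
    run service nfs start
    # tell nfsd to listen for RDMA on port 2050, as described above
    run sh -c 'echo rdma 2050 > /proc/fs/nfsd/portlist'
}

# Preview the command sequence without touching the system:
server_rdma_setup
```

On a real server, set DRY_RUN=0 and run it as root.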
- On the client system

- If the NFS/RDMA client was built as a module (CONFIG_SUNRPC_XPRT_RDMA=m in kernel config),
- load the RDMA client module:
+ If the NFS/RDMA client was built as a module (CONFIG_SUNRPC_XPRT_RDMA=m in
+ kernel config), load the RDMA client module:

- > modprobe xprtrdma.ko
+ $ modprobe xprtrdma

- Regardless of how the client was built (module or built-in), issue the mount.nfs command:
+ Regardless of how the client was built (module or built-in), use this
+ command to mount the NFS/RDMA server:

- > /path/to/your/ mount.nfs <IPoIB-server-name-or-address>:/<export> /mnt -i -o rdma,port=2050
+ $ mount -o rdma,port=2050 <IPoIB-server-name-or-address>:/<export> /mnt

- To verify that the mount is using RDMA, run "cat /proc/mounts" and check the
- "proto" field for the given mount.
+ To verify that the mount is using RDMA, run "cat /proc/mounts" and check
+ the "proto" field for the given mount.
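The /proc/mounts check above can also be scripted. A sketch; the helper name is ours, and it reads /proc/mounts-format lines on stdin so a sample line can stand in for the live file:

```shell
# Succeed if the entry for the given mount point carries proto=rdma
# in its mount options (field 4 of /proc/mounts).
mount_uses_rdma() {
    awk -v mp="$1" '$2 == mp && $4 ~ /proto=rdma/ { found = 1 }
                    END { exit !found }'
}

# Example with an illustrative /proc/mounts line; on a live client,
# pipe the real file instead:  mount_uses_rdma /mnt < /proc/mounts
echo 'srv:/vol0 /mnt nfs rw,vers=3,proto=rdma,port=2050 0 0' |
    mount_uses_rdma /mnt && echo "RDMA mount confirmed"
```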
Congratulations! You're using NFS/RDMA!