Just a quick follow-on to the last post. If you want to use native InfiniBand support with GPFS instead of IP over IB, you need to set verbsRdma to enable, and you also need to tell GPFS which RDMA device and port to use with the verbsPorts setting. If you get the following error in /var/mmfs/gen/mmfslog, then verbsPorts was not set properly:

VERBS RDMA starting.
VERBS RDMA library libibverbs.so (version >= 1.1) loaded and initialized.
VERBS RDMA library libibverbs.so unloaded.
VERBS RDMA failed to start, no verbsPorts defined.
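
If you want to check whether the verbs settings are already defined before changing anything, a quick grep of the mmlsconfig output does the job; this is just the check I would use, any equivalent look at the config is fine:

mmlsconfig | grep -i verbs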

Here is what I used for our single-port InfiniBand HBA setup.

mmchconfig verbsPorts="ib0/1"
mmchconfig verbsRdma=enable
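
As far as I know these verbs settings only take effect when the GPFS daemon restarts, so after the mmchconfig commands I cycled GPFS. A rough sketch for the whole cluster (restart nodes in whatever order your environment allows):

mmshutdown -a
mmstartup -a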

Looking at my configuration afterward, things now show up correctly:

[root@topaz-m1 ~]# mmlsconfig
Configuration data for cluster aaa.bbb.navy.mil:
----------------------------------------------------
clusterName aaa.bbb.navy.mil
clusterId 72452797724XXXXXXXX
clusterType lc
autoload no
minReleaseLevel 3.2.1.5
dmapiFileHandleSize 32
dataStructureDump /scratch/root/GPFS_Dump
maxblocksize 512K
maxFilesToCache 7000
maxMBpS 2048
maxStatCache 28000
pagepool 512M
verbsPorts ib0/1
verbsRdma enable

File systems in cluster aaa.bbb.navy.mil:
---------------------------------------------
/dev/gpfs_a
/dev/gpfs_b
/dev/gpfs_c
/dev/gpfs_d
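
If you want to double-check that RDMA actually came up rather than falling back to IP over IB, grepping the same log for the VERBS messages is a quick sanity check; on a good start the log should show it loading and starting instead of the failure above:

grep "VERBS RDMA" /var/mmfs/gen/mmfslog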

Success. I am now running GPFS natively across InfiniBand on Linux, and performance is much better this way.
Here is our basic configuration, in case you are wondering.

40 Dell 1950 and 2950 nodes.

  • 1 head node (2950)
  • 4 I/O nodes (2950) with HBAs and HCAs
  • 32 compute nodes with HBAs
  • several misc nodes with HBAs

QLogic single-port HBAs
Red Hat Enterprise Linux Server 5.1
GPFS 3.2.1.9
DDN 9900
