You are currently browsing the category archive for the ‘Computers’ category.

So I paid for Cineveo and installed on my system.

Cineveo is a movie-viewing app for the Oculus Rift and other VR headsets. You currently get six different scenarios, including a movie theater, an outdoor drive-in theater, a water theme, a space theme, and a dark void. Actually purchasing and downloading Cineveo was a bit weird. I paid $10 for the software via PayPal, then received a notice saying I would get an email within the next day or so with a download link. So I had to wait a day. The email and link did show up, but it was just a strange transaction.

I guess you can also get it off of Steam. I just don't trust Steam to get a VR app working even if it killed them, so I'm avoiding Steam like a plague ship. Which, as far as I can tell, they deserve.

My first impression of Cineveo was "Holy Cow!! This is Awesome!" Once I figured out the head-tracking interface and got a movie running, the first thing I loaded was the outdoor drive-in theater. It was bloody amazing: sitting in an old convertible, looking down at the lights of a city, with the show playing on the big screen. My first impression was awe. It was really neat being in such an immersive environment. Then I tried out the 4D theater. Again, an amazingly immersive experience.

But. There is always a but isn’t there?

After looking around and drooling like a fool for a while, I sat back to watch a show. Unfortunately, with the current version of the Rift the movie quality was not good. It's actually quite poor. The screen-door effect was very pronounced, bad enough to be distracting. This is a limitation of the Rift DK2; I'm hoping the commercially available version won't have this problem. So, to break things down.

The Good

  • The interface is neat once you figure it out.
  • The themed environments are awesome.
  • The feeling of being immersed was really good.
  • Fairly inexpensive. Only $10. Worth the price.

The not so good

  • I couldn't figure out how to play DVDs or Blu-rays, only local MP4-style movies. Great if you've torrented your movies. Not so good if you've purchased them.
  • No Netflix, Hulu, etc support.
  • Purchasing the software was a bit weird.
  • The movie quality on the Rift isn’t worth watching at this time. Better to just use the normal display.

Final Thoughts: I'm glad I bought the software, though I probably won't use it much for now because of the poor movie quality. Here is a picture looking back at the drive-in theater.

A drive-in theater


Well,

I've tried for two days now, and my educated opinion of Steam's Oculus Rift support is: Excrement! Large stinking piles of excrement. Heck, it's so bad it makes excrement seem good.

So. On to other things that actually do work.

A quick rundown of why I'm so disgusted:

  1. Install Steam.
  2. Install SteamVR.
  3. Download several "VR" aware games.
  4. Modify a bunch of hidden options according to various web pages.
  5. Fire up the games. Nothing goes to the Oculus.
  6. Try a bunch of settings. Search the internet. Again, nothing goes to the Oculus.
  7. On every reboot, F$%#@ Steam starts up and demands to connect to my account.
  8. Try the games again. Again, nothing.
  9. Uninstall the whole fricken thing.
  10. Move on to things that work.

So,

In preparation for the Oculus Rift DK2 showing up, I needed to build a computer powerful enough to drive it. I also wanted to build a system that would be somewhat future-proofed. As such, I built the baddest system I could with the money I had budgeted.

  • Motherboard – ASRock Z97 Extreme 6 – Tom's Hardware recommended
  • CPU – Intel i7-4790K – Tom's Hardware recommended
  • CPU Cooler – Corsair H50 Hydro Series – Went mid-level; I plan on upgrading to a combined CPU/GPU cooling setup in the future.
  • Memory – 32 GB G.SKILL Ares (4x8GB) DDR3 2400MHz (Best speed for the money; didn't want memory to be a bottleneck)
  • Boot Drive – Samsung SM951 M.2 SSD 256GB (Fastest SSD without going crazy expensive)
  • Storage Drive – Western Digital 2TB Green 6Gb/s SATA
  • Case – Arctic Nine Hundred – Decent case with ports for water cooling and 2 front USB 3 connections
  • Video Card – EVGA GeForce GTX 980 Ti – This is where I spared no expense and went for the best, within reason.
  • Monitor – Acer 27″ 2K – Looking for a good monitor without going too far; I'll be using the Rift mostly (I hope)

 

Some lessons learned.

  1. The case fan had to be removed, the radiator installed, and the fan put back on.
  2. UPDATE THE FLASH IMMEDIATELY!! The ASRock motherboard will update the flash itself. A very nice option.
    1. Plug in a network cable
    2. Go into the UEFI (F2 on boot) and find the option under tools for Internet flash update
    3. Select go
    4. Wait 10 minutes.
    5. Done
  3. The M.2 SSD will NOT boot without a newer flash (2.30) installed.  It will install, just not boot.
  4. To create a USB install for Windows 8.1 Pro:
    1. The full install takes about 10 minutes. Seriously. That’s it using the USB 3 ports.
    2. Go HERE and select “Create Media”
      1. Full link http://windows.microsoft.com/en-us/windows-8/create-reset-refresh-media
    3. You will need your Windows 8.1 Pro key to install. No surprise there.
  5. There is a weird bracket on the back of the central drive bay. It must be removed for the GTX 980 Ti video card to fit.

Boot time after everything is installed and working? About 15 Seconds.

Well, the Oculus Rift DK2 showed up in the mail yesterday.

The Oculus Rift Developer Kit 2


The box came in at just under 6 pounds.

The main things I remember coming in the box (I'll upload a video of us unboxing it):

  • Small setup document – a couple of dozen pages
  • The Oculus itself – plastic wrapped with the A lenses installed
  • An extra set of Lenses – B lenses for nearsighted folks
  • The Positional Tracker – IR webcam
  • Positional Tracker cable – looks like an audio cable
  • Positional Tracker USB cable –
  • Power Adapter and multiple international adapters – I’m currently not using.
  • Lens cleaning cloth – The most useful item so far.

 

The physical installation was pretty straightforward.

  • Plug the HMD (Head Mounted Display) USB cable into a USB port on the back
  • Plug the HMD HDMI cable into the video card
  • Place the PT (Positional Tracker) on the monitor (It didn’t fit well on my monitor)
  • Plug the micro USB side into the PT, the other side into a USB port on the back of the computer
  • Plug the PT cable (looks like an audio cable) into the side of the PT and then into the box on the HMD cables.

The physical installation of the Oculus Rift DK2 is complete.

A few notes. We discovered that, by habit, we kept trying to put the Oculus HMD on like you would a baseball cap, i.e., hook it on the back of the head and pull it down onto your face. We would then need to take it off and clean the forehead oil off of the lenses. We're still working on the best way to put it on, but so far the best method is to hold it up against your eyes with one hand and pull the straps down the back of your head with the other. This way you don't press your forehead against the lenses.

Megan initially trying the Oculus Rift DK2 on.


 

Here is the initial unboxing of the Oculus Rift DK2.

 

 

So, most people know that I work on some pretty big computers for a living. What's really strange is how fragile those computers can sometimes be.

Here is the scenario. I'm called into work on a Sunday afternoon because our production cluster is having issues. Jobs will not run; everything is at a standstill. I come in and poke around for a while and finally find the culprit. A $1 fan on a 10 Gigabit Ethernet card has failed, causing the Ethernet card to shut itself off to prevent itself from melting. The loss of the Ethernet card means that one of my management DNS servers has gone down. Because that DNS server is down, the compute nodes hit a timeout before switching to the secondary, and that timeout period is enough to mean nothing runs. And I do mean nothing. 350+ compute nodes all making heat and keeping floor tiles down, all because of one little fan in just the right spot.

 

HCA_fan

Here you can see the card the fan was on.


Here are a couple of examples of what the charts look like. Each file system has similar charts for 12 hours, 24 hours, 48 hours, 1 week, and 1 month.

I’ve removed part of the LUN names for obfuscation purposes.

24 hour graph of utilization


24 hour chart showing average wait time


 

Here is the main script I use to parse the gpfs .tmp files to figure out which I/O nodes have which dm devices associated with which file system, and then use that data to create the multitude of graphs.

I make graphs for 12 hours (one Navy watch), 24 hours, 48 hours, a week, and a month. There are two main graphs I'm creating right now: the average wait graph and the % utilization graph. Also, if you delve into the code you will see I search for "data" in the LUN name so that I don't add the metadata LUNs into the charts. It just keeps things cleaner.

FYI, I've modified the scripts to remove any reference to the systems where I work. I don't think I've introduced any errors into the scripts, but it's definitely possible that I have.

#!/usr/bin/python
# written by Richard Hickey
# 20 March 2014
# This script will read the lun layout files /gpfs/scratch/*.tmp
# and then create the utilization and average wait graphs in /var/www/html/iostats

import re
import sys
import rrdtool

#-------------------------------------------------------------------------------------
# Set up an array with all of the file systems to parse through
# Set up a dictionary called filesystem for human readable names
#-------------------------------------------------------------------------------------
myGPFSArray = ["gpfs_alpha", "gpfs_beta", "gpfs_ops", "gpfs_scratch"]
filesystem = {'gpfs_alpha':'Alpha', 'gpfs_beta':'Beta', 'gpfs_ops':'Ops', 'gpfs_scratch':'Scratch'}

#-------------------------------------------------------------------------------------
# This function opens the gpfs lun mapping configuration file
# and fills in a data array with LUN, host, dm, and state (state isn't used)
#-------------------------------------------------------------------------------------
def getData(GPFSFileSystem):
    try:
        myFile = open('/gpfs/scratch/' + GPFSFileSystem + '.tmp', 'r') # open the config file
        myConfigArray = [] # initialize the array
        for line in myFile: # walk through the file line by line
            line = line.strip() # remove the newline character
            myline = line.split() # break the line into pieces using whitespace
            myConfigArray.append(myline)
        myFile.close() # close the config file
        return(myConfigArray) # return an array consisting of the data from the config file

    except IOError:
        print 'Could not open file /gpfs/scratch/' + GPFSFileSystem + '.tmp'
        return([]) # return an empty array so the main loop can continue

#-------------------------------------------------------------------------------------
# Create the Graph routine
#-------------------------------------------------------------------------------------
def GraphCreate(lunData, areaData, graphtype):
    title = ['12 Hours','One Day','2 Days','One Week','One Month']
    subpath = ['12','24','48','week','month']
    path = '/var/www/html/iostats/'
    start = ['-12h','-24h','-48h','-1w','-1m']
    horizontalRule = 'HRULE:90#000000:'

    #---------------------------------------------------------------------------------
    # set some parameters based on the graph type
    # (GPFSFileSystem comes from the main loop below)
    #---------------------------------------------------------------------------------
    if graphtype == 'await':
        verticalLabel = 'Milliseconds'
        subtitle = ' Average Wait '
        filename = GPFSFileSystem + '_await.png'
        upperLimit = '80'
        lowerLimit = '0'
    if graphtype == 'util':
        verticalLabel = '%'
        subtitle = ' % Utilization '
        filename = GPFSFileSystem + '_data.png'
        upperLimit = '100'
        lowerLimit = '0'

    #---------------------------------------------------------------------------------
    # Create the Graph
    #---------------------------------------------------------------------------------
    for count in range(5): # walk through the five chart types
        fullpath = path + subpath[count] + '/' + filename
        fulltitle = '/gpfs/' + filesystem[GPFSFileSystem] + subtitle + title[count]
        rrdtool.graph(fullpath,
            '--title', fulltitle,
            '--imgformat', 'PNG',
            '--width', '800',
            '--height', '400',
            '--vertical-label', verticalLabel,
            '--start', start[count],
            '--upper-limit', upperLimit,
            '--lower-limit', lowerLimit,
            horizontalRule,
            lunData,
            areaData)

#-------------------------------------------------------------------------------------
# Main routine
#-------------------------------------------------------------------------------------
for GPFSFileSystem in myGPFSArray:
    myConfigArray = getData(GPFSFileSystem)
    print 'Doing ' + GPFSFileSystem

    #---------------------------------------------------------------------------------
    # Pull the individual components out of each line of the config file
    #---------------------------------------------------------------------------------
    utilData = []
    awaitData = []
    areaData = []
    for line in myConfigArray: # the line is an array with LUN HOST DM STATE
        lunType = re.search(r'data', line[0]) # only chart the data luns, skip the metadata luns
        if lunType:
            tmplun = line[0]
            lun = tmplun.split('_')
            node = line[1]
            tmpdm = line[2]
            dm = tmpdm.split('/')
            x = 'DEF:' + lun[0] + '_' + lun[1] + '=/gpfs/scratch/' + node + '/' + dm[2] + '.rrd:util:AVERAGE'
            utilData.append(x) # this creates the utilData array with the DEF lines of the rrdgraph
            y = 'DEF:' + lun[0] + '_' + lun[1] + '=/gpfs/scratch/' + node + '/' + dm[2] + '.rrd:await:AVERAGE'
            awaitData.append(y) # this creates the awaitData array with the DEF lines of the rrdgraph

            # The following populates the AREA portion of the rrdgraph array named areaData
            # The primary reason to break these apart is just to set the colors differently
            if node == 'frodo-io3':
                z = 'AREA:' + lun[0] + '_' + lun[1] + '#421c52:' + lun[0] + '_' + lun[1]
                areaData.append(z)
            if node == 'frodo-io4':
                z = 'AREA:' + lun[0] + '_' + lun[1] + '#005500:' + lun[0] + '_' + lun[1]
                areaData.append(z)
            if node == 'frodo-io5':
                z = 'AREA:' + lun[0] + '_' + lun[1] + '#21b6a8:' + lun[0] + '_' + lun[1]
                areaData.append(z)
            if node == 'frodo-io6':
                z = 'AREA:' + lun[0] + '_' + lun[1] + '#3300ff:' + lun[0] + '_' + lun[1]
                areaData.append(z)

    #---------------------------------------------------------------------------------
    # Call the function that creates the graphs
    #---------------------------------------------------------------------------------
    GraphCreate(utilData, areaData, 'util') # call the graph creating function
    GraphCreate(awaitData, areaData, 'await') # call the graph creating function
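
To make the string building in that inner loop a little more concrete, here is roughly what gets generated for one LUN. I'm using the ddn7_data40_nsd line from the mmlsdisk output shown further down this page (frodo-io3, /dev/dm-70) as a hypothetical input line:

# tmp-file line (from mmlsdisk -M, after the grep for frodo):
#   ddn7_data40_nsd frodo-io3 /dev/dm-70 up
# the loop above would append strings like:
x = 'DEF:ddn7_data40=/gpfs/scratch/frodo-io3/dm-70.rrd:util:AVERAGE'
z = 'AREA:ddn7_data40#421c52:ddn7_data40'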

Now it's time to get to the meat of things. Here is a bash script that will create some .tmp files recording which dms on which I/O nodes go with which file systems.

#!/bin/bash

/usr/lpp/mmfs/bin/mmlsconfig | grep /dev/ | awk -F\/ '{print $3}' | while read fs
do
    echo "Creating the tmp file for ${fs}"
    /usr/lpp/mmfs/bin/mmlsdisk ${fs} -M | grep frodo > ${fs}.tmp
done

So a few notes to make this easier to understand. The first major line is

/usr/lpp/mmfs/bin/mmlsconfig | grep /dev/ | awk -F\/ '{print $3}' | while read fs

mmlsconfig gives way more data than just a list of file systems, and I only want the file system names to feed into a different command. I could make a static list, but then if something changed it would take manual intervention to get it correct again. Better to do a few extra steps now and automate it. Since mmlsconfig gives too much information, I grep for /dev/, which gives me just the file system lines (e.g. /dev/gpfs_scratch). I then use awk -F\/ to split the line on / (the \ is just so the / isn't treated as a special character) and grab the third field, which is just the file system name (gpfs_scratch).

Now that I have just the file system name, I push it into the mmlsdisk command. The -M option displays the underlying disk name on the I/O server node. I then output that information into a temp file, e.g. gpfs_scratch.tmp.

/usr/lpp/mmfs/bin/mmlsdisk ${fs} -M |grep frodo > ${fs}.tmp

Easy peasy. Now I have my configuration files containing which dm on which I/O node goes with which gpfs file system. It’s now time to write a script to pull all this information together and make a nice pretty graph out of it.
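
For what it's worth, the same discovery step could also be done directly in Python. This is just an untested sketch that assumes the same mmlsconfig and mmlsdisk output formats described above, and that the I/O nodes all have "frodo" in their hostnames:

#!/usr/bin/python
# Rough Python equivalent of the bash discovery script above (a sketch, not
# the script I actually run). Assumes mmlsconfig lists the file systems as
# "/dev/<name>" lines.
import subprocess

mmlsconfig = subprocess.check_output(['/usr/lpp/mmfs/bin/mmlsconfig'])
filesystems = [line.strip().split('/')[2]
               for line in mmlsconfig.splitlines()
               if line.strip().startswith('/dev/')]

for fs in filesystems:
    print 'Creating the tmp file for ' + fs
    disks = subprocess.check_output(['/usr/lpp/mmfs/bin/mmlsdisk', fs, '-M'])
    with open(fs + '.tmp', 'w') as tmp:
        for line in disks.splitlines():
            if 'frodo' in line: # keep only the per-disk lines for our I/O nodes
                tmp.write(line + '\n')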

Create a single RRD file for each LUN. This is ugly but works.

I forgot to mention. RRD is Round Robin Database. Information can be found at http://oss.oetiker.ch/rrdtool/

I chose to create a subdirectory for each I/O node. Into these directories I created the RRD files.

So.

  • mkdir /gpfs/scratch/frodo-io3
  • mkdir /gpfs/scratch/frodo-io4
  • mkdir /gpfs/scratch/frodo-io5
  • mkdir /gpfs/scratch/frodo-io6

I then created a short perl script to create the database files.

 

#!/usr/bin/perl
#-------------------------------------------------------------------------
# Author Richard Hickey
#-------------------------------------------------------------------------

use RRDs;
use strict;
use warnings;

print `clear` , "\n";

my $rrd_file;

for ($rrd_file=0; $rrd_file<=91; $rrd_file++) {
    RRDs::create("/gpfs/scratch/temp/dm-$rrd_file.rrd",
        "--start", 1393346138,
        "--step", 300,
        'DS:rrqms:GAUGE:1200:U:U',
        'DS:wrqms:GAUGE:1200:U:U',
        'DS:rps:GAUGE:1200:U:U',
        'DS:wps:GAUGE:1200:U:U',
        'DS:readMBs:GAUGE:1200:U:U',
        'DS:writeMBs:GAUGE:1200:U:U',
        'DS:avgrqsz:GAUGE:1200:U:U',
        'DS:avgqsz:GAUGE:1200:U:U',
        'DS:await:GAUGE:1200:U:U',
        'DS:svctm:GAUGE:1200:U:U',
        'DS:util:GAUGE:1200:U:U',
        'RRA:AVERAGE:0.5:1:288',
        'RRA:AVERAGE:0.5:3:672',
        'RRA:AVERAGE:0.5:24:730',
    );
    my $err = RRDs::error;
    if ($err) {print "problem creating dm-$rrd_file.rrd: $err\n";}
}
This created RRD files named dm-0.rrd through dm-91.rrd. I then copied these files into each of the four I/O node subdirectories, which gave me the Round Robin Databases I could then start populating.
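
The copy step itself isn't shown anywhere, so here's one way it could be done. This is just a hypothetical sketch assuming the frodo-io3 through frodo-io6 directories created above; a simple cp loop works just as well:

#!/usr/bin/python
# Hypothetical copy step: push the freshly created dm-*.rrd files out to
# each I/O node subdirectory so the collectors have something to update.
import glob
import os
import shutil

for node in ['frodo-io3', 'frodo-io4', 'frodo-io5', 'frodo-io6']:
    for rrd in glob.glob('/gpfs/scratch/temp/dm-*.rrd'):
        shutil.copy(rrd, os.path.join('/gpfs/scratch', node, os.path.basename(rrd)))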

To populate the databases and start collecting the information, I used the following Perl script and put it in /etc/cron.d so that it runs once a day; it gathers statistics every 5 minutes and does this 288 times. 288 * 5 minutes = 24 hours.

#!/usr/bin/perl

#-------------------------------------------------------------------------
# Author Richard Hickey
# Date 25 February 2014
#-------------------------------------------------------------------------

use RRDs;
use strict;
use warnings;
use POSIX qw(strftime);

print `clear` , "\n";

#-------------------------------------------------------------------------
# layout of iostat data
# lun rrqms wrqms rps wps readMBs writeMBs avgrqsz avgqsz await svctm util
#-------------------------------------------------------------------------

#-------------------------------------------------------------------------
# set up some variables to use
#-------------------------------------------------------------------------
my @get_data;
my $hostname = `/bin/hostname -s`; chomp($hostname);
my $err;

#-------------------------------------------------------------------------
# run iostat against dm-1 through dm-91, sampling every 300 seconds,
# 288 times (24 hours), and pipe the output into IOSTAT
#-------------------------------------------------------------------------
my $devices = join(' ', map { "dm-$_" } (1..91));
open(IOSTAT, "/usr/bin/iostat -dmtx $devices 300 288 |") || die "Can't open iostat- $!";

#-------------------------------------------------------------------------
# walk through the output and parse the data
#-------------------------------------------------------------------------
while (<IOSTAT>) {
    chomp;
    if (/^dm-/) {
        my $now_string = strftime("%s", localtime(time));
        s/\s+/,/g;
        @get_data = split(/,/);

        #-----------------------------------------------------------------
        # update the rrd databases (this path must match where the
        # graphing script expects to find the .rrd files)
        #-----------------------------------------------------------------
        RRDs::update("/gpfs/scratch/$hostname/$get_data[0].rrd",
            "$now_string:$get_data[1]:$get_data[2]:$get_data[3]:$get_data[4]:$get_data[5]:$get_data[6]:$get_data[7]:$get_data[8]:$get_data[9]:$get_data[10]:$get_data[11]");
        $err = RRDs::error;
        if ($err) {print "problem updating $get_data[0].rrd: $err\n";}
    }
    next;
}
close IOSTAT;

Great. Now I am gathering the I/O statistics for each LUN on each I/O node at 5-minute intervals. The nice thing about RRD files is that they never grow in size, which is one of the main reasons to use them.
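
As a quick sanity check while the collectors are running, you can pull data straight back out of one of the RRD files with the same Python bindings the graphing script uses. A minimal sketch; the data-source names come from the create script above, and the exact file path is just an example:

#!/usr/bin/python
# Sanity-check sketch: fetch the last hour of samples from one RRD file
# and print the util and await values, to confirm the collector is updating it.
import rrdtool

(start, end, step), names, rows = rrdtool.fetch(
    '/gpfs/scratch/frodo-io3/dm-70.rrd', 'AVERAGE', '--start', '-1h')

utilIndex = names.index('util')   # data sources defined when the RRDs were created
awaitIndex = names.index('await')

timestamp = start
for row in rows:
    if row[utilIndex] is not None and row[awaitIndex] is not None:
        print '%d  util=%.1f  await=%.2f' % (timestamp, row[utilIndex], row[awaitIndex])
    timestamp += step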

Next we’ll go over how to pull all this data together in a nice graphical form.

 

So, I'm going to put this up on my site just so that I have a record of it and so that others can use these scripts as an example. Understand, these scripts are crude to say the least, but they work. The final goal is to automatically create graphs showing the current and historical performance of our GPFS file systems on a disk-by-disk basis. I've decided to use some Perl, Python, Bash, and RRDtool to do this. Ya, go figure.

This is going to end up being several posts long. There is a lot of data. First the background on what and why.

Here is the scenario. I have a large Linux cluster running IBM GPFS. Picture 300+ nodes connecting across QDR InfiniBand to 4 I/O nodes, each connected to the storage subsystems with two 8Gb fibre links. Each storage subsystem also has 2 heads for redundancy, so there are 4 possible routes to each storage LUN from each I/O node. Each GPFS file system has between 4 and 16 LUNs, and there are 4-8 file systems per cluster. So 4 routes times 4 I/O nodes times 16 LUNs times 8 file systems = big mess.

Now Red Hat does try to make it a bit easier with something called dynamic multipathing. Basically, it assigns a "dm" name to each LUN and hides all the different pathing options. Here's an example of what one looks like:

mpathbd (360001ff08020b000000002e469560164a) dm-53 DDN,SFA 10000
size=2.1T features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=100 status=active
| `- 4:0:12:120 sdsj 135:368 active ready  running
|-+- policy='round-robin 0' prio=90 status=enabled
| `- 3:0:14:120 sdjl 8:496   active ready  running
|-+- policy='round-robin 0' prio=20 status=enabled
| `- 3:0:5:120  sdmm 69:480  active ready  running
`-+- policy='round-robin 0' prio=10 status=enabled
  `- 4:0:4:120  sdwz 70:752  active ready  running
What this is showing is that there are 4 paths to the 2.1TB LUN. The system (without multipathing) can access it as /dev/sdsj, /dev/sdjl, /dev/sdmm, or /dev/sdwz; with multipathing, as /dev/dm-53. You might be wondering why bother with multipathing at all? Well, what happens if a fibre link goes down? I lose 2 of the 4 /dev/sdxx devices. If I pointed to them directly, I'd have a disk failure. Multipathing, however, automagically load balances and fails over to a working path in case of a failure.

Okay, enough about multipathing and why we have it. Suffice it to say that we do. So, easy peasy, right? Ya, not so much. Since dynamic multipathing is "dynamic", the /dev/dm-xx name can change on reboot or whenever we make major changes to the system. This means that the dm for a LUN in the beta file system today may end up being a dm in the scratch file system after a reboot, or not. Really? Really? Why?

However, all is not lost. GPFS has a nice little command you can run (it's slow, so beware) that will give you a mapping of all the dm numbers, by I/O server, per file system.

/usr/lpp/mmfs/bin/mmlsdisk gpfs_scratch -M

Disk name     IO performed on node     Device             Availability
------------    -----------------------  -----------------  ------------
ddn7_data40_nsd frodo-io3               /dev/dm-70         up
ddn7_data41_nsd frodo-io4               /dev/dm-34         up
ddn7_data42_nsd frodo-io5               /dev/dm-66         up
ddn7_data43_nsd frodo-io6               /dev/dm-63         up
ddn7_data91_nsd frodo-io5               /dev/dm-35         up
ddn7_meta11_nsd frodo-io6               /dev/dm-46         up
ddn7_meta12_nsd frodo-io3               /dev/dm-12         up

This shows the LUN name (ddn7_data40_nsd), the I/O node it's talking to (frodo-io3), the dm name on that node (/dev/dm-70), and the status of up.

Now, we understand how dynamic multipathing works, and we now know a way to get GPFS to show us which dm goes to which lun on which I/O node. We’re making progress here.

So, at this point we have the ability to figure out which LUN on which I/O node goes to which GPFS file system. Let's start gathering data. I found it easiest to just gather statistics on every dm on each I/O node and then separate them out into individual file systems later. The next post covers how I did that.

 
