
Generate disk usage graphs/charts with CLI-only tools in Linux


If some ASCII chars are "graphical" enough for you, I can recommend ncdu. It is a very nice interactive CLI tool, which helps me a lot when stepping down into large directories without doing cd bigdir ; du -hs over and over again.
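A typical invocation (assuming ncdu is installed from your distribution's package manager) is just to point it at the directory you want to inspect:

    # scan from the root and browse the results interactively
    ncdu /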


I would recommend munin. It is designed for exactly this sort of thing - graphing CPU usage, memory usage, disc usage and such. It's sort of like MRTG (but MRTG is primarily aimed at graphing routers' traffic; graphing anything but bandwidth with it is very hackish).

Writing Munin plugins is very easy (it was one of the project's goals). They can be written in almost anything (shell script, perl/python/ruby/etc, C, anything that can be executed and produce output). The plugin output format is basically disc1usage.value 1234. And debugging the plugins is very easy (compared to MRTG).
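As a minimal sketch of what such a plugin might look like (the disc1usage field name is just the one from the output format above; the df/awk pipeline is my assumption about how you'd collect the number):

    #!/bin/sh
    # Munin runs the plugin with "config" to ask how to draw the graph,
    # and with no argument to collect the current value.
    if [ "$1" = "config" ]; then
        echo "graph_title Disc usage on /"
        echo "graph_vlabel 1K blocks"
        echo "disc1usage.label /"
        exit 0
    fi
    # df without -h prints a plain number; NR==2 skips the header line
    echo "disc1usage.value $(df / | awk 'NR==2 {print $3}')"

Drop an executable like that into Munin's plugin directory and it gets polled every five minutes.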

I've set it up on my laptop to monitor disc usage, bandwidth usage (by pulling data from my ISP's control panel; it graphs my two download "bins", uploads and newsgroup usage), load average and number of processes. Once I got it installed (currently slightly difficult on OS X, but it's trivial on Linux/FreeBSD), I had written a plugin in a few minutes, and it worked first time!

I would describe how it's set up, but the munin site will do that far better than I could!

There's an example installation here.

Some alternatives are Nagios and Cacti. You could also write something similar using rrdtool. Munin, MRTG and Cacti are basically all far-nicer-to-use systems built around this graphing tool.

If you want something really, really simple, you could do...

    import os
    import time

    while True:
        # os.system only returns the exit code; os.popen captures the output.
        # df without -h prints a plain number (1K blocks); NR==2 skips the header.
        disc_usage = os.popen("df / | awk 'NR==2 {print $3}'").read().strip()
        log = open("mylog.txt", "a")  # append rather than truncate
        log.write(disc_usage + "\n")
        log.close()
        time.sleep(60 * 5)  # sample every five minutes

Then..

    f = open("mylog.txt")
    lines = f.readlines()
    f.close()
    # Convert each line to a float number
    lines = [float(cur_line) for cur_line in lines]
    # Get the biggest and smallest values
    biggest = max(lines)
    smallest = min(lines)
    value_range = (biggest - smallest) or 1  # avoid dividing by zero if all samples are equal
    for cur_line in lines:
        normalised = (cur_line - smallest) / value_range  # normalise between 0 and 1
        line_length = int(round(normalised * 28))  # graph between 0 and 28 characters wide
        print "#" * line_length

That'll make a simple ASCII graph of the disc usage. I really, really don't recommend you use something like this. Why? The log file will get bigger, and bigger, and bigger. The graph will get progressively slower to draw. RRDTool uses a rolling-database system to store its data, so the file will never get bigger than about 50-100KB, and it's consistently quick to graph as the file is a fixed length.
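To make the fixed-size point concrete, here's a sketch of creating such a rolling database (the file and field names are made up; the archive sizes are just one example layout):

    # One gauge sampled every 300s; the RRAs fix the file size up front:
    # 288 five-minute averages = one day, 168 hourly averages = one week.
    rrdtool create discusage.rrd --step 300 \
        DS:used:GAUGE:600:0:U \
        RRA:AVERAGE:0.5:1:288 \
        RRA:AVERAGE:0.5:12:168

New samples overwrite the oldest ones in each archive, which is why the file never grows.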

In short: if you want something to easily graph almost anything, use munin. If you want something smaller and self-contained, write something with RRDTool.


We rolled our own at work using RRDtool (the data-storage back end to tools like MRTG). We run a Perl script every 5 minutes that takes a du per partition and stuffs it into an RRD database, and then uses RRD's graph function to build graphs. It takes a while to figure out how to set up the .rrd files (for instance, I had to re-learn RPN to do some of the calculations I wanted to do), but if you have some data you want to graph over time, RRDtool's a good bet.
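The update-and-graph cycle boils down to two commands. A sketch, run from cron every five minutes (the partition, file and field names are examples, reusing the discusage.rrd layout from the previous answer):

    # feed the current du reading into the database ("N" means "now")
    USED=$(du -sk /home | awk '{print $1}')
    rrdtool update discusage.rrd "N:$USED"

    # render the last day of samples as a PNG
    rrdtool graph discusage.png --start -1d \
        DEF:used=discusage.rrd:used:AVERAGE \
        LINE2:used#0000FF:"KB used on /home"

The RPN mentioned above comes in via CDEF expressions (e.g. CDEF:gb=used,1048576,/ to rescale KB to GB before plotting).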