Dealing with Full Filesystems

Submitted on June 25, 2013 – 11:03 am

Full filesystems are a recurring condition that eats up sysadmin time on a regular basis. Some studies show that filesystems running out of space are responsible for most day-to-day issues handled by IT departments. Disk space monitoring is an important first step in dealing with this problem.

However, in my experience, most filesystem monitoring tools are primitive and based on a percentage threshold. They do not take into account the filesystem's actual size or keep track of growth trends. Thus, a 2GB filesystem at 95% utilization is likely to be a concern, while a 2TB filesystem at the same level of utilization still has more than 100GB of free space and probably does not justify waking up a sysadmin in the middle of the night. Similarly, the data growth rate is an important factor. A particular challenge is filesystems that hold transient application data, where usage can spike and recede quickly.
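
As an illustration of the absolute-free-space idea, here is a minimal sketch, separate from the scripts below, that flags filesystems by the amount of free space left rather than by percentage (the 5GB threshold is an arbitrary example and would need tuning per host):

#!/bin/ksh
# Warn when a local filesystem drops below an absolute amount of free space.
# MIN_FREE_MB is an example threshold -- adjust it for your environment.
MIN_FREE_MB=5120

# df -klP: local filesystems, POSIX format, sizes in KB; field 4 is available space.
df -klP | tail -n +2 | while read fs size used avail pct mount
do
    (( avail_mb = avail / 1024 ))
    if [ ${avail_mb} -lt ${MIN_FREE_MB} ]
    then
        echo "WARNING: ${mount} has only ${avail_mb} MB free (${pct} in use)"
    fi
done

The same loop could be extended to log the numbers over time and alert on growth rate rather than on a static threshold.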

But once a filesystem is full, the important thing is to clean it up quickly, ideally before there is fallout on the application end. Below are a couple of scripts that may help you with the cleanup.
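
Before running them, it helps to confirm which filesystem is actually out of space. A quick check along these lines (assuming a Linux-style df, with 90% as an arbitrary cutoff) lists anything above the threshold:

# Local filesystems at or above 90% utilization: prints Use%, mount point, device.
df -hlP | awk 'NR > 1 && $5+0 >= 90 {print $5, $6, $1}'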

Treex

This is a simple script that will produce a listing of first-level subfolders in the directory of your choice and their sizes. For example, if /home is out of space, this script will show you how much space is consumed by data in each subfolder. The script is clever enough to stay on local filesystems and not to cross mountpoints. Here’s sample output for “/”:

[root@vermin01 quarantine]# treex /
Depending on the size of /, this script
may take some time to complete, so be patient.

Current time: 06/25/13 10:49:59

If the script is still running after ten minutes,
you will need to kill it and re-run from a lower
filesystem level.
3087.68 MB       /usr
323.05 MB        /lib
105.32 MB        /etc
35.21 MB         /sbin
24.13 MB         /lib64

And here’s the treex script: 
#!/bin/ksh
# igor<at>krazyworks.com
# ----------------------------------------------------------------------------
# This script can help you analyze filesystem space utilization and identify
# files that can be removed or compressed to save disk space.
# ----------------------------------------------------------------------------

ARGC=$#

usage() {
cat << EOF
Usage: treex [-v] </pathname>

Examples:

    treex
    treex -v
    treex -v /tmp

Pre-requisites:

        Korn shell
        "mountpoint" command
        GNU "du" and "find"

EOF
exit 0
}

if [ ${ARGC} -eq 0 ]
then
    dir=`pwd`
    verbose=0
elif [ ${ARGC} -eq 1 ]
then
    case "$1" in
        -v|--v|-verbose|--verbose ) verbose=1
            dir=`pwd`
            ;;
        -h|--h|-help|--help ) usage
            ;;
        * ) verbose=0
            dir=$1
            ;;
    esac
elif [ ${ARGC} -eq 2 ]
then
    case "$1" in
        -v|--v|-verbose|--verbose ) verbose=1
            dir=$2
            ;;
        * ) echo "Exiting..."
            usage
            ;;
    esac
else
    dir=`pwd`
    verbose=0
fi

if [ ! -d "${dir}" ]
then
    echo "Directory ${dir} not found. Exiting..."
    exit 1
else
cat << EOF
Depending on the size of ${dir}, this script
may take some time to complete, so be patient.

Current time: `date +'%D %T'`

If the script is still running after ten minutes,
you will need to kill it and re-run from a lower
filesystem level.

EOF
fi

diskhog() {
        echo " "
        echo -e "t Owner t Name t Size (MB) tt Date t File Name"
        echo -e "t -----------------------------------------------------------------------------------"
        echo " "

        du -kax "${pathname}" 2> /dev/null | sort -rn | head -10 | awk '{print $2}' | while read file
        do
                if [ -f "$file" ]
                then
                        owner=$(ls -als "$file" | awk '{print $4}')
                        name=$(getent passwd ${owner} | awk -F':' '{print $5}' | awk -F',' '{print $1}' | tail -1)
                        bsize=$(ls -als "$file" | awk '{print $6}')
                        fsize=$(echo "scale=2;${bsize}/1024/1024" | bc -l)
                        mdate=$(ls -als "$file" | awk '{print $7" "$8" "$9}')
                        fpath=$(ls -als "$file" | awk '{print $10" "$11" "$12}')

                        echo -e "\t ${owner} \t ${name} \t ${fsize} \t ${mdate} \t ${fpath}"
                fi
        done
        echo ""
}

clear

find "${dir}" -mount -maxdepth 1 -mindepth 1 `eval ${exclude}` -type d | while read line2
do
        if ! mountpoint -q "${line2}"
        then
                du -skx "${line2}" 2> /dev/null
        fi
done | sort -rn | while read line
do
        pathname=$(echo "${line}" | awk '{$1=""; print $0}' | sed 's/^ //g')
        size_k=$(echo ${line} | awk '{print $1}')
        size_m=$(echo "scale=2;${size_k}/1024" | bc -l)
        if [ ${size_k} -gt 10240 ]
        then
                echo -e "${size_m} MB \t ${pathname}"
                if [ ${verbose} -eq 1 ]
                then
                        diskhog
                        echo "==================================================================================="
                fi
        fi
done
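
If you only need a quick look and GNU du is available, a rough one-liner along the same lines (the -x flag keeps it on the local filesystem, though it skips treex's per-mountpoint checks and verbose mode; /home is just an example path) would be:

# Per-subdirectory usage under /home, largest first, without crossing filesystems.
du -xk --max-depth=1 /home 2>/dev/null | sort -rn | head -20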

 

Diskhog

Once you have narrowed down the offending directory with the treex script, you can use the diskhog script to identify the largest files, which may be good candidates for compression or archival. Like treex, diskhog will stay on local filesystems and will not cross mountpoints. Here's an example of the script in action:

[root@vermin01 quarantine]# diskhog /usr

File Owner      Name            File Size       File Date               File Name
___________________________________________________________________________________

root            root    150 Mb          May 1 00:01             /usr/local/sendmailanalyzer/data/vermin01/2013/04/history.tar.gz
root            root    129 Mb          Jun 1 00:01             /usr/local/sendmailanalyzer/data/vermin01/2013/05/history.tar.gz
root            root    113 Mb          Apr 1 00:00             /usr/local/sendmailanalyzer/data/vermin01/2013/03/history.tar.gz
root            root    53 Mb           May 9 2012              /usr/lib/locale/locale-archive
root            root    44 Mb           Dec 11 2009             /usr/openv/java/jre/lib/rt.jar
root            root    41 Mb           Oct 26 2011             /usr/lib64/libgcj.so.7rh.0.0

Here’s the diskhog script: 
#! /bin/ksh
# igor<at>krazyworks.com
# ----------------------------------------------------------------------------
# Use this script to find the largest files in a particular directory.
# ----------------------------------------------------------------------------

clear

if [ -z "$1" ]
then

printf "Enter the path to search: "

read pathname

if [ -z "$pathname" ]
then
        echo "Error: pathname cannot be null! Exiting..."
        exit 1
fi

else

pathname="$1"
fi

if [ -d "$pathname" ]
then

clear

echo " "
echo "File Owner        Name            File Size       File Date               File Name"
echo "___________________________________________________________________________________"
echo " "

du -kax "$pathname" | sort -rn | head -50 | awk '{print $2}' | while read file
do
        if [ -f "$file" ]
        then
                owner=$(ls -als "$file" | awk '{print $4}')
                name=$(getent passwd ${owner} | awk -F':' '{print $5}' | awk -F',' '{print $1}' | tail -1)
                bsize=$(ls -als "$file" | awk '{print $6}')
                (( fsize = bsize / 1024 / 1024 ))
                mdate=$(ls -als "$file" | awk '{print $7" "$8" "$9}')
                fpath=$(ls -als "$file" | awk '{print $10" "$11" "$12}')

                echo "${owner}            ${name}    ${fsize} Mb             ${mdate}                ${fpath}"
        fi
done

else

        echo " "
        echo "Error: $pathname - no such directory! Exiting..."
        echo " "
        exit 1
fi
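
A related shortcut, if GNU find is available, is to look for individual large files directly; like diskhog, the -xdev option keeps the search from crossing mountpoints (the 100MB cutoff and /usr are just examples):

# Files larger than 100MB under /usr, long listing, local filesystem only.
find /usr -xdev -type f -size +100M -exec ls -lh {} + 2>/dev/null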

 
