Laziness is a virtue for a sysadmin. If something needs to be done twice, automate it. Usually, the scripts and hacks are very much tailored to the task, and useless outside the site where written. But here's some software I feel can be useful for others.
Report directories with many files (inodes) - Bacula du - Query Bacula logs - Remove unneeded Bacula volumes - Monitor Bacula (or Bareos) backups - Send SMS for free via GuleSider.no - Convert vCalendar 1.0 to iCalendar 2.0 - Report TITLE from URLs posted in Irssi - Fake keypresses - Keep running - Timestamp - Netgask - cknfs - Managesieve sync - Listadmin - Stripnuls - Compare Iozone - Sjekkpart - Genunames - FH find
So you have run out of inodes. How do you find out where they are hiding so you can clean them up? Introducing: inode-usage(1)!
Usage: inode-usage [OPTION] [DIRECTORY...]

Like du(1), but report inode counts instead of disk space usage.
When no directory is specified, report on the current directory and its
sub-directories.

Options can be:
  -t, --threshold=N      exclude directories with fewer than N inodes
  -x, --one-file-system  skip directories on different file systems

The four columns in the output are as follows:
  1: Count of inodes including sub-directories
  2: Count of inodes in this directory
  3: Marked with "*" if this directory contains hardlinked files
  4: Directory name

The inode count is an approximation. If the tree contains files hardlinked from the outside, the reported count can be a little too small.
inode-usage (version 1.0)
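The core of the counting is simple enough to sketch. Here is a minimal illustration in Python (the actual tool may differ in details such as option handling and output format); the hardlink bookkeeping mirrors the caveat above, since links from outside the tree are under-counted:

```python
import os

def inode_usage(root):
    """Count inodes per directory, bottom-up, in the spirit of
    inode-usage(1).  Returns {path: (cumulative, local, has_hardlinks)}.
    Hardlinked files are counted only once; links from outside the
    tree make the totals slightly too small."""
    seen = set()      # (st_dev, st_ino) of hardlinked files already counted
    result = {}
    # walk bottom-up so subdirectory totals exist before their parent
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        local = 1     # the directory itself is an inode
        hardlinks = False
        for name in filenames:
            st = os.lstat(os.path.join(dirpath, name))
            if st.st_nlink > 1:
                hardlinks = True
                key = (st.st_dev, st.st_ino)
                if key in seen:
                    continue      # already counted elsewhere in the tree
                seen.add(key)
            local += 1
        # unwalked entries (e.g. symlinks to directories) count as 1 inode
        total = local + sum(result.get(os.path.join(dirpath, d), (1,))[0]
                            for d in dirnames)
        result[dirpath] = (total, local, hardlinks)
    return result
```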
Bacula (and Bareos) stores detailed information in the database about the contents of a job, but it can be awkward to inspect. bacula-du works mostly like du(1):
Usage: bacula-du [OPTIONS] -j JOBID

Summarize disk usage of directories included in the backup JOBID

Main options are:
  -a, --all             write counts for all files, not just directories
  -S, --separate-dirs   do not include size of subdirectories
  -t, --threshold=SIZE  skip output for files or directories with usage
                        below SIZE.  default is 1 octet.
  -L, --largest=NUM     only print NUM largest directories/files
There is also an alternate mode which can be useful as a faster alternative to a Verify job:

Usage: bacula-du --md5sum -j JOBID

  --md5sum  output list of all files in job in md5sum format
bacula-du (version 1.4)
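The per-directory aggregation that a du-style report needs can be sketched like this (Python for illustration only; bacula-du itself works against the catalog database):

```python
import os
from collections import defaultdict

def du_by_directory(files, separate_dirs=False):
    """Summarize disk usage per directory from a flat (path, size)
    list, roughly the shape of a job's file table.  With
    separate_dirs=True each directory counts only its own files
    (like -S); otherwise sizes propagate to every parent directory."""
    usage = defaultdict(int)
    for path, size in files:
        d = os.path.dirname(path)
        if separate_dirs:
            usage[d] += size
        else:
            while True:              # charge the file to every ancestor
                usage[d] += size
                if d in ("/", ""):
                    break
                d = os.path.dirname(d)
    return dict(usage)
```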
Bacula (and Bareos) logs can be unwieldy when you have hundreds or thousands of jobs. I wrote this tool so I can filter out the cruft easily.
Usage: bacula-logs [OPTIONS] [--days N] [JOB ...]

Print logs for JOBS, or print logs for all jobs the last N days (default 1)

JOB can be a job id or a job name (optionally with % wildcard).

Main options are:
  --errors-only, -e      only print jobs which failed
  --last-run-only, -l    only print the last instance for each job
  --pattern, -p PATTERN  only print jobs with a log matching (Perl) PATTERN
  --ignore-case, -i      case-insensitive pattern search
  --match-only, -m       only print matching log entries
  --client-log, -c       only match against log entries from client
                         (without --pattern: only print log entries
                         from client)
  --status-only, -s      do not print logs
  --extended-info, -X    add fields (files, bytes, duration) to status line
  --simple-status, -S    as --status-only, but without decoration of output
This can be used in many ways. E.g., combining --errors-only with --status-only gives a summary of the jobs which failed during the last day. bacula-logs web02 will output the logs of all jobs with names starting with "web02".
My favourite command is

bacula-logs -eXl | less -p==

The -p== option specifies a search pattern which matches the header for each job, so pressing "n" will skip to the next job in the pager.
bacula-logs (version 1.7)
The backup software Bacula (and Bareos) is very reluctant to delete files which contain backup data, even if the data comes from jobs which failed or if the data is older than your retention policy. This script will remove the files and clean up the database.
bacula-purge-unused (version 1.2)
The backup software Bacula (and Bareos) will only report success or failure -- but what if your backup is missing important files? This script allows you to monitor if those important files are included every night. You can also use it interactively to check what jobs contain a specific file.
bacula-check (version 1.3)
I found a script by Richard Tangstad which uses wget to send SMS via the free service (5 SMS per day) at Gule Sider. I did some rudimentary usability tweaks, so here you go:
gulesider-sms (version 0.4)
After using syncml-ds-tool from the libsyncml project to dump the calendar from my Nokia E71, I was very disappointed to find that neither Google Calendar nor Evolution was able to parse the file. It turns out the dump contains extra comment lines and uses quoted-printable encoding, neither of which is handled very well.
This quick Perl hack may massage your file into something more usable. I still had to remove a few calendar events manually which Google didn't grok, but it worked for me. Oh, and Evolution didn't handle base64 either, so the long descriptions may come out quite unreadable; please let me know if you understand what I (or Evolution) did wrong.
Here it is: convert-vcalendar1.0-to-icalendar2.0 (version 0.2)
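The quoted-printable part of the cleanup can be sketched in a few lines (Python shown here purely for illustration; the script itself is Perl, and real vCalendar parsing needs more care with soft line breaks and charsets):

```python
import quopri

def decode_qp_lines(vcal_lines):
    """Decode QUOTED-PRINTABLE property values in a list of vCalendar
    lines into plain text, the kind of encoding that tripped up both
    Google Calendar and Evolution."""
    out = []
    for line in vcal_lines:
        if ";ENCODING=QUOTED-PRINTABLE" in line:
            prop, _, value = line.partition(":")
            prop = prop.replace(";ENCODING=QUOTED-PRINTABLE", "")
            decoded = quopri.decodestring(value.encode()).decode("utf-8",
                                                                 "replace")
            out.append(prop + ":" + decoded)
        else:
            out.append(line)
    return out
```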
This plugin will attempt to download the head of the URL and report the title. It looks like this:
10:02 [title]>> What Happened When NYU Students Discovered They Could Email 40,000 People At Once
Here it is: linktitle.pl (version 2016022301)
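The title extraction the plugin performs can be sketched like this (Python's html.parser used for illustration; the plugin itself is Perl running inside Irssi and also has to fetch the page and handle character sets):

```python
from html.parser import HTMLParser

class TitleParser(HTMLParser):
    """Pull the contents of the first <title> element out of an HTML
    document, which is what gets reported back to the channel."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = None

    def handle_starttag(self, tag, attrs):
        if tag == "title" and self.title is None:
            self.in_title = True
            self.title = ""

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

def extract_title(html):
    p = TitleParser()
    p.feed(html)
    return p.title.strip() if p.title else None
```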
Looking around on the Internet, I couldn't find any program which makes it easy to script entering text into X applications via faked key presses. I needed this to replicate how XTerm's translation table allows you to bind the insertion of chunks of text to function keys or whatever.
The code is pretty rudimentary, but works for me. It assumes all characters in your text are available by pressing a combination of Shift and AltGr, so I suspect CJK users will be out of luck. Suggestions for improvements are welcome. Usage is simple: anything given as the first argument is copied to the window holding focus. If you want to include newlines or tabs, consider using printf(1) to generate the input string.
The latest version is fakekeypresses 1.0.
There seem to be two wrappers I continually write: one to set environment variables before invoking the real binary, the other to make a slightly buggy daemon stay available by running it inside a loop.
Usually it's too much work to set up exponential backoff and alerts via e-mail and/or syslog, so the loop is just left there, with a custom log file no one remembers to check. I decided to write a script which is kept simple (so it doesn't introduce new bugs into the system) but is still general enough to be used on a multitude of services.
Usage: keep-running [options] command [args]

Valid options:
  -m <mail-address>   send failure report to <mail-address>
                      default: don't send e-mail
  -r <number>         how many times to restart the service
                      default: infinite
  -d <maxdelay>       a limit on how long to wait before trying to restart,
                      in seconds, or with a suffix.  default: 1d
  -S <command>        how to shut down the child cleanly.
                      default: send SIGINT or SIGTERM
  -t <template-file>  a file containing the e-mail message to send
                      default: built-in template
  -u <username>       switch to this user on startup
  -c <watch-file>     the name of a file to poll for changes
                      (can be given multiple times)
  -l <log-target>     which syslog facility to use for diagnostics, or, if
                      the argument contains a '/', which file to log
                      diagnostics to.  default: print on standard error
  -D                  run as daemon in the background (requires -l)
  -C                  "launch from cron" mode: exit silently if already
                      running (requires -L)
  -L <lock-file>      only run if lock can be held exclusively
                      (also contains PID of keep-running)
For more details, read the manual page. The script is written in Perl. No extra modules are needed unless you want e-mail notification (Net::SMTP, Net::DNS) or logging via syslog (Unix::Syslog).
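The heart of it is a restart loop with capped exponential backoff, which can be sketched like this (Python used for illustration; the real script also resets the delay after a long successful run, drops privileges, watches files and sends notifications, none of which is shown):

```python
import subprocess
import time

def backoff_delays(max_delay):
    """Delays between restart attempts: double each time, capped at
    max_delay (corresponding to the -d limit)."""
    delay = 1
    while True:
        yield delay
        delay = min(delay * 2, max_delay)

def keep_running(argv, max_delay=86400, max_restarts=None):
    """Run argv, restarting it whenever it exits, sleeping an
    exponentially growing interval between attempts.  max_restarts
    corresponds to -r; None means restart forever."""
    restarts = 0
    delays = backoff_delays(max_delay)
    while True:
        status = subprocess.call(argv)
        if max_restarts is not None and restarts >= max_restarts:
            return status
        restarts += 1
        time.sleep(next(delays))
```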
The latest version is keep-running 1.07.
Sometimes programs emit data on stdout or stderr, but they are long-running, and you would like timestamps on each line to make the output more like a log file. You can do this by piping through a Perl one-liner, but that collapses stdout and stderr into one stream. This wrapper script spawns the command itself to keep total control.
Usage: timestamp [options] COMMAND ...

Prepend timestamp to each line of output from COMMAND.

Valid options:
  -u, --utc                   Use UTC rather than local timezone
  -f, --format FORMAT         Format timestamp according to strftime
                              (default: "%Y-%m-%d %H:%M:%S ")
  -o, --format-stdout FORMAT  Use this FORMAT for stdout
  -e, --format-stderr FORMAT  Use this FORMAT for stderr
  --stdout-prefix STRING      Add STRING in front of date on stdout
  --stderr-prefix STRING      Add STRING in front of date on stderr

$ timestamp echo This is a trivial example
2009-09-16 16:31:36 This is a trivial example
The latest version is timestamp 1.0.
This script is mostly obsolete with the more commonly distributed ts(1) from moreutils.
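The core trick, keeping the two streams separate while timestamping both, can be sketched like this (a rough Python equivalent for illustration; the script itself is written differently and handles per-stream formats and prefixes):

```python
import selectors
import subprocess
import sys
import time

def timestamp_run(argv, fmt="%Y-%m-%d %H:%M:%S "):
    """Spawn argv with separate pipes for stdout and stderr, so the
    two streams are never collapsed, and prefix each line with a
    strftime-formatted timestamp.  Returns the child's exit status."""
    proc = subprocess.Popen(argv, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    sel = selectors.DefaultSelector()
    sel.register(proc.stdout, selectors.EVENT_READ, sys.stdout)
    sel.register(proc.stderr, selectors.EVENT_READ, sys.stderr)
    open_streams = 2
    while open_streams:
        for key, _ in sel.select():
            line = key.fileobj.readline()
            if not line:                     # stream closed
                sel.unregister(key.fileobj)
                open_streams -= 1
                continue
            key.data.write(time.strftime(fmt) + line.decode())
    return proc.wait()
```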
We use netgroups extensively at our site for giving users access to computers, printers, files, etc.; or for giving computers access to other computers; or for keeping track of what each computer is supposed to be doing. Most of these netgroups are automatically maintained based on our student database and the host database.
The key is to make this information easily accessible to sundry scripts, and Unix has for some reason always lacked the tools needed; the closest you get is ypmatch(1M), which is quite awkward for this task.
This package, which was originally written in 1988, contains four small utilities written in C:
The latest version is netgask 1.07. We use it on AIX, HP-UX, IRIX, Linux, MacOS X, Solaris and Tru64 Unix.
Don't you hate it when your scripts hang due to a dead NFS server? With cknfs, you can check if a path (or $PATH) is available before you access it. Each path is examined for an NFS mount point. If found, the corresponding NFS server is checked. Paths that lead to dead NFS servers are ignored, the remaining paths are printed to stdout.
Usage: cknfs -e -f -s -t# -u -v -D -L paths

Check paths for dead NFS servers.  Good paths are printed to stdout.

  -e    silent, do not print paths
  -f    accept ordinary files
  -s    print paths in sh format (semicolons)
  -t n  timeout interval before assuming an NFS server is dead
        (default 10 seconds)
  -u    unique paths
  -v    verbose
  -D    debug
  -H    print host pinged
  -L    expand symbolic links
This program was originally written by Alan Klietz back in 1989. It has been maintained here at UiO since, but it's still in glorious K&R C and even runs on Ultrix. Improvements include support for NFS over TCP and a few useful flags.
The latest version is cknfs 1.9. We use it on HP-UX, IRIX, Linux and Solaris.
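The path-filtering idea can be sketched as follows. This is illustration only: cknfs proper probes NFS servers with RPC calls, while the stand-in probe here simply tries a TCP connection to the NFS port, and the mount table is passed in as a plain mapping rather than parsed from the system:

```python
import socket

def server_alive(host, port=2049, timeout=10):
    """Crude reachability probe: can we open a TCP connection to the
    NFS port within the timeout?  (A stand-in for a real RPC ping.)"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def filter_paths(paths, nfs_mounts, alive=server_alive):
    """Drop paths living under an NFS mount whose server is dead.
    nfs_mounts maps mount point -> server name, as one might parse
    from /proc/mounts.  Good paths are returned in order."""
    good = []
    for path in paths:
        server = next((srv for mnt, srv in nfs_mounts.items()
                       if path == mnt
                       or path.startswith(mnt.rstrip("/") + "/")),
                      None)
        if server is None or alive(server):
            good.append(path)
    return good
```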
This little script uses the Perl module Cyrus::SIEVE::managesieve, which comes with Cyrus IMAPd, to copy all Sieve scripts from one server to the other. It supports authorisation as a super user to enable copying without the users' passwords.
Mailman has a friendly but rather awkward web interface for manipulating the queue of messages held for moderator approval. Since I maintain a couple of dozen lists, some of which receive 50+ spams per day, I needed a way to reduce the time taken to process all the junk e-mail.
The result was listadmin. It is designed to keep user interaction to a minimum; in theory you could run it from cron to prune the queue. It can use the score from a header added by SpamAssassin to filter, or it can match specific senders, subjects, or reasons. The configuration file is Notepad.exe friendly. A sample configuration file:
password "Geheim"

# action to take when pressing just Return
default discard

# discard automatically anything with SA score higher than 6
spamlevel 6

discard_if_from ^(postmaster|mailer(-daemon)?|listproc|no-reply)@

email@example.com
firstname.lastname@example.org
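The per-message decision such a configuration drives can be sketched like this (Python for illustration; the option names mirror the sample above, but the function and its defaults are illustrative, not the script's actual internals):

```python
import re

def decide(sender, spam_score, spamlevel=None, discard_from=None,
           default="approve"):
    """Decide what to do with a held message: discard when the
    SpamAssassin score exceeds spamlevel or the sender matches the
    discard_if_from pattern, otherwise fall back to the default
    action (which the sample config sets to discard)."""
    if spamlevel is not None and spam_score is not None \
            and spam_score > spamlevel:
        return "discard"
    if discard_from and re.search(discard_from, sender):
        return "discard"
    return default
```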
You can't take a screenshot of a program like this, but a sample session may be instructive. See the manual page for the whole story. The script is written in Perl and requires a few modules, but AFAIK only Text::Reform isn't bundled with Perl 5.8.0.
Latest version is listadmin-2.40. It has many improvements over 2.32.
Many music albums contain "hidden tracks" implemented by putting a long stretch of silence into the last song. I find this very annoying, so when I rip the albums, I shorten all stretches of silence longer than six seconds. The new length of the silence is the square root of the length in seconds, so 5 minutes (I've seen even longer!) becomes roughly 17 seconds.
The program will print a terse summary if it removes data, I add this information as tags in my FLAC files.
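The length rule is simple enough to state in code; a minimal sketch (Python, for illustration):

```python
import math

def shortened_silence(seconds):
    """New length for a stretch of silence, per the rule described
    above: silences longer than six seconds are cut to the square
    root of their length in seconds, so 5 minutes becomes roughly
    17 seconds."""
    if seconds <= 6:
        return seconds
    return math.sqrt(seconds)
```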
Making sense of Iozone reports and seeing what's good and what's bad isn't easy, so I wrote a script which makes an HTML report with colour coding to make it easier to spot the differences in performance between two systems. Yellow means same performance, red is a regression, green is an improvement. If you feed it many reports, it will throw out the worst and best results (since they may be anomalous) and average the rest.
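The averaging and the colour classification can be sketched as follows (Python for illustration; the 5% tolerance for "same performance" is an assumption for this sketch, not necessarily what the script uses):

```python
def trimmed_mean(values):
    """Average results after discarding the single worst and best
    value, since they may be anomalous."""
    if len(values) <= 2:
        return sum(values) / len(values)
    return (sum(values) - min(values) - max(values)) / (len(values) - 2)

def colour(old, new, tolerance=0.05):
    """Classify a throughput change: yellow for roughly equal
    performance, red for a regression, green for an improvement."""
    ratio = new / old
    if ratio < 1 - tolerance:
        return "red"
    if ratio > 1 + tolerance:
        return "green"
    return "yellow"
```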
Sjekkpart scans your system for harddisks, and tells you about unused space etc. («sjekk» simply means «check» in Norwegian.)
This is an example (excerpt) from an old SunOS server:
==== sd0 ===== 1002 MB, SUN1.05
sd0a     32 MB  24% [1999-12-28] /
sd0b    245 MB      [   never  ] swap
sd0e    382 MB 103% [1999-12-30] /local
sd0g    201 MB  65% [   never  ] /usr
sd0     140 MB UNUSED (1750..2036)
==== sd1 ===== 1002 MB, SUN1.05
sd1a     23 MB  11% [   never  ] /var
sd1     978 MB UNUSED (48..2036)
The first line contains device name, disk size and disk name. The other lines are partition device name, partition size, %full (if mounted), date of last backup and local mount point. There may be a "G" (for «global») if the filesystem is in the automounter map. The numbers after «UNUSED» are first and last cylinder of the free area.
Sjekkpart understands Solaris' Disksuite and Veritas volume manager. It is designed to be easily ported to other operating systems. Please send me a copy if you do.
Genunames takes a full name as input, and tries to come up with reasonable user names. E.g., it will suggest kjetilh, kjetilth, kjetilho and so on given my name as input.
It is written in Perl, and should be easily pluggable into your existing scripts.
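The idea can be sketched like this (Python for illustration; the length limits are assumptions, and this sketch ignores middle names, so it produces kjetilh and kjetilho but not kjetilth):

```python
def username_candidates(fullname, minlen=7, maxlen=8):
    """Combine prefixes of the first name with prefixes of the last
    name to produce candidate user names of a sensible length,
    longest first-name prefix first."""
    parts = fullname.lower().split()
    first, last = parts[0], parts[-1]
    candidates = []
    for flen in range(len(first), 0, -1):
        for llen in range(1, len(last) + 1):
            name = first[:flen] + last[:llen]
            if minlen <= len(name) <= maxlen and name not in candidates:
                candidates.append(name)
    return candidates
```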
Have you ever gotten log messages like
client nfs: NFS write error on host server: No space left on device.
client nfs: File: userid=1232, groupid=13787
client nfs: (file handle: 2240005 3e7 a0000 1fa04 26 a0000 2 0)

and wondered exactly where the problem is? No more! Run the script fhfind.perl on the server:
# fhfind.perl 2240005 3e7 a0000 1fa04 26 a0000 2 0

and it will scrounge through your /dev to find the file system, and then look for the inode there.
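The second half of the job, looking for the inode once the file system is known, can be sketched like this (Python for illustration; extracting the inode number from the handle is file-system specific and not attempted here):

```python
import os

def find_by_inode(root, inum):
    """Walk the file system mounted at root and return the paths of
    all entries whose inode number matches inum."""
    matches = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.lstat(path).st_ino == inum:
                    matches.append(path)
            except OSError:
                continue      # entry vanished or is unreadable
    return matches
```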