Skirting the addition

Skirting the addition is not the same as skirting the issue. I’m using pre-painted roofing, cutting it with a Dual Saw and burying it 5 to 7 inches deep in the ground.

Some things that have helped me do this project:
2×4 strongtie metal ledge hangers, 1 per three feet
Pressure treated 2×4 “stud”, mounted widthwise, 2″ above ground level
Roofing screws with rubber washers, 1 1/2″ and 2″
Screened loam, John Deere tractor
sharp square loam spade, rake
8p short galvanized nails, palm nailer, compressor, hose

skirting

Installing Skirting

lots of clamps, sawhorse, extra 2×4’s to clamp to
tape, level, tri-square, sheet-stock square, pencil, marker
hammer, punch, drill, 1/4 ” hex driver bit

A graphical MD5 validator

Sometimes, in technology, you just go around in a circle. Back in 2007, I was cruising Ubuntu Forums and found this thread that intrigued me. A lady was looking for a GUI that would give an MD5 hash for a file. I played with some code to help her then, and subsequently forgot about it once I started using the excellent DownThemAll Firefox extension, which can check files downloaded from the Web against any number of verification codes, including MD5.

So last weekend I started looking at lightweight Debian-based distributions to load on my big USB stick and run as a Live USB. I found one I wanted to try (PureOSlight), but the website only gave an MD5 of the ISO image and a link to a third-party website where I could get a torrent file to download the image via BitTorrent. Playing with unknown software is risky enough, but in this situation the original author has no direct control over the ISO image I was going to fetch. Verifying against his MD5 hash was the only safe way I could work, and my BitTorrent client, Transmission, didn't have a hash-checking feature as far as I could see.

So while Transmission was getting pieces of my image from four different unknown places, I thought it would be nice to have some stand-alone hash-checking software. I checked out GtkHash from Ubuntu's repository and thought it was quite nice, but it was missing two features I wanted. First, I saw no place where I could paste the hash the author provided, so I was going to have to eyeball-compare it against the one GtkHash calculated from the image file. Man, I'm just too picky sometimes. Also, GtkHash won't take a filename as an argument, so I couldn't use it from a file manager. I wanted to use it directly from Nautilus: find the image file, right-click, choose Open With, paste the hash into a text box, and let the computer calculate the MD5 of the file and do the comparison.

I figured someone must have done this already, but Googling around didn't turn up any packaged software like that. I did, however, find my old forum thread from years ago! Since I have a few days off and am procrastinating on my tax return, I just could not leave this unfinished. Now that I've got it to a working point, though, I still haven't gotten to play with what I originally downloaded.

I'm releasing this under the MIT license, so you can do what you want with it, except sue me about it, but I would appreciate your posting suggestions for improvement here. This is alpha-quality at this point, but it does do the job. To install, paste the text below into a file named "md5compare" (/usr/local/bin is an appropriate place), and make it executable for all users (sudo chmod +x /usr/local/bin/md5compare). Technical notes from writing this little accessory appear after the code:

#! /bin/bash
#
# md5compare
# A graphical MD5 validator
# https://jimcooncat.wordpress.com/2011/03/09/a-graphical-md5-validator/
# Released under MIT license
# http://www.opensource.org/licenses/mit-license.html
#
# Usage: md5compare [filename]
# Can accept a filename for an argument, for
# example to use with a file manager's "Open with" feature,
# or will show a file selector if none is given.
#
Commandname=$(basename "$0")
SupposedMD5=$(zenity --entry \
  --text "Enter MD5Sum that file is supposed to be" \
  --title "$Commandname")
if [ $# -gt 0 ]; then
  Filename="$1"
else
  Filename=$(zenity --file-selection --title "$Commandname")
fi
Tempfile=$(tempfile --prefix="md5-" --suffix=".list")
SpaceChar=" "
# md5sum -c expects lines of the form "<hash>  <filename>" (two spaces)
echo "$SupposedMD5$SpaceChar$SpaceChar$Filename" > "$Tempfile"
Result=$(md5sum -c "$Tempfile" 2>&1 | \
  tee >(zenity --progress --text "Calculating MD5sum" \
    --title "$Commandname" --pulsate --auto-close) )
rm "$Tempfile"
zenity --info --text "$SupposedMD5\n $Result " --title "$Commandname"

Some things I ran into while writing this:

  1. $0 is the full pathname to this script, so I set each window’s title to $(basename $0) in case you come up with a better name for it than md5compare.
  2. I have no real error checking in this script; it would be smart to verify that the hash pasted in by the user is in the correct format before continuing (a minimal sketch of such a check appears after this list).
  3. In a full-featured version, it would be nice if the file-picker (displayed if no filename is passed to the script) defaulted to the directory where the user normally downloads their files. We’d need to store that preference in their /home.
  4. The tempfile command is very nice to use to give a unique name to a temporary file. I’m using it here for input to the md5sum command. Initially, I wanted to use a prefix of “md5command”, however it truncated my prefix after the fifth character. That’s kinda lame.
  5. The md5sum input file format is strange. It takes a file formatted with two spaces between the hash and the filename. For some reason (I am a bash noob), when I generated the file using the echo command initially, my two spaces in the script became only one in the file. So I assigned a space character to a variable and it worked properly with the code above.
  6. When running the md5sum command I wanted to show the user any error messages produced by it, so I used the “2>&1” phrase. This redirects stderr to stdout, so the final dialog will show any error messages. The message isn’t really pretty, but it is informative.
  7. While md5sum is churning against a large ISO image file, I piped the output to zenity’s progress dialog using tee. Otherwise, the user thinks that nothing is going on or the script aborted as the last dialog box would be closed. The pulsate feature doesn’t work though, since md5sum doesn’t generate any output while it’s crunching through a single large file. However, it serves the purpose, as the progress box closes when the md5sum command finishes.
  8. The final dialog box could be a bit more informative, especially for a new user.
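
For item 2, a minimal format check could look like the sketch below; it assumes bash and reuses zenity for the error dialog, and the message wording is just a placeholder:

# bail out early if the pasted value isn't 32 hex characters
if ! [[ "$SupposedMD5" =~ ^[0-9a-fA-F]{32}$ ]]; then
  zenity --error --title "$Commandname" \
    --text "That doesn't look like an MD5 hash (expected 32 hex characters)."
  exit 1
fi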

Your thoughts and suggestions, please!

Packaging Configuration Files?

Dear lazyweb, I have a question about Debian packaging (which also extends to Ubuntu and related distros): I'm after a setup that would allow me to install a server with custom configurations. Here's an example of what I'd like to do with the venerable openssh-server package, where I want to install an SSH server on a non-standard port.

custom-openssh-server (metapackage)
depends: custom-openssh-server-config, openssh-server

custom-openssh-server-config (adds non-standard port)
contains: /etc/ssh/sshd_config

openssh-server (straight from the official repo)

——————
The idea here is to install our custom config before installing the package we want. Good packages are not supposed to overwrite configuration files.
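
For the metapackage itself, I picture building it with something like the equivs tool; the sketch below is untested, and the field values are placeholders rather than a worked-out recipe:

sudo aptitude install equivs
equivs-control custom-openssh-server
# edit the generated control file so the key fields read roughly:
#   Package: custom-openssh-server
#   Depends: custom-openssh-server-config, openssh-server
#   Description: openssh-server plus our local sshd_config package
equivs-build custom-openssh-server
# result: custom-openssh-server_1.0_all.deb (or similar), ready for the repository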

So when we add our metapackage to a preseed (building a computer from scratch), it would automatically do the right thing. If openssh gets a security update, version numbering would work correctly.

Since we could have some sensitive information in the config files, we could use an ssh repository with apt, or at a minimum make our personal repository available only on the local network.

Can anyone explain the downsides to this approach?

Attempting to remove U3 with Linux

A friend found a 2 GB SanDisk Cruzer left in a Walmart shopping cart. There wasn't anything fun on it, but it did have an annoying auto-run program called U3.

I’m trying this method from a post at:
http://noisetheatre.blogspot.com/2006/08/uninstall-u3-and-free-your-usb-drive.html

—–

Peter wrote at 21 January, 2009 05:21…
So hopefully someone will find this useful. After lots of googling I found that there weren’t any instructions for removing U3 under linux. Truth be told, it’s really easy, but the solution is as obscure as it is easy.

1) Mount the U3 "cd" partition.
2) Run mount to find out the name of the device that U3 is on. It should be something like scd#; the important part is the number there.
2.5) Just to be sure you've got the right device, check that /dev/sr# is a symlink to the /dev/scd# you just found.
3) Now that you know which device you're looking for, you can start the actual removal. cd to /sys/class/block/sr#/device/
4) In this directory is a file named delete. It's write-only by root, and if you write to it (I've only ever tried with "1") the U3 partition will be removed. With root privileges, 'echo "1" > delete' removes it quite nicely.

——
Here were my results:

1) I just plugged it in, it automounted
2) jim@mickey:~$ mount
… /dev/sdb1 on /media/disk type vfat (rw,nosuid,nodev,shortname=mixed,uid=1000,utf8,umask=077)
2.5) jim@mickey:~$ ls -l /dev/sr0
lrwxrwxrwx 1 root root 4 2009-05-12 07:04 /dev/sr0 -> scd0
3) jim@mickey:~$ cd /sys/block/sr0/device/
jim@mickey:/sys/block/sr0/device$ ls -l delete
--w------- 1 root root 4096 2009-05-14 11:32 delete
jim@mickey:/sys/block/sr0/device$ sudo -i
root@mickey:~# cd /sys/block/sr0/device
root@mickey:/sys/block/sr0/device# echo "1" > delete
root@mickey:/sys/block/sr0/device# exit

After all that, it didn’t appear to do anything. I must be missing a step. I plugged it into a Win2k machine, and the U3 launchpad came up. I removed the software using the uninstall feature of U3.

So I guess that was a bust, but I’ll have more drives in the future to try this with.

Server Wishlist

I wrote these comments in response to Søren Hansen’s post about the future direction of Ubuntu Server development. I’m thankful for the recent discussion started by Thierry Carrez.

My wishlist:

Automatic installation. I'm using network preseeding to install servers and desktops, and it works like a charm. I've got apt-cacher in the mix, so when I install a new machine it pulls only the updates from the Internet. I keep a custom repository, and have been successful in setting things like /etc/network/interfaces by making small .deb packages for these customizations. Setting up and adding to the custom repository is a pain, because I don't understand the tools well enough yet, especially signing the packages so I don't get security warnings.
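
For the signing piece, my rough understanding is that it comes down to something like the commands below, run in the repository directory; apt-ftparchive and the key name here are assumptions on my part, not a recipe I've fully worked through:

# regenerate the package index, then a signed Release file
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
apt-ftparchive release . > Release
gpg --default-key "My Repo Key" -abs -o Release.gpg Release
# clients then need the matching public key, e.g.: sudo apt-key add myrepo-key.pub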

Ability to review updates. I'd like a simple way, either through a chroot or virtualization, to subject updates to a vetting process before making them available for installation. Again, repository management tools are needed.

High Availability. Even in a small office, downtime is expensive. I’d like to see packages like drbd-primary and drbd-secondary, where you install each on a different machine, and you have simple redundant storage. Same for essential services like dns, dhcp, internet gateways, font servers, etc.

Guided networking schemes. When setting up a local network, there should be sensible defaults and alternatives to a standardized addressing scheme. IP addresses to use 192.168… or 10.0…? Internal services available at gateway.lan, dns.lan, printers.lan?

File-based LAN configuration. The underpinnings of configuration could be based on files, which can be altered and administered through custom .debs. A database-driven front-end could be used to help define the network, which spews out custom .debs to be distributed through the dpkg updating already in place. If you want to migrate to LDAP or some other “registry” type scheme, have it read from existing files like dnsmasq reads /etc/hosts as a starting point. Don’t force complexity on the network, offer it as alternatives to a simple, robust default.

Bulk deleting Gmail contacts

I’m migrating from a hosted Zimbra email setup to Gmail. When importing my contact data I had some glitches, and wanted an easy way to delete all the Gmail contacts. But no! Gmail only allows you to delete 500 at a time. And to do that, you have to click-click 500 times to select the contacts you want deleted. That wasn’t good for me.

After searching the net and trying a couple of things from posts, I was going to do some major bash scripting. I went out and plowed the driveway, and got a good idea. I’ll just make a little GreaseMonkey script to check off the boxes for me.

I signed into the Gmail account, and clicked the link at the bottom of the page marked “basic HTML”. Then I clicked “Contacts”, then “All Contacts”. I loaded the greasemonkey script shown below, and refreshed the page. The first 400 checkboxes were checked off, so all I had to do was press the Delete button. The page reloads automatically with the next 400 boxes checked. In about a minute, all the contacts were deleted!

Note: I'm using Google Apps for Gmail; your "namespace" below might be different for regular Gmail, I'm not sure.

// ==UserScript==
// @name           Select 400 Checkboxes
// @namespace      https://mail.google.com/a/
// @description    Checks off the first 400 checkboxes; good for removing Gmail contacts
// @include        https://mail.google.com/*
// (the @include above should match the URL of the Contacts page you are on)
// ==/UserScript==

var allCheckboxes = document.evaluate(
    "//input[@type='checkbox']",
    document,
    null,
    XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE,
    null);

// Check off at most 400 boxes; the last page may have fewer checkboxes.
var limit = Math.min(400, allCheckboxes.snapshotLength);
for (var i = 0; i < limit; i++) {
    allCheckboxes.snapshotItem(i).checked = true;
}

Learn Technology with Monit

Over the past few days I’ve been playing with software called Monit.

Monit is a utility for managing and monitoring processes, files, directories and filesystems on a UNIX system. Monit conducts automatic maintenance and repair and can execute meaningful causal actions in error situations.

Translated to a simpler phrasing, Monit sits in the background and runs tests that you tell it to on your computer, and sends you an email about the results of those tests. Optionally, it can restart programs that stop working, or do any kind of trick you can dream up based on the results of the tests.

Monit comes with its own email sender, so you don't have to set up anything extra to get it to send you an alert. You will need to specify an email server, though.

Getting Monit to run is very simple. Thanks to no-names.biz, I've modified their howto posting to show you how to just get it running on Ubuntu 8.04 (Hardy), and I've used nano instead of vim as an easy-to-use editor for the configuration files. Before using this, get familiar with nano. Substitute anything unique to you, like your email address, wherever it appears below:

#sudo aptitude install monit

#sudo cp /etc/monit/monitrc /etc/monit/monitrc_original

#sudo nano -w /etc/default/monit

startup=1
CHECK_INTERVALS=60

Ctrl-O to save the file, Ctrl-X to exit nano.

#sudo nano -w /etc/monit/monitrc

set daemon 60
set logfile syslog facility log_daemon

# If you run your own mailserver (use this or the next entry):
set mailserver mail.mycompany.com

#For gmail instead of your own mailserver (all on one line):
set mailserver smtp.gmail.com port 587 username "you@gmail.com" password "password" using tlsv1 with timeout 30 seconds

set mail-format { from: monit@$HOST.mycompany.com }
set alert you@mycompany.com
set httpd port 2812
use address localhost
allow localhost
allow you:password
## Services
## You put your tests here.

Ctrl-O to save the file, Ctrl-X to exit nano.

#sudo invoke-rc.d monit start

———–
If all goes right, you should get an email shortly with the subject “monit alert — Monit instance changed localhost”. Because we used the $HOST variable in the mail-format section, you can tell which computer sent you this by looking at the from: address of the email. If you don’t get an email within a few minutes, well, the aggravation can start now while you fix the /etc/monit/monitrc file, probably by monkeying with the mailserver line.

# tail /var/log/daemon.log

The above command will give you some clues if it’s not working right, as monit will log the errors.

Now the fun begins, as we add tests to the end of the /etc/monit/monitrc file.

#sudo nano -w /etc/monit/monitrc
Scroll down to the end of the file; you can just mash the down-arrow key until you get there.
## Services
## You put your tests here.
check host mycompany.com with address mycompany.com
if failed port 80 proto http for 3 times within 5 cycles then alert
#
check host example.com with address example.com
if failed port 80 proto http for 3 times within 5 cycles then alert

Ctrl-O to save the file, Ctrl-X to exit nano.

#sudo invoke-rc.d monit restart
——
What this will do is check your remotely-hosted website, as well as the little website at example.com. If your website fails the check three times within five one-minute cycles, monit will email you an alert. I'm also including a check against example.com because there's the possibility that your computer isn't connecting to the internet properly. So if you get an email that both are failing, there's a good chance your website is still up, but your internal network's got a boo-boo.

A huge number of tests are available, and many different technologies have tests written for them. By playing with these tests and researching what they do, you will get a huge dose of technology learning across many different topics. Guaranteed.

Configuration examples from the monit wiki
Service test documentation
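
For example, a process test that restarts a service when it stops responding could look like the sketch below; sshd is just a stand-in here, and the pidfile and init script paths are assumptions you'd adjust to your own system:

check process sshd with pidfile /var/run/sshd.pid
  start program = "/etc/init.d/ssh start"
  stop program = "/etc/init.d/ssh stop"
  if failed port 22 protocol ssh then restart
  if 5 restarts within 5 cycles then timeout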

I’m currently running this one and trying to figure out how best to tweak it to my in-house server:

## Check the general system resources such as load average,
## cpu and memory usage. Each rule specifies the tested resource,
## the limit and the action which will be performed in the case
## that the test failed.
#
check system localhost
if loadavg (1min) > 4 then alert
if loadavg (5min) > 2 then alert
if memory usage > 75% then alert
if cpu usage (user) > 70% then alert
if cpu usage (system) > 30% then alert
if cpu usage (wait) > 20% then alert

Hi, Ubuntu Users!

I’m excited to have my blog listed on Ubuntu Weblogs, aka Planet Ubuntu Users. I haven’t missed a day of reading your entries for a few months now, after I discovered my favorite newsreader, Liferea.

I’m a private accountant in Maine U.S.A., and have been working with computers since 1986. I have many interests, and am enthusiastic about Ubuntu — especially the LTS releases. I run Ubuntu exclusively at home, and leverage it just as much as possible at work, though I do have to use Windows for several applications.

My past blogging has been long diatribes about the experiences I’ve had with projects I’ve taken on. I’ll try to put a few special, shorter tips in for you from time to time as I run across them. You’ll see me around on Ubuntu Forums and IRC as jimcooncat.

My current project at work is setting up a two-server system with a shared data partition (drbd), and I’ll be adding a healthy dose of KVM virtualization.

At home, I'm learning to build associative databases using SQLite and Tcl/Tk to use for automated publishing. I plan on opening a very small business called "Cooncat Publishing". I'll be mostly repurposing publicly available data in a wide variety of formats and media.

Here’s some of my earlier Ubuntu related entries I’ve made before getting on this Planet:

Almost everything has purpose, including Microsoft Windows

I work with two computer operating systems every day, Ubuntu Linux and Microsoft Windows. I post new things I find on web forums, both problems I’m having and tips for others. Whenever I get help from someone for my problems, I try to help at least one other. Once in a while, I hang out in IRC channels for a quicker fix of my community addiction….

A grand computing scheme

The plan

A few years in the planning, with several false starts, the dream of a smarter computing environment is taking shape. Like many small businesses, ours has a central point of data that is crucial to the business’s survival. Financial data, correspondence, photos, the publications we generate, and contact databases are our office’s lifeblood, and reside on a shared drive on a Windows computer….

Agonizing’s over, accept success when you can

When you don’t know where to start, attack everything at once.

I'd say the heading speaks for itself, yet a new project of this magnitude always seems to boggle me. Other things in my personal life were also in flux, as much as they are now that I'm settled. So instead of setting real goals, I had to come to terms with all the relationships that are involved in a reliable, robust computer network….

—-

Thanks for listing my blog!

Agonizing’s over, accept success when you can

When you don’t know where to start, attack everything at once.

I'd say the heading speaks for itself, yet a new project of this magnitude always seems to boggle me. Other things in my personal life were also in flux, as much as they are now that I'm settled. So instead of setting real goals, I had to come to terms with all the relationships that are involved in a reliable, robust computer network.

So, mulling over what I’ve learned about technology over the last umpteen years, I agonized. Over stupid stuff: users, groups, hard drives, network cards, virtualization, file systems, updating, customizations, preferences, and so many more things.

Don’t bite off more than you can chew at a time. Especially if it’s not appropriate to spit chunks.

Well, I rechecked the preseed setup I had installed on my computer to start with. Now preseeding, when it's set up well, is a magical beast. But like most magic, it's hard to fix when things don't happen as you expect. To explain: preseeding involves setting up a computer automatically, so if you have to reinstall everything on the computer you can do so quickly without remembering complicated steps. Or referring to half-assed notes you took before.

Therefore I set up two of the big computers to automatically load the Ubuntu Hardy Server operating system. In its current state, it's a very open, stable, practical system with the option of easily adding fancier features that make it robust. But the most advantageous feature is what it inherited from the Debian project: the apt package management system.

So I got the computers to the point where they could be wiped clean and reloaded with a basic operating system in about seven minutes. I played with this over and over again, adding a few tweaks as I went. When things went well, I'd hit the backup button I installed to make sure my work wouldn't be in vain if a hard drive failed. It looked like I was in sight of my goal: to be able to install a new server with a minimum of steps, so that a non-tech could follow along by purchasing the right equipment, plugging it in, and not having to make any decisions to get to a working state.

I hit a big snag when I installed two more network cards into the computer, which had only the one attached to the motherboard. I had searched for hours trying to find good cards that would work with Linux and give me the most bang I could get out of the office wiring. My plan was to use one of these cards to install from, yet during the installation process there was a problem. The card tries to connect to the network twice during the process, and the second time through it couldn't connect. It was maddening!

Network card Russian Roulette.

Fortunately, the Ubuntu installer has some very nice features, and I found that by pressing Alt-F2 I could log into the system at the point it was hung up, and Alt-F4 would show me all the messages the installer had spit out. It took me a while, but I realized that it was switching network cards on me between the first and second connection. I googled for hours to find the right answers; I couldn't have the computer switching network cards on me in some random manner. If I rebooted, or the power went out so long it had to shut down, then I would be the only one able to get it back running correctly. So much for reliability. I felt I was falling back into the hole I'd been spending all this time, and the boss's money, to dig out of.

After many reboots, I discovered that the computer would assign the network cards at random. Usually, the one on the motherboard would get named “eth0”, but occasionally it would end up as “eth2”. And the other two cards posed a problem as well, in that all I could see was their make and model; since I’d bought them at the same time from the same place, they were indistinguishable.

So as I googled my fortieth page looking for answers, I discovered that the nice folks who provide Ubuntu had solved this problem with some more magic called udev, but it only kicks in after the system is installed. That's great, but I do wish it had been more obvious; the configuration file is buried deep in the system. I do have to remember that if I change out cards I have to find that file again, or the new card will be mounted in a new place, rather than the system automatically replacing the one I've taken out.
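
For reference, on Hardy the file in question is /etc/udev/rules.d/70-persistent-net.rules, and a trimmed-down entry looks roughly like the line below; the MAC address is a placeholder, and the generator writes a few more attributes than I'm showing:

# pin the card with this MAC address to the name eth0 (placeholder address)
SUBSYSTEM=="net", ATTR{address}=="00:11:22:33:44:55", NAME="eth0"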

As much as I hated to, I resigned myself to installing from the original network card that came with the machine. It did make me rethink my security plan, and after agonizing over that for a while I realized that I just had to document plugging a cable in here to start, and there when it's ready to become a second router. No biggie, and it's embarrassing to worry too much about such a minor thing. But if you know me, that's not my nature; I cogitate on puzzles until they are solved or something else forces me to abandon them. Then I realized that this whole situation was forcing me to unplug the computer from the jack that goes to the internet while the installation was taking place.

Computers have no soul, and much as you’d like to be friends with them and assign human characteristics, they’ll never watch your back. Yet, this whole project has developed some life of its own; or it seems that way to me. Resolving this problem forced me back to what was smart, and made me realize that the one thing I’d always done during Windows installations — unplugging from the public network — was always a smart thing to do. Even for an operating system that we may feel secure with once it’s up and running.

Hacking is fun, and will cause you much heartache.

So now I had a good basic system going, and it was time to decide what my next move would be. Well, slick as a new Ubuntu installation is, it doesn't make much of a server out of the box. That's because any software that talks to the network, that is, any process that listens on a network port, is disabled even after installation. That's a very smart move on their part, as the liability of opening a port to the outside world can potentially be huge, or at least screw up your week.

But server software isn't very useful if it's not serving, and it's left to the admin to figure it out and configure it. Normally, the process goes like this: you install software, configure it and all the related pieces, it doesn't work as you expect, you try some features you think you may want, you find out you don't need all that junk, you get it working, and you make some comments along the way if you think you'll need to do it again.

It works for now. A year later, you install a new computer or some software that changes how things are set up. You go back to what you've done before, and remember how clever you were at the time. When you look again, you find you can't get back into the mindset you had before. What seemed obvious at the time you originally installed the software, and so was left out of your comments, is now coming back to bite you. And some of the relationships to other software are no longer applicable. Almost always, it ends up feeling simpler just to start from scratch again, so you do.

You know that putting in good comments and achieving a consistent state for others to follow is the right thing to do, but at that point you feel as if you've wasted your time. If you're taking over someone else's work, it's much worse unless they wrote an entire step-by-step how-to document. It's much better to configure software as an installable package, and keep consistency with the rest of the system. Keep commenting and provide a list of changes you've made over time, but you really only need to fully explain what makes your customized version different from the default. So, when it comes time for someone else (or you at a later stage in your life) to consider whether to use this version or go back to the drawing board, they only need to evaluate whether your package does what they want it to do; and you make it easy for them to make that decision by keeping your notes very simple.

Enjoy the nice things that are right in front of you.

Ubuntu inherited something very nice from the Debian project; it’s not only an operating system, it’s a software publishing platform. Much thought and care has been put into the publishing tools, and hundreds of little helper programs are available to round out your setup. You can list these programs as “dependencies” to your software, and call on them to provide services so you don’t have to write the functionality yourself. That’s great if you’re the official maintainer of the software package, and it works extremely well for their developers.

Now I want to make packages for myself, to set up this new network automatically with a custom configuration. It turns out that the huge effort that goes into developing the operating system doesn't support this well with how-to documents; or rather, the documents are huge and unwieldy because they address things such as how to submit new work to the community for inclusion in the distribution.

Here's the choice I was facing: either do things the old-fashioned hacking way, or learn how to apply what was already available. I thought some more about how system administrators normally maintain and distribute custom configurations. I was looking at developing my own way of pulling down a custom configuration, applying it to the software, restarting the software if needed, and so on. Or I could use tools others had developed to do similar things, learning their arcane nomenclature and understanding their mindset. As I looked at this hard, I could see very clearly that rolling my own system would require much more work on my part than just learning how to do it right in the first place.

Old dog learns new tricks.

So after poring over more how-to documents and bugging the nice people at Freenode's #ubuntu-motu chat room, I've realized a very simple way to make and distribute software configured to my needs. Note that this is just a starting point; I'll be refining and better documenting this method as I learn more.

  • Start with a clean working system.
  • Install the official package you want to customize.
  • Tweak the configuration as desired, making comments along the way.
  • Test, re-tweak, and test again until all bugs are resolved or acceptable.
  • In an empty directory, run "sudo dpkg-repack --generate <packagename>". This will make the guts of a .deb package with all your configuration changes intact.
  • Edit the DEBIAN/control file to bump the version number up higher than the original software.
  • Run “sudo dpkg-deb -b <your directory>”. This makes a file called “.deb”, which is a hidden file.
  • Rename the .deb file to a standardized package name with your new version.
  • Copy the renamed file to your repository.
  • In your repository, run “dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz”.
  • On the computers you want to install or update, make sure your repository is listed in "/etc/apt/sources.list" or "/etc/apt/sources.list.d/" (a sample entry appears after this list).
  • Run “sudo aptitude update”. This will make your new package available.
  • Run “sudo aptitude purge <your package>” if there’s an older version of the software installed. This wipes out any configuration files on the target system as well.
  • Run “sudo aptitude install <your package>”. That’s it!
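
For the sources.list step, a flat repository indexed with dpkg-scanpackages as above can be listed with a single line; the host and path below are placeholders for wherever the directory is served over HTTP:

# custom repository (placeholder address)
deb http://repo.example.lan/custom ./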

If this looks complex, of course it is. The method has many advantages to it though:

  • You can simply install the custom configuration on as many computers as you want once this process is done.
  • If you want to share your customized package with others, you make a publicly accessible repository and upload/update it.
  • You can include your customized package in a preseeded, hands-off install.

I'm missing some steps I'll want to include once I learn how, mainly how to properly document the changes. That looks very simple, I just haven't gotten that far yet. And I'll be automating this process along the way so I won't have to copy and paste the somewhat arcane commands. Thank you for reading; I'll look forward to writing another overly-long post on some irregular date in the future.

A grand computing scheme

The plan

A few years in the planning, with several false starts, the dream of a smarter computing environment is taking shape. Like many small businesses, ours has a central point of data that is crucial to the business’s survival. Financial data, correspondence, photos, the publications we generate, and contact databases are our office’s lifeblood, and reside on a shared drive on a Windows computer.

Backing up this drive was initially a nightmare. A consultant installed a tape drive with terrible software, and it was my job to change the tapes daily. This wouldn’t have been so bad, but restoring any file would take a matter of two hours — if I was lucky. I was having a 50% rate of recovery. I suppose that if I had spent another $500 of the company’s cash and gone to a training session on the software, I may have improved my efficiency somewhat.

A couple years later, I switched my PC to Ubuntu Linux. After some research, I settled on a backup tool called rsnapshot. Once I started using that software, my nightmare went away. I was able not only to back up our drive, but in a fashion that was automatic and very easy to live with. It not only stores the files, but also the changes to them, so I can easily go back to previous edits. The backups are on my hard disk, which I trust much more than tapes that stretch and wind and grind. Restoring is super-simple — I just go into the appropriate directory and copy a file, no need to run any special software to restore.
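
A setup like the one I describe boils down to a handful of lines in /etc/rsnapshot.conf, roughly like the sketch below. The paths, host name, and retention counts are placeholders rather than my actual config, and note that rsnapshot wants tabs, not spaces, between the fields:

snapshot_root   /backup/snapshots/
interval        daily   7
interval        weekly  4
# pull the office shared drive onto the local disk (placeholder host and paths)
backup          backupuser@fileserver:/shared/  fileserver/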

As time wore on, our computers started wearing out. A power supply here, a hard drive there, entropy was starting to catch up with us. Our boss was reluctant to shell out for new hardware, and I was reluctant to keep installing software onto new computers. I’d spend two days reinstalling Windows and all the software we use, only to wind up with an environment that wasn’t an exact clone of what the worker had before — so they’d spend at least a couple days worth of work getting used to it.

I started again to think about that shared drive, and realized that if the machine it was on went south, it would mean some hours of downtime to restore it. Many times in the office it would not affect our bottom line much, but there are hectic moments where downtime would be disastrous. At some point, and I'm not sure when, computers turned from an efficiency tool into a production mechanism. And we need a production-grade environment.

Doing some heavy research on disaster recovery and high availability scenarios, I found that small businesses are greatly under-served by commercial computing vendors. The marketing teams generate huge amounts of hype for bad products that don’t work correctly, or try to take a product made for larger business and shoehorn it into cheap hardware. They didn’t seem to take value into account at all.

Like many in my situation, I played with the idea of getting a big box and using RAID1 or RAID5; that is, multiple drives that let one fail while the system keeps working. It seemed workable to me, and then we lost a computer entirely to a power spike. Not only did the power supply burn out, but the drive fried as well. A few weeks later, our huge web hosting provider went down. The reason? A RAID controller in one of their servers burned out, and they had to wait for a delivery from the other side of the state to get back up and running. All the while, they had been touting the safety and redundancy of their service. I was disgusted.

It was apparent that we needed to not only protect the drive itself, but the whole environment. Single points of failure are not a permanent option for a business that provides my livelihood. Back to scouring the internet for solutions, I came across the Linux-HA project, which provides tools called drbd and heartbeat. These tools make two drives on two separate computers appear as one drive to the rest of the machines. If one of the computers goes bad, the other takes over in about 15 seconds. The only downtime is that the worker would probably have to restart their machine. This was what I was looking for!

After a few sales pitches, my boss agreed to let me construct a new server setup. I purchased three new identical machines, with additional large hard drives and 4 gigabytes of RAM each. So there they were, sitting on my newly-cleared-off workbench in the basement, and then I had to wait. Other projects screamed for my attention, and the staff was going through some changes. So there it sat for a full year, almost untouched. I am very thankful for the patience my life experiences have trained into me; yet this project I needed to build stayed stuck in my mind, and I knew that distraction wouldn't cut it.

Setting up my computer

Last week I finally was able to start on the project. Ubuntu Linux has progressed to the point where I will be able to have support for the operating system without massive changes for another four years. The hardware purchased has proven to be wonderful, as I had taken one of the boxes and used it to run two virtualized copies of Windows with fair success. And I had done plenty of research, so I understood to a management level what I wanted out of this — but now for the implementation.

The primary focus of this project is simplicity. Taking complex situations and boiling them down to their underlying substance is the kind of puzzle I thrive on. So what I want to do with this setup is to turn each piece into an appliance, which can be fixed and maintained with a minimum of instruction.

So I’ve started by turning my work computer into a “netboot installer”. This is probably the most magical thing I’ve ever seen a computer do in all the years I’ve worked with them. I take one of the new computers, plug it into the network, and reboot it. When it starts up, I press a key (the F8 key on these) and tell it to boot from the network. It finds my work computer, and automatically wipes out everything on the drive, installs the operating system (plus any additional software I want loaded), and reboots. When it’s done, it’s completely loaded with the most current versions of everything; I don’t even need to go through another long round of updates!

Setting up this kind of environment wasn't simple, of course. There are four pieces of software that I had to install and configure the hell out of on my work computer: dhcpd3 (the network magic), tftpd-hpa (serves configuration files and the installer kernel), apt-cacher (temporarily stores the operating system files) and apache2 (a web server for the operating system files).
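
The PXE-specific part of the dhcpd3 configuration is only a few lines; a stripped-down sketch looks something like this, with placeholder addresses standing in for the real office network:

subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.200 192.168.1.220;
  # send booting machines to the TFTP server for the PXE boot loader
  next-server 192.168.1.10;
  filename "pxelinux.0";
}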

So we take the new machine and plug it into the network. Start it up, and when it first beeps mash the F8 key and tell it to boot from the network card. It’s hands-off from then on if the configuration is right. The new machine looks for the DHCP server that’s broadcasting on a special address. When it finds it, it downloads the installer operating system and starts it up. This temporary installer then looks again at the DHCP server, and gets its first instructions from it based on the new machine’s network hardware address. It then downloads the “preseed”; a series of answers to the questions it normally asks during installation.

I have the preseed set up to reformat the hard drive and set up new partitions. It also does many other things automatically, like telling the computer to use English, a US keyboard, and set the time zone.

It then starts to download the packages of the operating system from the internet. Since this normally takes more than an hour for a server (even longer for a desktop), we have it download through apt-cacher. What this does is to automatically store each package locally on my work computer, so it won’t have to be downloaded again unless it gets a new update. Once you’ve done one install, subsequent installs take only a few minutes; I’m down to seven minutes for a subsequent server install, and I believe a desktop install (with graphics) will take about twenty based on a test I did last year. So far, I’ve only done the first desktop install on this new setup, as I realize this isn’t the current project — but I may need to use it for another computer at some point.
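
To give a flavor of what those preseed answers look like, here are a few sketched lines; the values are placeholders, the proxy line is what would send package downloads through apt-cacher, and the exact keys can differ between releases:

d-i debian-installer/locale string en_US
d-i console-setup/layoutcode string us
d-i time/zone string US/Eastern
d-i partman-auto/method string regular
d-i mirror/http/proxy string http://workstation.lan:3142/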

Once the new computer has been installed, it reboots itself and loads the new operating system from the hard drive. So if I had to talk the boss through re-installation of a computer over the phone, it would go like this:

  1. Plug the computer into the network jack.
  2. Start it up, and press F8 (several times if you want) at the first beep.
  3. Press the down arrow until you reach the menu item "NVIDIA Boot Agent".
  4. Watch for half a minute to make sure nothing gets stuck. If it does within that time, restart.
  5. Go do something else for a while. If you hear another beep from the computer, it’s done and will show a login screen.

Customizing

Well, the workbench is in the basement with the new computers on it, but my office is on the upstairs floor. The first day I was working, I was running up and down two flights of stairs to reboot, adding to the distraction of my co-worker at the front desk. Well, I did move the hardware upstairs on the second day, so that’s much better.

My next step was to check out the new operating system environment to see how the installation went. So I added a remote-control program called openssh-server to the setup. Now once I hear the second beep (much easier now that it’s upstairs) I can log into the new computer and check it out.

The installer configuration lets me do some of the customization I need to the operating system, but not everything. I can add packages to the setup, like I did with openssh-server. But beyond some very simple options, it installs everything with the defaults, which in some cases means that a program I want started automatically isn't set up to start. Well, I need to customize, and with that comes a problem.

In the many years I’ve worked with computers, I’ve seen a lot of installations. The tech setting them up would load the operating system, then play around with the settings until they get it the way they want. Most times, they would only document what they believe wasn’t obvious, invariably leaving out an important detail or two. Subsequent installs meant doing the same thing, relying on notes or memory from previous times.

This is no way to ensure quality installation every time. But we have another option, which is to customize the software packages. With the high quality package management that Ubuntu inherited from the Debian project, I can make my own repository of customized packages that will seamlessly integrate with the setup. So one package at a time, I’ll unpack it, tweak it, package it back up, and test it. If I hose the whole computer doing so, oh well — it will cost me another twenty minutes (or less) to reinstall the whole machine. Since I have two machines right now to test the installation, I just work on the other one while the reinstall is going on.

Time to back up

As I was driving home after the second day, I was quite pleased I had gotten the bugs out of the installer setup (there had been many, almost all my fault for not understanding the manuals) and was looking forward to starting the package customization portion. Then it dawned on me that the setup on my work computer wasn't backed up! First thing the next workday, I installed a simple backup program. Now after I test changes to the setup, I can press a button and make a new backup of my installer setup. Whew!

The third day at this I needed to get some other work done, but I was able to research how to make a simple “software repository” for my customized packaging. It turns out that this is a deceptively simple thing to do, but there was very little available on the internet to point me to the right documentation; so the research took me some time that I didn’t expect.

Next steps

Now I have two major tasks ahead of me: to set up drbd and heartbeat on two of the computers and test them out, and to clone my installer setup onto the new computers. These are both fairly major undertakings since it's new-to-me software, but I believe the methods I have so far will let me plug away at it easily while doing other work during the day. I'm also going to be installing two monitoring tools: a generalized package called "monit" that can test for all sorts of things, and a specialized "smartmontools" that will notify me of potential hard drive problems before anything actually goes bad with them.

I’m so looking forward to this work!
