Agonizing’s over, accept success when you can

When you don’t know where to start, attack everything at once.

I’d say the heading speaks for itself, yet a new project of this magnitude always seems to boggle me. Other things in my personal life were also in flux back then, though they’re settled now. So instead of setting real goals, I had to come to terms with all the relationships involved in a reliable, robust computer network.

So, mulling over what I’ve learned about technology over the last umpteen years, I agonized. Over stupid stuff: users, groups, hard drives, network cards, virtualization, file systems, updating, customizations, preferences, and so many more things.

Don’t bite off more than you can chew at a time. Especially if it’s not appropriate to spit chunks.

Well, I rechecked the preseed setup I’d originally used to install my own computer. Now preseeding, when it’s set up well, is a magical beast. But like most magic, it’s painful to debug when things don’t happen as you expect. To explain: preseeding means answering the installer’s questions ahead of time so a computer sets itself up automatically, and if you ever have to reinstall everything you can do so quickly without remembering complicated steps. Or referring to half-assed notes you took before.
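For the curious, a preseed file is just a long list of answers to the questions the installer would otherwise stop and ask. Here are a few illustrative lines; the hostname, mirror, and package choice are placeholders of mine, and a real file needs many more entries:

    # answer the installer's questions before it asks them
    d-i netcfg/get_hostname string server1
    d-i mirror/http/hostname string archive.ubuntu.com
    d-i mirror/http/directory string /ubuntu
    d-i partman-auto/method string regular
    d-i pkgsel/include string openssh-server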

Therefore I set up two of the big computers to automatically load the Ubuntu Hardy Server operating system. In its current state, it’s a very open, stable, practical system with the option of easily adding fancier features that make it robust. But its most advantageous feature is what it inherited from the Debian project: the apt package management system.
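If you haven’t lived with apt, the appeal is hard to overstate: one command fetches a program and everything it needs to run. For instance (openssh-server is just a handy example, not part of my setup):

    # pulls in the SSH server plus every library and helper it depends on
    sudo aptitude install openssh-server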

So I got the computers to where they could be wiped clean and reloaded with a basic operating system in about seven minutes. I played with this over and over again, adding a few tweaks as I went. When things went well, I’d hit the backup button I’d set up to make sure my work wouldn’t be in vain if a hard drive failed. It looked like I was in sight of my goal: a server installation with so few steps that a non-tech could follow them, by purchasing the right equipment, plugging it in, and reaching a working state without making any decisions.

I hit a big snag when I installed two more network cards into the computer, which until then had only the one attached to the motherboard. I had searched for hours trying to find good cards that would work with Linux and give me the most bang I could get out of the office wiring. My plan was to run the installation through one of these new cards, yet during the installation process there was a problem. The installer connects to the network twice during the process, and the second time through it couldn’t connect. It was maddening!

Network card Russian Roulette.

Fortunately, the Ubuntu installer has some very nice features: I found that by pressing Alt-F2 I could log into the system at the point where it was hung up, and Alt-F4 showed all the messages the installer had spit out. It took me a while, but I realized that it was switching network cards on me between the first and second connection. I googled for hours to find the right answers; I couldn’t have the computer switching network cards on me in some random manner. If I rebooted, or the power went out long enough that it had to shut down, I would be the only one able to get it running correctly again. So much for reliability; I felt I was falling back into the hole I’d been spending all this time, and the boss’s money, to dig out of.

After many reboots, I discovered that the computer was assigning names to the network cards at random. Usually the one on the motherboard would be named “eth0”, but occasionally it would end up as “eth2”. The other two cards posed a problem as well, in that all I could see was their make and model; since I’d bought them at the same time from the same place, they were indistinguishable.
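For anyone fighting the same battle: the one thing that does tell identical cards apart is each card’s burned-in hardware (MAC) address. These commands are my own suggestion for listing them, not anything out of the installer:

    # list every interface the kernel sees, with its MAC address
    ifconfig -a | grep HWaddr

    # or read the addresses straight out of sysfs
    cat /sys/class/net/eth*/address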

So on my fortieth page of googling for answers, I discovered that the nice folks who provide Ubuntu had already solved this problem with some more magic called udev, which pins each card to a fixed name based on its hardware address; it only kicks in after the system is installed, though. That’s great, but I do wish it had been more obvious, since the configuration file is buried deep in the system. I also have to remember that if I swap out a card I’ll need to find that file again, or the new card will be mounted in a new place rather than automatically replacing the one I’ve taken out.
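On my Hardy installs that buried file is /etc/udev/rules.d/70-persistent-net.rules; if yours lives elsewhere, hunt around under /etc/udev/. Each line pins a hardware address to a name. The addresses below are made up, so check the file on your own system for the exact format it uses:

    # the onboard card, pinned so it is always eth0
    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

    # one of the two identical add-in cards, pinned to eth1
    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:66", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"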

As much as I hated to, I resigned myself to installing from the original network card that came with the machine. It did make me rethink my security plan, and after agonizing over that for a while I realized that I just had to document plugging a cable in here to start, and there when the machine is ready to become a second router. No biggie, and it’s embarrassing to worry too much about such a minor thing. But if you know me, that’s not my nature; I cogitate on puzzles until they’re solved or something else forces me to abandon them. Then I realized that this whole situation was forcing me to keep the computer unplugged from the jack that goes to the internet while the installation was taking place.

Computers have no soul; much as you’d like to be friends with them and assign them human characteristics, they’ll never watch your back. Yet this whole project has developed some life of its own, or it seems that way to me. Resolving this problem pushed me back toward what was smart, and reminded me that the one thing I’d always done during Windows installations, unplugging from the public network, is worth doing even for an operating system we may feel secure with once it’s up and running.

Hacking is fun, and will cause you much heartache.

So now I had a good basic system going, and it was time to decide what my next move would be. Well, slick as a new Ubuntu installation is, it doesn’t make much of a server out of the box. That’s because any software that talks to the network, that is, any process that listens on a port for incoming connections, is disabled even after installation. That’s a very smart move on their part, as the liability of opening a port to the outside world can potentially be huge, or at least screw up your week.
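You can check this claim yourself on a fresh install; the list of listening sockets comes back nearly empty. (Using netstat for this is my suggestion, not an official Ubuntu procedure.)

    # -t TCP sockets, -l listening only, -n numeric output, -p owning process
    sudo netstat -tlnp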

But server software isn’t very useful if it’s not serving, and it’s left to the admin to figure it out and configure it. Normally, the process goes like this: you install the software, configure it and all the related pieces, it doesn’t work as you expect, you try some features you think you may want, you find out you don’t need all that junk, you get it working, and you make some comments along the way in case you ever need to do it again.

It works for now. A year later, you install a new computer or some software that changes how things are set up. You go back to what you’ve done before and remember how clever you were at the time. But you find you can’t get back into the mindset you had then. Whatever seemed too obvious to put in your comments when you originally installed the software is now coming back to bite you, and some of the relationships to other software no longer apply. Almost always, it ends up feeling simpler just to start from scratch again, so you do.

You know that putting in good comments and achieving a consistent state for others to follow is the right thing to do, but at that point it feels as if you’ve wasted your time. If you’re taking over someone else’s work, it’s much worse, unless they wrote an entire step-by-step how-to document. It’s much better to configure software as an installable package and keep consistency with the rest of the system. Keep commenting, and provide a list of the changes you’ve made over time, but you really only need to fully explain what makes your customized version different from the default. Then, when it comes time for someone else (or you at a later stage in your life) to consider whether to use this version or go back to the drawing board, they only need to evaluate whether your package does what they want it to do; and you make that decision easy for them by keeping your notes very simple.

Enjoy the nice things that are right in front of you.

Ubuntu inherited something very nice from the Debian project: it’s not only an operating system, it’s a software publishing platform. Much thought and care has been put into the publishing tools, and hundreds of little helper programs are available to round out your setup. You can list these programs as “dependencies” of your software and call on them to provide services, so you don’t have to write the functionality yourself. That’s great if you’re the official maintainer of a software package, and it works extremely well for Debian’s own developers.
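To make “dependencies” concrete, here’s roughly what the relevant part of a package’s DEBIAN/control file looks like. The package name, maintainer, and dependency list are invented for illustration:

    Package: acme-backup-scripts
    Version: 1.0
    Section: admin
    Priority: optional
    Architecture: all
    Maintainer: Me <me@example.com>
    Depends: rsync, anacron
    Description: Acme's in-house backup scripts
     Nightly rsync jobs, hooked into anacron so machines that were
     off overnight catch up when they come back.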

Now I want to make packages for myself, to set up this new network automatically with a custom configuration. It turns out that the huge effort behind the operating system doesn’t support this well with how-to documents; or rather, those documents are huge and unwieldy because they address things such as how to submit new work to the community for inclusion in the distribution.

Here’s the choice I was facing: either do things the old-fashioned hacking way, or learn how to apply what was already available. I thought some more about how system administrators normally maintain and distribute custom configurations. I was looking at developing my own way of pulling down a custom configuration, applying it to the software, restarting the software if needed, and so on. Or I could use tools others had developed to do similar things, which meant learning their arcane nomenclature and understanding their mindset. As I looked at this hard, I could see very clearly that rolling my own system would require much more work than just learning how to do it right in the first place.

Old dog learns new tricks.

So after poring over more how-to documents and bugging the nice people in Freenode’s #ubuntu-motu chat room, I’ve arrived at a very simple way to make and distribute software configured to my needs. Note that this is just a starting point; I’ll be refining and better documenting this method as I learn more.

  • Start with a clean working system.
  • Install the official package you want to customize.
  • Tweak the configuration as desired, making comments along the way.
  • Test, re-tweak, and test again until all bugs are resolved or acceptable.
  • In an empty directory, run “sudo dpkg-repack --generate <package name>”. This generates the guts of a .deb package, with all your configuration changes intact, in a subdirectory.
  • Edit the DEBIAN/control file to bump the version number up higher than the original software.
  • Run “sudo dpkg-deb -b <your directory>”. This builds the directory into a .deb package file; depending on what you pass as the directory, the output can end up oddly named or even hidden, which the next step fixes.
  • Rename the .deb file to the standard <name>_<version>_<architecture>.deb pattern, using your new version.
  • Copy the renamed file to your repository.
  • In your repository, run “dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz”.
  • On the computers you want to install or update, make sure your repository is listed in “/etc/apt/sources.list” or in a file under “/etc/apt/sources.list.d/” (there’s an example line in the walkthrough after this list).
  • Run “sudo aptitude update”. This will make your new package available.
  • Run “sudo aptitude purge <your package>” if there’s an older version of the software installed. This wipes out any configuration files on the target system as well.
  • Run “sudo aptitude install <your package>”. That’s it!
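To make those steps concrete, here’s the whole sequence run against one package. Everything specific is an example of mine, not gospel: the package (apache2), the version string, the architecture, and the repository path /var/local/repo. Substitute your own values:

    # generate the package guts from the installed, tweaked apache2
    mkdir ~/repack && cd ~/repack
    sudo dpkg-repack --generate apache2

    # bump Version: in DEBIAN/control, e.g. 2.2.8-1 to 2.2.8-1custom1
    # (the generated directory is named dpkg-repack-<something>; the command prints it)
    sudo vi dpkg-repack-*/DEBIAN/control

    # build, rename to the standard pattern, and publish
    sudo dpkg-deb -b dpkg-repack-*
    sudo mv *.deb apache2_2.2.8-1custom1_i386.deb
    sudo cp apache2_2.2.8-1custom1_i386.deb /var/local/repo/
    cd /var/local/repo && sudo sh -c 'dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz'

    # each target lists the repository in /etc/apt/sources.list (or sources.list.d/):
    #   deb file:///var/local/repo ./
    sudo aptitude update
    sudo aptitude install apache2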

If this looks complex, of course it is. The method has many advantages, though:

  • Once this process is done, you can install the custom configuration on as many computers as you want.
  • If you want to share your customized package with others, you can make a publicly accessible repository and upload or update it there.
  • You can include your customized package in a preseeded, hands-off install.

I’m missing some steps I’ll want to include once I learn how, mainly how to properly document the changes. That looks very simple, I just haven’t gotten that far yet. And I’ll be automating this process along the way so I won’t have to copy and paste the somewhat arcane commands; a rough sketch of that automation follows.
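Here’s a first stab at that wrapper, strictly a sketch under my own assumptions: the script name, the argument handling, and the trick of reading the architecture back out of the control file are all mine, and I haven’t battle-tested any of it yet.

    #!/bin/sh
    # repackage.sh -- sketch of the repack-and-publish steps above
    # Usage: sudo ./repackage.sh <package> <new version> <repo dir>
    set -e
    PKG="$1"; VER="$2"; REPO="$3"

    WORK=$(mktemp -d)
    cd "$WORK"

    # generate the package guts from the installed, tweaked package
    dpkg-repack --generate "$PKG"
    DIR=$(echo dpkg-repack-*)   # the generated directory's exact name varies

    # stamp the new version into the control file
    sed -i "s/^Version: .*/Version: $VER/" "$DIR/DEBIAN/control"

    # build under the standard name_version_arch.deb pattern
    ARCH=$(awk '/^Architecture:/ {print $2}' "$DIR/DEBIAN/control")
    dpkg-deb -b "$DIR" "${PKG}_${VER}_${ARCH}.deb"

    # publish to the repository and regenerate the index
    cp "${PKG}_${VER}_${ARCH}.deb" "$REPO/"
    (cd "$REPO" && dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz)
    echo "Published ${PKG}_${VER}_${ARCH}.deb to $REPO"

Thank you for reading, and I’ll look forward to writing another overly-long post on some irregular date in the future.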