Do Package Managers Spoil Us?

I thought of this interesting question the other day while messing around with Slackware 9.0, one of the last versions of Slackware to come on a single disc. The goal was to take a Slackware 9.0 install all the way to the most recent stable release, and I almost made it. Glibc was the largest hassle…I got as far as Slackware 11.0 before something left the system unable to boot at all. All things considered, I spent three days trying to bring Slackware 9 up to current.

Slackware, for those of you who don’t know, has no dependency-resolving package manager. A good attempt was made previously with swaret…that was my first jump into package managers with dependency resolution altogether when it came out…but swaret is no longer maintained and doesn’t really work well anymore.

Since Slackware has no real dep-resolving package manager…it’s one of the last ‘true’ Unix-like Linux distributions out there. Back in the early to mid nineties…things were exactly like this. If you wanted to update your Linux install…you stepped through it manually and tried to get things to work. What was great about Slackware was making your own Slack packages from source…no dependency resolution, but in the process of making the package you’d eventually have all the dependencies installed. Through this entire process, you became VERY familiar with your system…how it booted, what happened at each runlevel, how cron jobs worked, etc. You were baptized by fire, so to speak…sink or swim.
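Just to illustrate what that looked like, here’s a minimal sketch of the old build-from-source-then-package routine, assuming a typical autotools tarball and Slackware’s stock makepkg and installpkg tools (foo-1.0 is a made-up package name, and the sketch assumes the Makefile honors DESTDIR):

# foo-1.0 is a placeholder…any autotools-style source tarball works the same way
tar xzf foo-1.0.tar.gz && cd foo-1.0
./configure --prefix=/usr
make
make install DESTDIR=/tmp/foo-build
cd /tmp/foo-build
makepkg -l y -c n /tmp/foo-1.0-i486-1.tgz
installpkg /tmp/foo-1.0-i486-1.tgz

No dependency resolution anywhere in there…if configure complained about a missing library, you went and built that one first.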

As I said, this got me thinking…do we rely on dependency-resolving package managers TOO much? They’re cliché now of course…run of the mill. Back in the 1990s, though, rpm was pretty much the only true package management system around…and rpm was never designed for internet consumption. The guys who wrote rpm had CD and floppy upgrades in mind. Fast forward to now and we have zypper, pacman, urpmi, apt, and conary…all built with online repositories in mind. Do these managers take the heavy lifting away from new users? Do they spoil them?

Do systems break less, and get fixed more easily, because of package managers? Does that mean the new user of today won’t be as experienced as the old user of yesterday?

I think it might.

Users in the past had to chip away and reassemble with less documentation and no package manager. This meant that the user of yesterday ripped apart systems and packages to discover how they worked and which cogs fit where.

The user of today follows step-by-step instructions, and most package maintainers give the software a sane set of defaults when the package is installed.

Does this make for lazy users?

I don’t think users are lazy per se…but, as previously stated, spoiled. And it’s no fault of their own…it’s the direction the software has taken us. Now the questions we need to answer are:

  1. Is this the correct direction to be heading?
  2. Are there better approaches to package management that don’t follow the model we currently have (other than Conary)?
  3. Can we come up with a system that doesn’t spoil new users?

I think I’m of both worlds…I started off with no package manager but managed to ride the wave of Red Hat 7.2 and above, followed by Mand{rake,riva} and PCLinuxOS. I’m both spoiled and unspoiled. I know what it takes to manage a system without a conventional package manager, but I also know how much time using one can save me. I sometimes find myself wanting less though…less and more. Less time and more hands-on gutting of the system. I think I’m in the minority though.

How about you, as a reader of this article? Do you think new users are spoiled by conventional package management systems? Do you see solutions or have ideas we can discuss? Is this really just a process we can improve or is there any programming to be done? Please sound off in the comments section!

Ubuntu Names Their Desktop After Us?

I was quite surprised this morning whilst reading my RSS feeds to discover that Ubuntu has named their most recent ‘lite desktop’ Unity. Surprised because we have our own project, Unity Linux. Strange that our ‘lightweight distribution and desktop’ and Ubuntu’s ‘lite desktop’ should share a name.

While I’m not really sure why no one put a stop to this in the Canonical brainstorming session that produced ‘Ubuntu Unity’, one can only have a laugh about it and hope we don’t get our pants sued off, even though we named our distro first.

If things do get hairy, I’m sure we can change our name to ‘Unity Ubuntu’ or something similar to properly confuse everyone.

So, on behalf of all the Unity Linux developers, I’d like to thank the Academy and give a special shout out to Ubuntu for making our name known!  Thanks Mark!  Oh and good luck with that Unity thing! 😛

* devnet removes tongue from cheek

Green Technology

I recently began blogging with my wife about green living, including green technology. We launched the site a couple of days ago, and today I published my first post on green technology: Corky, a Battery Free Wireless Mouse. It’s a mouse that charges itself with every mechanical motion and button click you make.

We’re really excited to chronicle the changes we make in our lives to ‘go green’ and we’re attempting to develop a community around the site so all can benefit from shared knowledge.

Please check the site out and let me know what you think!

Rethinking Home Servers

Since my first home-built server (a Pentium 75MHz behemoth) I’ve used Red Hat based distributions for my home server. That lasted until around 2002–2003, when I moved into a 4 bedroom house with 3 of my Air Force buddies and one of them wanted to learn Linux.

I knew from experience in the mid-nineties that Slackware was probably the most Unix-like distribution out there…I had felt quite at home with it after learning the *nix ropes on Solaris 2.0. So we configured a dual-processor tower server he was lucky enough to acquire with Slackware 8.1 as our home firewall-and-all-around-great-Linux box. He took his beginning steps there and flourished, since our Air Force job already had us jumping around in a VAX/VMS mainframe. We had many late-night hacking sessions trying to get things to work or compile on it. We also had a multi-GB hard disk (unheard of at the time!) shared over Samba.

After I moved out, I continued to keep the Slackware box up to date. I moved onward to Slackware 9. Samba operated like a champ, and Slackware made a great router and DHCP server. Then I discovered ClarkConnect and loved the web interface. I could do things in half the time! I could do them over the web from work without SSH tunneling! All of this appealed to me at the time.

I continued to run ClarkConnect from that point on, all the way up to when it changed to ClearOS this past year. Indeed, ClearOS is now my central server.

The only problem is that I’ve suffered two of the most catastrophic losses of files in my Samba shares while running ClarkConnect/ClearOS…and I didn’t connect the dots between these separate incidents until just recently.

The first loss came when an entire Samba share was completely eradicated…13GB of music, just gone. The second loss happened just the other day, when tons of scanned pictures simply VANISHED into thin air. Each time this happened, I was using ClarkConnect/ClearOS. Each time, a few users were reporting instability in the forums of those distributions. I’m not sure how it could have happened, and the second time I was caught completely off guard…my backups weren’t configured yet since it was a new server. The first time it happened…I didn’t know the value of a good backup routine. So each time, no backups 🙁 Lesson learned the hard way, but learned nonetheless.

I recall running Slackware on my server and NEVER having the problems I’ve had with ClarkConnect/ClearOS. This got me rethinking my home server design. Servers should be the epitome of stability. One should be able to migrate from one version of the operating system to the next with few hiccups. Considering all of that, it’s very apparent that I should be running a Slackware core on my main Samba server.

I will be making that transition in the next week or two, moving to a Slackware-core-based server. I’m not sure what to use for backups across the network (I usually mirror the drive to an NTFS drive in my Windows-based multimedia server), nor for local backups to other hard drives…one idea I’ve been toying with is sketched below. If you have any suggestions, I’d really like to hear them. Also, I’d like to know what readers use for a server. Please vote for your favorite below, drop me a comment letting me know the specifics, and thanks for your help!
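For the network side, the obvious candidate seems to be plain rsync over SSH from the new Slackware box to another machine on the LAN…something like the following, where the share path and host name are placeholders I made up for illustration:

# /srv/samba and backupbox are made-up names…adjust to taste
rsync -avz /srv/samba/ user@backupbox:/backups/samba/

Dropped into a nightly cron job, that would at least mirror the shares to a second machine, and the same command works for local backups if the destination is just a path on another mounted drive. I deliberately left out --delete so that anything that vanishes from the share doesn’t also vanish from the backup. Still, I’d like to hear what everyone else does.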

[poll id="3"]


Backup Directories and Subdirectories Preserving File Structure

I needed a quick way to back up my small music collection on my laptop while preserving the complete file structure and permissions. There are a few ways to do this of course…for example, you can just copy the files using whatever file manager you happen to be using in your Linux distribution. In some cases though, you might want your backup to take up less space than the full monty. That’s especially true if you’re backing up to thumb drives!

You can use the tar command to make this a snap.

Tar combines multiple files into an archive, preserving permissions and file structure, and then you can compress the archive to save space.

tar -c --recursion -p --file=backup.tar directory

The -c flag creates an archive for us. --recursion descends into all subdirectories (this is actually tar’s default behavior, but it doesn’t hurt to be explicit). The -p flag preserves permissions…strictly speaking, permissions are always stored when the archive is created and -p matters most when extracting, but it’s handy if you have certain folders or files that need to stay owned by particular users or groups. The --file flag names the output file. You can also archive multiple directories at once, like the following:

tar -c --recursion -p --file=backup.tar directory1 directory2 directory3

After you have the file output as backup.tar it’s time to compress it.  The most standard way to do this is to use the gzip command:

gzip backup.tar

This command outputs backup.tar.gz in the current directory, which will take up less space than a standard 1-to-1 copy. There are many other flags and options that you can use with the tar command. For an in-depth look at those flags and options, check the tar man page by typing ‘man tar’ in a terminal or view it online here.
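To get the files back later (or just to sanity-check the archive), something like the following should work…note that -p really earns its keep at extraction time, when it tells tar to restore the stored permissions. The /path/to/restore directory is just a placeholder:

tar -tzf backup.tar.gz
tar -xzpf backup.tar.gz -C /path/to/restore

The first command lists the contents without extracting anything; the second unpacks everything under the directory given with -C.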

UPDATE:

Commenter ‘jack’ has offered a few extra flags to combine the archiving and zipping into one command:

tar -c -z --recursion -p --file=backup.tar.gz directory1 directory2 directory3

The -z flag tells tar to gzip the archive as it creates it (hence the .tar.gz file name above). Substituting -j for -z will compress the archive with bzip2 instead. Thanks for the tips jack!
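For example, the bzip2 version of jack’s command would look like this, with a .tar.bz2 file name to match:

tar -c -j --recursion -p --file=backup.tar.bz2 directory1 directory2 directory3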

Unity Linux Automates Build Process

The guys over at Unity Linux have developed a ‘build server’ that automates package building in both 64-bit and 32-bit flavors. All of the building is done in a chroot, and then the package is automatically moved into the ‘Testing’ repository.
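I don’t have their actual build scripts in front of me, but the general pattern is easy to sketch…build the RPM inside a clean chroot so nothing from the host machine leaks into the package, then push the result into the testing repo. Roughly something like this, using mock as a familiar stand-in (the config name, paths, and repo location are all assumptions on my part, not Unity’s real setup):

# illustration only…not Unity Linux’s actual tooling
mock -r unity-i586 --rebuild foo-1.0-1.src.rpm
cp /var/lib/mock/unity-i586/result/*.rpm /srv/repos/testing/
createrepo /srv/repos/testing/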

Very interesting stuff…much like what rMake does for Conary and Foresight Linux…but applied to RPMs instead of Conary changesets. Just the same, it’s impressive that such a small team of developers is showing this kind of prowess and making strides toward building a robust developer community.
