Do Package Managers Spoil Us?

I thought of this interesting question the other day while messing around with Slackware 9.0, one of the last versions of Slackware to come on a single disk. The goal was to take a Slackware 9.0 install up to the most recent stable release, and it was almost accomplished. Glibc was the largest hassle…and I made it to Slackware 11.0 before something caused the system to stop booting entirely. All things considered, I spent 3 days trying to get Slackware 9 to current.

Slackware, for those of you who don’t know, has no dependency-resolving package manager. Previously, a good attempt was made with swaret, which was my first jump into package managers with dependency resolution altogether when it came out…but swaret is no longer maintained and doesn’t really work well anymore.

Since Slackware has no real dependency-resolving package manager…it’s one of the last ‘true’ Unix-like Linux distributions out there. Back in the early-to-mid nineties, things were exactly like this. If you wanted to update your Linux system…you stepped through it manually and tried to get things to work. What was great about Slackware was making your own Slack packages from source…no dependency resolution, but in the process of making the package you’d eventually have all the dependencies installed. In this entire process, you became VERY familiar with your system…how it booted, what runlevel things occurred at, how cron jobs worked, etc. You were baptized by fire, so to speak…it was sink or swim.
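
If you never went through that ritual, the flavor of it is easy to show. Below is a rough sketch of rolling a source tree into a Slackware package…the package name, version, and staging directory are all illustrative, not from any real build:

./configure --prefix=/usr
make
make install DESTDIR=/tmp/pkg-foo         # stage the install into a clean, empty tree
cd /tmp/pkg-foo
makepkg -l y -c n ../foo-1.0-i486-1.tgz   # Slackware's makepkg wraps the tree into a .tgz
installpkg /tmp/foo-1.0-i486-1.tgz        # install it (as root)

If configure bails out because a library is missing, you go find and build that library first…and that hands-on dependency chasing is exactly what I mean.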

As I said, this got me thinking…do we rely on dependency-resolving package managers TOO much? They’re cliché now of course…run of the mill. Back in the 1990s, though, rpm was the only true package management system around…and rpm was never designed for internet consumption. The guys who wrote rpm had CD and floppy upgrades in mind. Fast forward to now and we have zypper, pacman, urpmi, apt, and conary…all built with online repositories in mind. Do these managers do the heavy lifting for new users? Do they spoil them?

Do systems break less, and get fixed more easily, because of package managers? Does it mean that the new user of today won’t be as experienced as the old user of yesterday?

I think it might.

Users in the past had to chip away and reassemble with less documentation and no package manager. This meant that the user of yesterday ripped apart systems and packages to discover how they worked and which cogs fit where.

The user of today follows step-by-step instructions, and the software is given a sane set of defaults by most package developers when said package is installed.

Does this make for lazy users?

I don’t think users are lazy per se…but, as previously stated, spoiled. And it’s no fault of their own…it’s the direction the software has taken us. Now the questions we need to answer are:

  1. Is this the correct direction to be heading in?
  2. Are there better approaches to package management that don’t follow the current model (other than Conary)?
  3. Can we come up with a system that doesn’t spoil new users?

I think I’m of both worlds…I started off with no package manager but managed to ride the wave of Red Hat 7.2 and above, followed by Mand{rake,riva} and PCLinuxOS. I’m both spoiled and unspoiled. I know what it takes to manage a system without a conventional package manager, but I also know how much time using one can save me. I sometimes find myself wanting less though…less and more. Less time and more hands-on gutting of the system. I think I’m in the minority though.

How about you, as a reader of this article? Do you think new users are spoiled by conventional package management systems? Do you see solutions or have ideas we can discuss? Is this really just a process we can improve or is there any programming to be done? Please sound off in the comments section!

Ubuntu Names Their Desktop After Us?

I was quite surprised this morning, whilst reading my RSS feeds, to discover that Ubuntu has named their most recent ‘lite desktop’ Unity.  Surprised because we have our own project, Unity Linux.  Strange that both our ‘lightweight distribution and desktop’ and Ubuntu’s ‘lite desktop’ should share a name.

While I’m not really sure why no one threw up a stop sign in the Canonical brainstorming session that produced ‘Ubuntu Unity’, one can only have a laugh about this and hope we don’t get our pants sued off, even though we named our distro first.

If things do get hairy, I’m sure we can change our name to ‘Unity Ubuntu’ or something similar to properly confuse everyone.

So, on behalf of all the Unity Linux developers, I’d like to thank the Academy and give a special shout out to Ubuntu for making our name known!  Thanks Mark!  Oh and good luck with that Unity thing! 😛

* devnet removes tongue from cheek

Rethinking Home Servers

Since my first home-built server (a Pentium 75MHz behemoth) I’ve used Red Hat based distributions as my home server OS.  That lasted until around 2002-3, when I moved into a 4 bedroom house with 3 of my Air Force buddies and one of them wanted to learn Linux.

I knew from experience in the mid-nineties that Slackware was probably the most Unix-like distribution out there…I felt quite at home with it after learning the *nix ropes on Solaris 2.0.  So we configured a dual-processor Slackware 8.1 tower server he was lucky enough to acquire as our home firewall-all-around-great-linux box.  He took his beginning steps there and flourished, since our Air Force job already had us jumping around in a VAX/VMS mainframe.  We had many late-night hacking sessions attempting to get things to work or compile there.  We also had a multi-GB hard disk (unheard of at the time!) shared over Samba.

After I moved out, I continued to keep the Slackware box up to date.  I moved onward to Slackware 9.  Samba operated like a champ, and Slackware made a great router and DHCP server.  Then I discovered ClarkConnect and loved the web interface.  I could do things in half the time!  I could do them over the web from work without SSH tunneling!  All of this appealed to me at the time.

I continued to run ClarkConnect from that point on, all the way up to when it changed to ClearOS this past year.  Indeed, I have ClearOS now as my central server.

The only problem is that I’ve suffered 2 of the most catastrophic losses of files in my Samba shares while running ClarkConnect/ClearOS…and I didn’t connect the dots between these separate incidents until just recently.

The first loss came when an entire Samba share was completely eradicated…13GB of music was just gone.  The second loss happened just the other day, when tons of scanned pictures just VANISHED into thin air.  Each time this happened, I was using ClarkConnect/ClearOS.  Each time it happened, a few users were reporting instability in the forums of those distributions.  I am not sure how it could have happened, and I was caught completely off guard the second time…my backups were not yet configured since it was a new server.  The first time it happened…I didn’t know the value of a good backup routine.  So each time, no backups 🙁  Lesson learned the hard way, but learned nonetheless.

I recall running Slackware on my server and NEVER having the problems I’ve had with ClarkConnect/ClearOS.  This got me rethinking my home server design.  Servers should be the epitome of stability.  One should be able to migrate from one version of the operating system to the next with few hiccups.  Considering each of these points, it’s very apparent that I should be running a Slackware core on my main Samba server.

I will be making that transition in the next week or two, moving to a Slackware core based server.  I’m not sure what to use for backups across the network (I usually mirror the drive to an NTFS drive in my Windows based multimedia server), nor for local backups to other hard drives.  If you have any suggestions, I’d really like to hear them.  Also, I’d like to know what readers use for a server.  Please vote for your favorite below, drop me a comment letting me know specifics, and thanks for your help!
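
For reference, the sort of mirroring I’ve been doing looks something like the following…a minimal rsync-over-SSH sketch with a made-up host name and paths, not my actual setup:

# nightly mirror of the samba share to another box; -a preserves permissions
# and ownership, --delete makes it an exact mirror (deletions propagate too!)
rsync -a --delete /srv/samba/ backupbox:/backups/samba/

Note that --delete is a double-edged sword…if files silently vanish on the server like mine did, the next sync quietly erases them from the mirror as well, which is a good argument for keeping dated snapshots in addition to a plain mirror.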

[poll id=”3″]


Backup Directories and Subdirectories Preserving File Structure

I needed a quick way to back up the small music collection on my laptop and preserve the complete file structure and permissions.  There are a few ways to do this of course…for example, you can just copy the files using whatever file manager you happen to be using in your Linux distribution.  In some cases though, you might want your backup to take up less space than the full monty.  That’s especially true if you are backing up to thumb drives!

You can use the tar command to make this a snap.

Tar combines multiple files into an archive, and you can use it to preserve permissions and file structure; you can then compress the archive to save space.

tar -c --recursion -p --file=backup.tar directory

The -c flag creates an archive for us.  --recursion descends through all subdirectories (this is actually tar’s default behavior, so the flag just makes it explicit).  The -p flag preserves permissions on all the files, which is handy if you have certain folders or files that need to stay tied to individual users or groups.  The --file flag names the output file.  You can also archive multiple directories at once, like the following:

tar -c --recursion -p --file=backup.tar directory1 directory2 directory3

After you have the output file backup.tar, it’s time to compress it.  The most standard way to do this is with the gzip command:

gzip backup.tar

This command will output backup.tar.gz to the current directory, which takes up less space than a standard 1-to-1 copy.  There are many other flags and options that you can use with the tar command.  For an in-depth look at those flags and options, check the tar man page by typing ‘man tar’ in a terminal or view it online here.
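
Before trusting that archive with your music, it’s worth a quick sanity check…list the contents without extracting anything:

tar -tzvf backup.tar.gz   # -t lists contents, -v shows the stored permissions and owners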

UPDATE:

Commenter ‘jack’ has offered a few extra flags that combine the archiving and compression into one command:

tar -c -z --recursion -p --file=backup.tar.gz directory1 directory2 directory3

The -z flag compresses the archive with gzip as tar creates it (hence the .tar.gz name above).  Substituting -j for -z will compress with bzip2 instead…in which case a .tar.bz2 extension is the convention.  Thanks for the tips jack!
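
For completeness, restoring is the same idea in reverse. A minimal sketch, with the destination directory being whatever you choose:

# -x extracts, -z decompresses the gzip layer, -p restores the stored permissions
# -C changes into the destination directory before extracting
tar -x -z -p --file=backup.tar.gz -C /path/to/restore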

Unity Linux Automates Build Process

The guys over at Unity Linux have created a ‘build server’ that automates package building in both 64-bit and 32-bit flavors.  All of the building is done in a chroot, and then the package is automatically moved into the ‘Testing’ repository.
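
I haven’t seen the build server’s internals, but conceptually each build would look something like the sketch below…the chroot path, package name, and repository location are illustrative assumptions on my part, not Unity’s actual scripts:

# rebuild a source RPM inside a clean chroot, then publish the result to 'Testing'
CHROOT=/var/chroot/unity-x86_64                       # hypothetical chroot location
cp mypackage-1.0-1.src.rpm "$CHROOT/tmp/"
chroot "$CHROOT" rpmbuild --rebuild /tmp/mypackage-1.0-1.src.rpm
mv "$CHROOT"/usr/src/rpm/RPMS/*/*.rpm /srv/repo/testing/   # assumed repo layout

The win is that every package is built against a known-clean environment instead of whatever happens to be installed on a developer’s desktop.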

Very interesting stuff…much like what rMake does for Conary and Foresight Linux…but applied to RPMs instead of Conary changesets.  Just the same, it’s interesting that such a small team of developers is showing this kind of prowess and making strides toward building a robust developer community.

What Is Unity Linux?

There’s been a lot of confusion about exactly what Unity Linux is.

I thought I’d talk a bit about that today.  I’d like to cover what Unity uses for its ‘guts’.  I’d also like to dispel some myths surrounding Unity.  Lastly, I’d like to talk briefly about how Unity is doing all it can to further open source and Linux by contributing to the projects it is involved with.  The reason I know so much about this topic is that I’m the webmaster and host for the Unity Linux project as well as one of the documentation team members.  So, let’s take a look first at what Unity Linux is…

What Is Unity Linux?

Unity Linux is not a conventional distribution of Linux.  It’s a core on which developers can build their own distribution of Linux.  We set out from the start to provide an excellent minimal graphical environment that gives developers “just enough graphics” to create something.  The smaller, the better.  We elected to go with Openbox because of its size and stability.  We selected Mandriva as our base because of the number of packages they provide and the quality of those packages.  We pushed lxpanel as a minimal panel because it provides just enough functionality for distro developers to see what they’ve installed after they’ve installed it…it’s also familiar to most people, whereas Openbox right-click menus may not be.  All in all, our target for the core release is developers.  We’re not designing this basic desktop to be used by end users.  We’re not trying to win any awards with our awesome minimalistic desktop skills.  Why would we do this?  To answer that, you have to take a look at our developers.

One of our developers, Kdulcimer, is the lead developer of TinyMe Linux.  A few years ago, he created a fantastic minimalistic “remaster” of PCLinuxOS.  It was wildly popular and continues to be so today.  Kdulcimer was one of the first developers to go with Unity Linux for his core.  Our other developers saw what Kdulcimer did with his distro and how small he made the core.  They learned from how he did things and applied it to Unity.  Thus, Unity has a small base…as evidenced by both beta releases.  Upcoming release candidates will be very much the same.

Lead developer gettinther does a good job explaining what Unity is:

One of the big issues facing small distros is that each group is limited in its ability to maintain a healthy, up-to-date core.  Most people prefer to focus on the DE / user interface, working on the look & feel rather than the internals.  Those distros end up with a stale core, which in turn causes numerous “hard-to-find” issues.

Most of the distros with us existed before Unity, like TinyMe, Sam (an abandoned project now), Granular, and Synergy (formerly eeepclos).  The idea is to create distros only insofar as “presetting desktops by people who love those desktops”.  Rather than having a “one shoe fits all”, we decided to provide a core module and look after maintaining it.  Each branch distribution joins the team and has full developer access.  For Unity to become a full-fledged distro would mean favoring one DE over the others, so we limit the scope to the core product (we maintain the various DEs too, but leave the DE-specific changes to the branches).  It makes it a little more difficult to install stuff, but it also means that all DEs are looked after.

As far as the user is concerned, it means that each branch has a say in the development of the core, which ensures that the distro is well supported.  It pools the efforts of distros that would otherwise be on their own, which means a larger development team and, as such, better packages.

So Unity Linux is a base on which to build.  A foundation for “remasters” to build from.  But what is a remaster?  What technologies does Unity use? Let’s take a look at the internals of Unity next.

Unity Linux Internals aka Guts

We initially set out not only to have a small graphical base but also to wrap around the LiveCD project.  For those of you who don’t know what LiveCD is…you can visit the old berlios.de project page:

The project features automatic hardware detection and setup, and utilises compression technology to build a LiveCD from a partition much larger than would typically fit on a CD. (Up to 2GB for a normal 650MB CD.) When booting from this LiveCD, the data is transparently decompressed as needed. LiveCD now supports udev.

Currently,  Mandrakelinux and PCLinuxOS are supported as a host for creation of the LiveCD, i.e. we are only able to create LiveCD’s from a MDK or PCLinuxOS install. The LiveCD scripts are still beta, and bugs are being eliminated. Your help and feedback are appreciated!

These scripts allow a person to make a LiveCD copy of their desktop, either for backup purposes or as a standalone Linux distribution.  When you create that new ISO or backup ISO, you have ‘remastered’ the master copy.  So the livecd scripts are really just a set of tools that let a user create something new or back up their existing desktop as a live CD.
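
The entry point for all of this is the mklivecd script itself.  A bare-bones run looks roughly like the following…the ISO name is yours to pick, and real invocations usually add options for kernel and compression choices that I won’t try to reproduce from memory here:

# snapshot the currently installed system into a bootable, compressed live CD image
mklivecd mydesktop-backup.iso

Burn the resulting ISO (or boot it in a VM) and you have a live, bootable copy of the system you ran it on.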

The project at berlios was taken over by Didouph as lead developer just before Unity was formed.  There hadn’t been much work on it since Tom Kelly left the project quite a long time ago, but Didouph was optimistic.  When he joined Team Unity, he placed LiveCD development on the back burner and worked hard with the graphics team on logo development.

Later, it became apparent that in order to keep creating a great distribution that could remaster itself, we needed to make improvements to the LiveCD code.  First off, it needed 64-bit support.  Second, it needed better hardware detection than it had.  Third, it needed internationalization work.  Fourth, it needed to support newer kernel versions than it did.  All of those things have been accomplished, with internationalization still being worked on.

When we initially took over the ‘modernization’ of LiveCD, we didn’t all flock to berlios to do so.  Work instead began when we gave a small sliver of our own SVN over to LiveCD.  It made sense logistically for our developers to be able to commit code in the same place instead of at a third party (berlios); the reason being, we needed many commits fast and didn’t want to wait…we were ready to move forward with it immediately.  So we snagged the GPL’d LiveCD code and put it in our SVN.

Since Didouph was the maintainer of LiveCD, we felt it only natural that Unity would lend a hand to him and his project by taking over development.  An entire team working on LiveCD would mean greater output and more advancement.  Thus, Unity’s maintainership of the LiveCD project was born.  Anyone is welcome to take the code and use it how they see fit.  We’re working on getting LiveCD its own proper SVN or Git repository at a public site away from Unity Linux…if you’d like (read-only) access to the LiveCD SVN, drop Unity Linux a line via their contact page.

Common Myths Surrounding Unity Linux

Heard any good ones lately?  If I don’t cover the ones you’ve heard here, please leave me a comment and I’ll address yours specifically.

Myth #1 – Unity Linux is just PCLinuxOS rebranded

Most of the developers of Unity Linux were contributors to PCLinuxOS during the time that Texstar had stepped away.  As contributors, they were not part of the developer team.  They had limited access to the core inner workings of PCLinuxOS.  How do I know?  I was a developer…the main web developer…for PCLinuxOS, and I monitored all mailing lists and all websites, and was even chief of MyPCLinuxOS.com.  Very few people on the PCLinuxOS development team are now part of Unity Linux…because the PCLinuxOS development team was kept small.

When Unity Linux was initially started, the contributors and developers involved grabbed a ‘snapshot’ of the PCLinuxOS repositories and began working on bringing packages up to date.  They quickly ran into trouble because PCLinuxOS used such an outdated toolchain that many new packages wouldn’t compile with it.

After some discussion, the developers abandoned the PCLinuxOS packages and instead worked with Mandriva packages.  This allowed Unity to move forward sans old toolchain and outdated core.  Most of this doesn’t matter to the end user…they just want a stable environment.  But the Unity Linux developers wanted to push forward with the latest kernels, the latest rpm version, and the latest Smart package manager versions.  Doing so required massive leaps forward, even from Mandriva.

As you can see, while Unity Linux originally started as a PCLinuxOS fork, the developers abandoned that fork and rebased on Mandriva.  They now stay in line with Mandriva development.  If you have Mandriva and Unity Linux questions, please stop into the Unity Linux chat channel on Freenode (#unitylinux) and ask proyvind your questions…he is the Mandriva Linux representative who works with Unity Linux 🙂

Myth #2 – Unity Linux Stole mklivecd aka livecd from PCLinuxOS

This is a pretty funny one, and I’ve seen quite a few references to ‘stealing’ GPL code.  First things first: you cannot ‘steal’ GPL code.  It just can’t be done.  Secondly, the LiveCD project was stagnant and had a SINGLE developer working on it.  That developer joined Unity Linux, and all 25+ developers there decided to help him make some progress on it.  In the meantime, they took the initiative to make improvements.  For example, they gave it 64-bit compatibility.  They gave it better hardware detection.  They gave it better international language support.  All of those things are made available for FREE to any distribution wanting to download a snapshot from SVN.

Now, if anyone has a claim to LiveCD as ‘theirs’, it would be Jaco Greefe, who was the principal on the project LONG before any distribution other than Mandrake aka Mandriva even worked with it.  Texstar grabbed Jaco’s mklivecd project and used it to create the original PCLinuxOS 2003 release.  That release was based on Mandrake 9.2 at the time, and a few other Mandrake developers began debugging the script through the creation of PCLinuxOS.  Mandrake was a trademarked name, so Texstar named his distro PCLinuxOS.

As you can see, if any one distribution has a claim to mklivecd, it would be Mandrake aka Mandriva, which is where the script’s creators came from.  It’s also where the script was first made usable.  However, the claim that Texstar made it into a nice package with PCLinuxOS…that is totally true.  What we’re doing now by developing it is making sure it continues to progress into the future, with 64-bit support and beyond…even if udev is dropped from Linux, no matter what, we’ll make sure it works…and hopefully it will work for more than just Mandriva-derived distributions.

There have been many attempts by Unity Linux developers to get other distributions that use mklivecd involved in its development.  That invitation is always open to any and all distributions that use it.

Myth #3 – Unity Linux wants to steal away users from other distributions of Linux

The main reason this isn’t true is that Unity Linux targets DEVELOPERS.  We don’t target end users.  If end users like Unity, GREAT!  If not, we don’t worry about it.  Unity Linux has derivative distributions called “branches” that work to target the end user.  Unity Linux itself is targeted squarely at distribution developers and advanced users who want to be able to use the mklivecd scripts.

Myth #4 – Unity Linux DOESN’T use PCLinuxOS at all in development

This one is half true.  We don’t ‘use’ PCLinuxOS to create things…we use it as a mirror sync.  Paul Grinberg, a developer on the team, has a PCLinuxOS box that he doesn’t otherwise use.  During the initial setup of Unity Linux, we based things on PCLinuxOS before purging and switching to Mandriva.  Since the developer mirror server (referred to on the mailing lists as the dev server) still ran PCLinuxOS and Unity Linux didn’t have a release yet, we saw no reason to change it.

As Unity Linux still has no stable release as of March 29, 2010, that developer mirror server still runs PCLinuxOS and pushes uploaded packages developed on a Unity Linux server to various mirrors for propagation.

In other words, the PCLinuxOS server Unity Linux uses is just a web server.  It will be replaced with Unity Linux when Unity 2010 is released.  Until then, taking the time to wipe it and repopulate it would throw a kink in the flow of package development, so the developers have slated this ‘to-do’ item for after the stable release.

Unity Linux and Open Source

Unity Linux does a great job of contributing to upstream projects.  As an example, David Smid, a Unity Linux developer, is also a Smart Package Manager (SPM) developer.  This gives Unity the ability to test the latest and greatest SPM and get things quickly patched/fixed/redesigned.  Other projects, such as mklivecd, are developed openly by Unity Linux, and contributors are welcome.  Unity Linux contributes bug finds to Mandriva through use of the Mandriva Cooker repository.  Unity Linux developer Paul Grinberg contributed Google Maps integration for MirrorMon, which you can view on our Mirror Status Page, back upstream to the creator of MirrorMon.  Unity Linux also contributes upstream to rpm5.org.

Unity Linux also has a working partnership with Yoper Linux.  Why?  Because Yoper Linux uses many of the same core technologies (Smart, rpm5) that Unity Linux uses and because the lead developer, Tobias Gerschner, is an all around great guy :).

You can see everything that Unity Linux works on by visiting our repository:  http://dev.unity-linux.org/projects/unitylinux/repository

Development is done in the open, not behind closed doors:  http://groups.google.com/group/ul-developers

Unity Linux strives with an almost rabid will to keep everything in the open for users and branch developers so that they are not left wondering what’s going on with their distribution.  The developers continue to try to engage other distributions to work with them and will continue to do so in the future.

Closing Thoughts

Unity Linux doesn’t target the same users as your average distribution of Linux…they’re after the more savvy users out there.  The ones who want to create something and make something from the core image.  Users who like to tinker and mess with and break things.

Unity got off to a rough start with much FUD-slinging and accusation.  Hopefully, the actions Unity has taken to keep its project open will show the intent of the developers…to make a great core that others can branch from, all the while remaining open and free for everyone.
