How-To Choose the Right Distribution of Linux


Which distribution is the RIGHT distribution?  Is there such a thing?  When you start your journey with Linux, you might hear something like this:

– Ubuntu is the best distribution for the desktop
– Linux Mint is the best distribution for a home user and the desktop
– Debian is the best way to go because of its stability and solid base
– Mandriva isn’t as good as Mageia
– Mageia isn’t as good as Mandriva
– Red Hat is for servers only
– Distribution X is better than distribution Y!

Here’s the thing…statements like these are all BLATANTLY FALSE.  Why?  Because they’re opinions…everyone has one, and they are all just that…opinions.

When you start your journey with Linux, don’t let someone else tell you what you should or shouldn’t use.  Go out and find what fits you like a glove and use that.  It doesn’t matter how large a community the distribution has (unless that is what you’re specifically looking for), how often it updates, or how many hits it has on the Distrowatch tracker.  Use what is best FOR YOU.  Only you can decide which distribution scratches whatever itch you have.

If you choose the right one, chances are you’ll be a part of that distribution for a long time.  But don’t worry, this isn’t Indiana Jones and the Last Crusade; if you pick the wrong distribution, you won’t turn into a dusty exploding skeleton.  In this situation, the RIGHT distribution of Linux is ANY distribution of Linux.  As long as you’re making a conscious effort to choose free software and use Linux, you win.

I’ve been in, around, and even leading Linux communities since the late 1990s, and if there is one thing I’ve found, it is this:  every single distribution has a place in this world.  Every single distribution has its own niche users.  Every single distribution of Linux is important.  I’m sure many of you have heard (or have said) that Linux just needs to simplify and keep only a handful of distributions so we can concentrate on that handful and make it fantastic.  Unfortunately, that wouldn’t work very well and would stifle creativity.  To prove my point…what if we didn’t have small distributions at all?  That wouldn’t have a large effect on Linux as a whole, right?  Let’s take a look at that hypothesis…

If Small Distributions Never Were…

As an example, take Symphony OS.  It used FVWM and Mezzo for the desktop experience, and it REVOLUTIONIZED the way we see and interact with files.  If you use Gnome 3, Ubuntu Unity, or KDE 4.x, you’re using concepts that Symphony OS was the first to put onto a Linux desktop.  Symphony never had a huge user base.  It never shot up the charts at Distrowatch.  It did, however, push the envelope of what a desktop distribution can and can’t do.  It did push the boundaries of design.  It did push simplicity and usability to a new level.  It also did web apps before web apps were cool.  Somehow it never caught on…but it influenced people and challenged them to push the envelope of what was possible and impossible with desktop Linux.

Small, Niche Distributions Perform a Function

Oftentimes I find Linux users looking for a distribution that fills a specific function.  “I just want a file sharing distribution” they’ll say, or perhaps “I just want a nice and simple desktop”, or maybe even “I just want a tight firewall”.  The beauty of open source software and Linux is that you’ll find small, niche distributions that fit the bill for all of those needs, and when you use these distributions, you’ll continue to learn about Linux…and perhaps you’ll push the envelope of what is possible and not possible, just like Symphony OS did.

Regardless of whether you choose a small or large distribution, you win.  The fact is you CHOSE, and weren’t force-fed something by system installers and companies who think they know what is best for you.

We CAN All Get Along

Many times when we pick the flavor of Linux we like, we identify with its goals…the direction it’s heading…maybe even the direction the community champions.  There isn’t anything wrong with this.  The next time you encounter passionate supporters of Linux, keep in mind that neither you nor they are the enemy.  If you both use Linux and open source, you both win.  Small, large, and niche distributions of Linux operate harmoniously together and build off one another…it’s one of the unseen benefits of Linux and open source.  Beauty and power in simplicity through collaboration.  Congratulate yourself every single day for choosing Linux!

Linux File Permissions, Groups, and Users

Why Are Permissions Important?

Permissions are important for keeping your data safe and secure.  Utilizing permission settings in Linux benefits you and those you want to give access to your files, and you don’t need to open up everything just to share one file or directory (something Windows sharing often does).  You can group individual users together and change permissions on folders (called directories in Linux) and files, and those users don’t have to be in the same OU or workgroup, or be part of a domain, to access them.  You can change permissions on one file and share it out to a single group or multiple groups.  Fine-grained security over your files puts you in the driver’s seat, in control of your own data.

Some will argue that it may be too much responsibility…that placing this onto the user is foolish and that the aforementioned other operating system doesn’t do this.  You’d be right…XP doesn’t do this.  However, Microsoft saw what Linux and Unix do with the principle of least privilege and has copied aspects of it.  While the NTFS filesystem employs user access lists with workgroups and domains, it cannot mirror the fine-grained, small-scale security Linux offers for individual files and folders.  For the home user, Linux empowers control and security.

I’m going to go over how users and directory/file permissions work, so let’s set up an example that will allow us to explore file permissions.  If you have any questions, just ask in the comments section at the end of the article.

File Permissions Explained

The picture to your left is a snapshot of my $HOME directory.  I’ve included a “legend” to color code and label the various columns.  Let’s go through the labels and names of things first, and then work on understanding how we can manipulate them in the next section.

As noted in the picture, the first column (orange) shows whether or not each item listed is a directory.  If an item is NOT a directory, a dash (-) appears in place of the d at the beginning of the listing on the far left.

In the second, third, and fourth columns (green, blue, and red) we find permissions.  The gray box in the bottom-right corner explains what each letter represents in these columns.  They tell us whether the user, group, or other (explained in detail later in this article) has read, write, and execute privileges for the file or folder/directory.

In the 5th column (white), the number of hard links is displayed.  A hard link is a directory reference, or pointer, to a file on a storage volume…in other words, a name for a specific location of physical data.  There is more to learn about hard links vs. symbolic (soft) links, but that’s a topic for another article.
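If you’d like to see that link count change for yourself, here’s a quick example (the file names are made up for illustration):

[devnet@lostlap ~]$ touch original.txt
[devnet@lostlap ~]$ ln original.txt mirror.txt
[devnet@lostlap ~]$ ls -l original.txt mirror.txt
-rw-rw-r-- 2 devnet devnet 0 2010-05-28 10:21 mirror.txt
-rw-rw-r-- 2 devnet devnet 0 2010-05-28 10:21 original.txt

The 2 in the hard link column shows that two directory entries now point at the same physical data; deleting one of them leaves the other fully intact.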

In column 6 (light blue) we find the user/owner of the file/directory.  In column 7 (gray blue), the group that has access to the file/folder is displayed.  In column 8 (pink), the size of the file or folder is shown in bytes.  In column 9 (fluorescent green), the date the file or folder was last altered or touched is shown.  In column 10 (grey), the file or directory name is displayed.

We’re going to pay specific attention to the first four columns in the next section and then follow that up by working with the sixth and seventh by going over user/owner and group.  Let’s move on to go over all of those rwx listings and how we can make them work for us.

Read, Write, Execute – User, Group, Other

First, let’s go over what the different permissions mean.  Read permission means you can view the contents of a file or folder.  Write permission means you can write to a file, write to a directory (add new files or new subdirectories), or modify a file or directory.  Execute permission on a file means you can run it as a program or script; on a directory, it means you can change into (enter) that directory.

The User section, shown in green in the picture above, shows whether or not the user can perform the actions listed above.  If the letter is present, the user has the ability to perform that action.  The same is true for the Group, shown in blue above…a member of the group that has access to the file or directory can look in this column to know what they can or can’t do (read, write, or execute).  Lastly, there are all others (noted in the red column above).  Do all others have read, write, and execute permissions on the file or folder?  This is important for giving anonymous users access to files in a file server or web server environment.

You can see how fine-grained a setup this allows…for example, you might give all other users read-only access while allowing a group of 5 users full control of the file or directory.  Or you may want to switch that around.  It’s entirely up to you how you set up permissions.
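To make that concrete, here’s a small sketch of what such a setup might look like (the file, user, and group names here are hypothetical, and we’ll walk through each of these commands step by step in the sections below):

[root@lostlap ~]$ chown alice report.txt
[root@lostlap ~]$ chgrp editors report.txt
[root@lostlap ~]$ chmod u=rw,g=rw,o=r report.txt
[root@lostlap ~]$ ls -l report.txt
-rw-rw-r-- 1 alice editors 0 2010-05-28 10:21 report.txt

Here alice and the 5 members of the editors group can read and write report.txt, while everyone else can only read it.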

More about Groups

Let’s go through setting up a group and adding a few users to it and then assigning that group permissions to access a directory and file.

Create a file inside your home directory by opening up a shell or terminal and typing:

touch ~/example.txt

You’ve now created a file called example.txt inside your home directory.  If you are already there, you can list the contents with the ‘ls’ command.  Do that now.  If you’re not already there, type ‘cd ~/’ and you will be taken to your home directory, where you can list the files with ‘ls’.  It should look similar to the following:

[devnet@lostlap ~]$ ls -l
total 40
drwxr-xr-x  2 devnet devnet 4096 2010-05-24 17:04 Desktop
drwxr-xr-x  6 devnet devnet 4096 2010-05-24 13:10 Documents
drwxr-xr-x  9 devnet devnet 4096 2010-05-27 15:25 Download
-rw-rw-r--  1 devnet devnet    0 2010-05-28 10:21 example.txt
drwxr-xr-x 13 devnet devnet 4096 2010-05-26 16:48 Music
drwxr-xr-x  3 devnet devnet 4096 2010-05-24 13:09 Pictures
drwxr-xr-x  3 devnet devnet 4096 2010-05-24 13:04 Videos

Next up, let’s create a new group and a couple of new users.  After creating these, we’ll assign the users to the new group.  After that, we’ll change the file’s group and owner and lock it down to the new group only.  If everything works as planned, the file should end up accessible to root and the two new users but NOT to your current user.  You’ll need to be root for all of these commands (or use sudo for them).  Since I have sudo and don’t want to continually type sudo, I used the command “sudo -s” and entered my password to stay logged in as root in a terminal for the duration of this how-to.  OK, let’s get started:

[root@lostlap ~]$ useradd -m -g users -G audio,lp,optical,storage,video,wheel,games,power -s /bin/bash testuser1
[root@lostlap ~]$ useradd -m -g users -G audio,lp,optical,storage,video,wheel,games,power -s /bin/bash testuser2

The above commands will create two users whose group membership is pretty close to that of your currently logged-in user.  If some of the groups you’re adding them to don’t exist on your system, you may get a warning…no worries, just continue.  If the above commands don’t work on your system (I used Arch Linux to do this), you can use your distribution’s GUI user management tools to add the users instead.  You won’t need to add them to any extra groups, since we just need basic users.  Next, let’s create our ‘control’ group.

[root@lostlap ~]$ groupadd testgroup

The above command creates the ‘testgroup’ group. Now let’s add the two users we created to this group.

[root@lostlap ~]$ gpasswd -a testuser1 testgroup
[root@lostlap ~]$ gpasswd -a testuser2 testgroup

The commands above add both of our test users to the test group we created. Now we need to lock the file down so that only users inside ‘testgroup’ can fully access it. Since your currently logged-in user is NOT a member of ‘testgroup’, you shouldn’t be able to modify the file once we lock access down to that group.

[root@lostlap ~]$ chgrp testgroup example.txt

The above command changes the group portion of the file’s permissions (discussed earlier) from a group your currently logged-in user is a member of to our new group ‘testgroup’. We still need to change the owner of the file so that your currently logged-in user no longer owns example.txt.  To do this, let’s assign example.txt a new owner: testuser2.

[root@lostlap ~]$ chown testuser2 example.txt

Now when you try to access the file example.txt, you are neither the owner nor a member of its group, so only the ‘other’ permissions apply to you (root, as always, can still access it). To test this, open up a new terminal (one where you are not the root user) and try to open example.txt with your favorite text editor.

[devnet@lostlap ~]$ nano example.txt

Both testuser1 and testuser2 will be able to modify example.txt, because testuser2 owns the file and testuser1 is in testgroup, which has access to it. Your currently logged-in user, however, still has READ rights to it, even though the file now belongs to someone else. Why? Let’s take a look at the permissions on example.txt

[devnet@lostlap ~]$ ls -l example.txt
-rw-r--r-- 1 testuser2 testgroup 0 2010-05-28 10:21 example.txt

Notice that the user, group, and other (the 1st, 2nd, and 3rd positions of r, w, x – see the handy diagram I made above) each have permissions assigned. The user can read and write the file. The group can read it. Others can also read it, which is why you could still open it above. So let’s remove a permission to lock this file down. Go back to your root terminal (or ‘sudo -s’ to root again) and do the following:

[root@lostlap ~]$ chmod o-r example.txt

Now go back to your user terminal and take a look at the file again:

[devnet@lostlap ~]$ ls -l example.txt
-rw-r----- 1 testuser2 testgroup 0 2010-05-28 10:21 example.txt

Once that has been accomplished, try to open the file with your favorite text editor as your currently logged-in user (devnet for me):

[devnet@lostlap ~]$ nano example.txt

Your user should now get a permission denied error from nano (or whatever text editor you used). This is how locking down files and directories works. It’s very granular: you can give read, write, and execute permissions to individual users, groups of users, and the general public. I’m sure most of you have seen permissions commands with numbers like 777 or 644, and you can use those as well (for example, chmod 666 filename), but remember that you can always use chmod ugo+rwx or ugo-rwx style syntax to change permissions too. I like using letters as opposed to numbers because they make more sense to me…perhaps you’ll feel the same.
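If you’re curious how those numbers map to the letters, each digit is just read (4) + write (2) + execute (1) added together, for user, group, and other in that order. For example, this command leaves example.txt in exactly the state we built above with the letter syntax:

[root@lostlap ~]$ chmod 640 example.txt
[root@lostlap ~]$ ls -l example.txt
-rw-r----- 1 testuser2 testgroup 0 2010-05-28 10:21 example.txt

The 6 breaks down to 4+2 (read and write) for the user, the 4 is read-only for the group, and the 0 is no access for everyone else.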

Hopefully you now have a general understanding of how groups, users, and permissions work, and can appreciate how the complexity of it all is elegant at the same time. If you have questions, please fire away in the comments section. Corrections? Please let me know! Thanks for reading!

Do Package Managers Spoil Us?

I thought of this interesting question the other day while messing around with Slackware 9.0, one of the last versions of Slackware to come on a single disk. The goal was to take a Slackware 9.0 install all the way to the most recent stable release, and it was almost accomplished. Glibc was the largest hassle…and I made it to Slackware 11.0 before something caused things to not boot at all. All things considered, I spent 3 days trying to get Slackware 9 to current.

Slackware, for those of you who don’t know, has no dependency-resolving package manager. Previously, a good attempt was made with swaret, which was my first jump into package managers with dependency resolution altogether when it came out…but swaret is no longer maintained and doesn’t really work well anymore.

Since Slackware has no real dependency-resolving package manager…it’s one of the last ‘true’ Unix-like Linux versions out there. Back in the early to mid nineties…things were exactly like this. If you wanted to update your Linux version…you stepped through it manually and tried to get things to work. What was great about Slackware was making your own Slack packages from source…no dependency resolution, but in the process of making the package you’d eventually have all the dependencies installed. In this entire process, you became VERY familiar with your system…how it booted, what happened at each runlevel, how cron jobs worked, etc. You were baptized by fire, so to speak…you were to sink or swim.
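For those who never lived through it, a rough sketch of that manual flow looked something like this (the package name and version are made up; makepkg and installpkg come from Slackware’s pkgtools):

[root@lostlap ~]$ tar xzf foo-1.0.tar.gz && cd foo-1.0
[root@lostlap ~]$ ./configure --prefix=/usr && make
[root@lostlap ~]$ make install DESTDIR=/tmp/foo-pkg
[root@lostlap ~]$ cd /tmp/foo-pkg && makepkg ../foo-1.0-i486-1.tgz
[root@lostlap ~]$ installpkg /tmp/foo-1.0-i486-1.tgz

And if configure bailed out complaining about a missing library, YOU were the dependency resolver: you went off, built that library the same way, and came back.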

As I said, this got me thinking…do we rely on dependency-resolving package managers TOO much? They’re cliché now, of course…run of the mill. Back in the 1990s, though, rpm was the only true package management system around…and rpm was never designed for internet consumption. The guys who wrote rpm had CD and floppy upgrades in mind. Fast forward to now, and we have zypper, pacman, urpmi, apt, and conary…all built with online repositories in mind. Do these managers take the heavy lifting away for new users? Do they spoil them?

Do systems break less with easier resolutions due to package managers? Does it mean that the new user of today won’t be as experienced as the old user of yesterday?

I think it might.

Users in the past had to chip away and reassemble with less documentation and no package manager. This meant that the user of yesterday ripped apart systems and packages to discover how they worked and which cogs fit where.

The user of today follows step by step instructions and the software is given a sane set of defaults by most package developers when said package is installed.

Does this make for lazy users?

I don’t think users are lazy per se…but, as previously stated, spoiled. And it’s no fault of their own…it’s the direction the software has taken us. Now the questions we need to answer are:

  1. Is this the correct direction to be heading?
  2. Are there better approaches to package management that don’t follow the model we currently have (other than Conary)?
  3. Can we come up with a system that doesn’t spoil new users?

I think I’m of both worlds…I started off with no package manager but managed to ride the wave of Red Hat 7.2 and above, followed by Mand{rake,riva} and PCLinuxOS. I’m both spoiled and unspoiled. I know what it takes to manage a system without a conventional package manager, but I also know how much time using one can save me. I sometimes find myself wanting less though…less and more. Less time and more hands-on gutting of the system. I think I’m in the minority though.

How about you, as a reader of this article? Do you think new users are spoiled by conventional package management systems? Do you see solutions or have ideas we can discuss? Is this really just a process we can improve or is there any programming to be done? Please sound off in the comments section!

What Is Unity Linux?

There’s been a lot of confusion about exactly what Unity Linux is.

I thought I’d talk a bit about that today.  I’d like to talk about what Unity uses for its ‘guts’.  I’d also like to dispel some myths surrounding Unity.  Lastly, I’d like to talk briefly about how Unity is doing all it can to further open source and Linux by contributing to the projects it is involved with.  The reason I know so much about this topic is that I’m the webmaster and host for the Unity Linux project, as well as one of the documentation team members.  So, let’s take a look first at what Unity Linux is…

What is Unity Linux

Unity Linux is not a conventional distribution of Linux.  It’s a core on which developers can build their own distributions of Linux.  We set out from the start to provide an excellent minimal graphical environment that gives developers “just enough graphics” for them to create something.  The smaller, the better.  We elected to go with Openbox because of its size and stability.  We selected Mandriva as our base because of the number of packages they provide and the quality of those packages.  We pushed lxpanel as a minimal panel because it provides just enough functionality for distro developers to see what they’ve installed after they’ve installed it…it is also familiar to most people, whereas Openbox right-click menus may not be.  All in all, our target for the core release is developers.  We’re not designing this basic desktop to be used by end users.  We’re not trying to win any awards with our awesome minimalistic desktop skills.  Why would we do this?  To answer that, you have to take a look at our developers.

One of our developers, Kdulcimer, is the lead developer of TinyMe Linux.  A few years ago, he created a fantastic minimalistic “remaster” of PCLinuxOS.  It was wildly popular and continues to be so today.  Kdulcimer was one of the first developers who elected to go with Unity Linux for his core.  Our other developers saw what Kdulcimer did with his distro and how small he made the core.  They learned from how he did things and applied it to Unity.  Thus, Unity has a small base…as evidenced by both beta releases.  Upcoming release candidates will be much the same.

Lead developer gettinther does a good job explaining what Unity is:

One of the big issues facing small distros is that each group is limited in its ability to maintain a healthy, up-to-date core.  Most people prefer to focus on the DE / user interface, working on the look & feel rather than the internals.  Those distros end up with a stale core, which in turn causes numerous “hard-to-find” issues.

Most of the distros with us existed before Unity, like TinyMe, Sam (an abandoned project now), Granular, and Synergy (formerly eeepclos).  The idea is to create distros only insofar as “presetting desktops by people who love those desktops”.  Rather than having one shoe fit all, we decided to provide a core module and look after maintaining it.  Each branch distribution joins the team and has full developer access.  For Unity to become a full-fledged distro would mean favoring one DE over the others, so we limit our scope to the core product (we maintain the various DEs too, but leave the DE-specific changes to the branches).  It makes it a little more difficult to install stuff, but it also means that all DEs are looked after.

As far as the user is concerned, it means each branch has a say in the development of the core, which ensures that the distro is well supported.  It pools the efforts of distros who would otherwise be on their own, which means a larger development team and, as such, better packages.

So Unity Linux is a base on which to build.  A foundation for “remasters” to build from.  But what is a remaster?  What technologies does Unity use? Let’s take a look at the internals of Unity next.

Unity Linux Internals aka Guts

We initially set out not only to have a small graphical base but also to wrap around the LiveCD project.  For those of you who don’t know what LiveCD is, you can visit the old berlios.de project page:

The project features automatic hardware detection and setup, and utilises compression technology to build a LiveCD from a partition much larger than would typically fit on a CD. (Up to 2GB for a normal 650MB CD.) When booting from this LiveCD, the data is transparently decompressed as needed. LiveCD now supports udev.

Currently,  Mandrakelinux and PCLinuxOS are supported as a host for creation of the LiveCD, i.e. we are only able to create LiveCD’s from a MDK or PCLinuxOS install. The LiveCD scripts are still beta, and bugs are being eliminated. Your help and feedback are appreciated!

The set of scripts allows a person to make a LiveCD copy of their desktop for backup purposes or as a standalone Linux distribution.  When you create that new ISO or backup ISO, you have ‘remastered’ the master copy.  So the livecd scripts are really just a set of tools that lets a user create something new, or back up their existing desktop, as a live CD.
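As a rough illustration only (options vary between versions, so treat this as a sketch rather than a recipe), remastering the running system boils down to running the script as root and handing it the name of the ISO you want it to produce:

[root@lostlap ~]$ mklivecd my-remaster.iso

The script then compresses the contents of your installed system and writes out a bootable ISO you can burn or share.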

The project at berlios was taken over by Didouph as lead developer just before Unity was formed.  There hadn’t been much work on it since Tom Kelly left the project quite a long time ago, but Didouph was optimistic.  When he joined Team Unity, he placed LiveCD development on the back burner and worked hard with the graphics team on logo development.

Later, it became apparent that in order to keep creating a great distribution that could remaster itself, we needed to make improvements to the LiveCD code.  First, it needed 64-bit support.  Second, it needed better hardware detection than it had.  Third, it needed internationalization work.  Fourth, it needed to support newer kernel versions than it did.  All of those things have been accomplished, with internationalization still being worked on.

When we initially took over the ‘modernization’ of LiveCD, we didn’t all flock to berlios to do so.  Work instead began when we gave a small sliver of our own SVN over to LiveCD.  It made sense logistically for our developers to be able to commit code in the same place instead of at a third party (berlios); the reason being, we needed many commits fast and didn’t want to wait…we were ready to move forward immediately.  We snagged the GPL’d LiveCD code and hosted it on our SVN.

Since Didouph was the maintainer of LiveCD, we felt it only natural that Unity would lend a hand to him and his project by taking over development.  An entire team working on LiveCD means greater output and more advancement.  Thus, Unity’s maintainership of the LiveCD project was born.  Anyone is welcome to take the code and use it as they see fit.  We’re working on getting LiveCD its own proper SVN or Git repository at a public site away from Unity Linux…if you’d like (read-only) access to the LiveCD SVN, drop Unity Linux a line via their contact page.

Common Myths Surrounding Unity Linux

Heard any good ones lately?  If I don’t cover the ones you’ve heard here, please leave me a comment and I’ll address yours specifically.

Myth #1 – Unity Linux is just PCLinuxOS rebranded

Most of the developers of Unity Linux were contributors to PCLinuxOS during the time that Texstar had stepped away.  As contributors, they were not part of the developer team.  They had limited access to the core, inner workings of PCLinuxOS.  How do I know?  I was a developer…the main web developer…for PCLinuxOS; I monitored all mailing lists and all websites, and was even chief of MyPCLinuxOS.com.  Very few people on the PCLinuxOS development team are now part of Unity Linux…because the PCLinuxOS development team was kept small.

When Unity Linux was initially started, the contributors and developers involved grabbed a ‘snapshot’ of the PCLinuxOS repositories and began bringing packages up to date.  They quickly ran into trouble, because PCLinuxOS used such an outdated toolchain that many new packages wouldn’t compile with it.

After some discussion, the developers abandoned PCLinuxOS packages and instead worked with Mandriva packages.  This allowed Unity to move forward without the old toolchain and outdated core.  Now, most of this stuff doesn’t matter to the end user…they just want a stable environment.  But the Unity Linux developers wanted to push forward with the latest kernels, the latest rpm version, and the latest Smart package manager versions.  Doing so required massive leaps forward, even from Mandriva.

As you can see, while Unity Linux originally started as a PCLinuxOS fork, the developers abandoned that fork and rebased on Mandriva.  They now stay in line with Mandriva development.  If you have Mandriva and Unity Linux questions, please stop into the Unity Linux chat channel on Freenode (#unitylinux) and ask proyvind…he is the Mandriva Linux representative who works with Unity Linux 🙂

Myth #2 – Unity Linux Stole mklivecd aka livecd from PCLinuxOS

This is a pretty funny one, and I’ve seen quite a few references to ‘stealing’ GPL code.  First things first:  you cannot ‘steal’ GPL code.  It just can’t be done.  Secondly, the LiveCD project was stagnant and had a SINGLE developer working on it.  That developer joined Unity Linux, and all 25+ developers there decided to help him make some progress on it.  In the meantime, they took the initiative to make improvements.  For example, they gave it 64-bit compatibility.  They gave it better detection.  They gave it better international language support.  All of those things are made available for FREE to any distribution wanting to download a snapshot from SVN.

Now, if anyone has a claim to LiveCD as ‘theirs’, it would be Jaco Greefe, who was the principal on the project LONG before any distribution other than Mandrake (aka Mandriva) even worked with it.  Texstar grabbed Jaco’s mklivecd project and used it to create the original PCLinuxOS 2003 release.  That release was based on Mandrake 9.2, and a few other Mandrake developers began to debug the scripts through the creation of PCLinuxOS.  Mandrake was a trademarked name, so Texstar named his distribution PCLinuxOS.

As you can see, if any one distribution has a claim to mklivecd, it would be Mandrake aka Mandriva, which is where the script’s creators came from.  It’s also where the script was first made usable.  However, the claim that Texstar made it into a nice package with PCLinuxOS…that is totally true.  What we’re doing now by developing it is making sure it continues to progress into the future, with 64-bit support and beyond…even if udev is someday dropped from Linux, we’ll make sure it works…and hopefully it will work for more than just Mandriva-derived distributions.

There have been many attempts by Unity Linux developers to get other distributions that use mklivecd involved with the development of it.  That invitation is always open to any and all distributions that use it.

Myth #3 – Unity Linux wants to steal away users from other distributions of Linux

The main reason this isn’t true is that Unity Linux targets DEVELOPERS.  We don’t target end users.  If end users like Unity, GREAT!  If not, we don’t worry about it.  Unity Linux has derivative distributions called “branches” that work to target the end user.  Unity Linux itself is targeted squarely at distribution developers and advanced users who want to be able to use the mklivecd scripts.

Myth #4 – Unity Linux DOESN’T use PCLinuxOS at all in development

This is half true.  We don’t ‘use’ PCLinuxOS to create things…we use it for mirror sync.  Paul Grinberg, a developer on the team, has a PCLinuxOS box that he doesn’t otherwise use.  During the initial setup of Unity Linux, we based things on PCLinuxOS before purging it and switching to Mandriva.  Since the developer mirror server (referred to on the mailing lists as the dev server) still ran PCLinuxOS and Unity Linux didn’t have a release yet, we saw no reason to change it.

As Unity Linux still has no stable release as of March 29, 2010, that developer mirror server still runs PCLinuxOS and pushes uploaded packages developed on a Unity Linux server to various mirrors for propagation.

In other words, the PCLinuxOS server Unity Linux uses is just a web server.  It will be replaced with Unity Linux when 2010 is released.  Until then, taking the time to wipe it out and repopulate it would throw a kink into the flow of package development, so the developers have put this ‘to-do’ item down as something to accomplish after the stable release.

Unity Linux and Open Source

Unity Linux does a great job of contributing to upstream projects.  As an example, David Smid, a Unity Linux developer, is also a Smart Package Manager (SPM) developer.  This gives Unity the ability to test the latest and greatest SPM and get things quickly patched/fixed/redesigned.  Other projects, such as mklivecd, are developed openly by Unity Linux, and contributors are welcome.  Unity Linux contributes bug finds to Mandriva through use of the Mandriva Cooker repository.  Unity Linux developer Paul Grinberg contributed the Google Map integration for MirrorMon (which you can view on our Mirror Status Page) back upstream to the creator of MirrorMon.  Unity Linux also contributes upstream to rpm5.org.

Unity Linux also has a working partnership with Yoper Linux.  Why?  Because Yoper Linux uses many of the same core technologies (Smart, rpm5) that Unity Linux uses and because the lead developer, Tobias Gerschner, is an all around great guy :).

You can see everything that Unity Linux works on by visiting our repository:  http://dev.unity-linux.org/projects/unitylinux/repository

Development is done in the open, not behind closed doors:  http://groups.google.com/group/ul-developers

Unity Linux strives, with an almost rabid will, to keep everything in the open for users and branch developers so that they are never left wondering what’s going on with their distribution.  The developers continue to engage other distributions to work with them and will continue to do so in the future.

Closing Thoughts

Unity Linux doesn’t target the same users as your average distribution of Linux…it’s after the more savvy users out there.  The ones who want to create something and make something from the core image.  Users who like to tinker and mess and break things.

Unity got off to a rough start with much FUD-slinging and many accusations.  Hopefully, the actions Unity has taken to keep its project open will show the intent of the developers: to make a great core on which others can branch, all the while remaining open and free for everyone.

Convert PNG to GIF via Command Line

I installed a bare-bones Arch Linux system today and took a screenshot.  With no graphics utilities installed, I needed a way to convert a PNG to a GIF for a Simple Machines forum template thumbnail.  I figured I’d use a command line utility, and ImageMagick is installed by default on most distributions.  A quick read through the ImageMagick manpage turned up the convert command, so I thought I’d share it with everyone.  Use convert in the following fashion:  convert [input-options] input-file [output-options] output-file

convert SMFPress.png -channel Alpha -threshold 80% -resize 120x120 thumbnail.gif

This did a quick, same-size conversion with little loss for me to display the thumbnail online.  For more information on the options I used and other options that I didn’t use, take a peek at the ImageMagick Online Help Page for convert.
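As a bonus, if you ever need to convert a whole directory of screenshots the same way, ImageMagick’s companion tool mogrify can batch the job (a quick sketch; note that it writes the converted .gif files alongside the original .png files):

mogrify -format gif -resize 120x120 *.png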

Adding Color to Bash List Command Part II

I previously blogged about how to add color to the ‘ls’ command using a config file and an alias.  I then stumbled across a nugget of wisdom from a Foresight Linux user on the developers mailing list, who gave a handy command that remedies some problems with missing color in a terminal.

On some distributions, the system-wide /etc/DIR_COLORS* files are removed or not present.  This results in no colors being shown in a terminal when listing directories and filenames.  If you find yourself in this boat, try the following commands to re-create this configuration:

devnet-> cd ~/
devnet-> dircolors -p >.dircolors

This creates a default color profile for your shell if one wasn’t there or was accidentally removed.  For more information on the dircolors command, try ‘man dircolors’.  Please also note that dircolors works by generating the LS_COLORS environment variable, which ls then uses for your session.
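To make the colors stick across sessions, you can have your shell load that file at startup.  A minimal sketch, assuming GNU coreutils and a Bourne-style shell like bash:

devnet-> echo 'eval "$(dircolors -b ~/.dircolors)"' >> ~/.bashrc
devnet-> echo "alias ls='ls --color=auto'" >> ~/.bashrc

The first line has dircolors translate ~/.dircolors into an LS_COLORS assignment and evaluates it; the second makes ls actually use those colors.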

For more information on LS_COLORS and how it pertains to the terminal/shell/CLI/prompt, there are a few blog posts around the web that do an excellent job of explaining it.
