Midori, Flash, and Unity Linux 2010

I just took a look at how Unity Linux 2010.1 is shaping up and found that the Flash Player plugin doesn’t work with the default browser, Midori.  Here’s a quick fix for getting Flash to work with Midori on Unity 2010.  First, install the flash-player-plugin package (as root in a terminal, or use the GUI):

smart install flash-player-plugin

Next, we need to create a directory under your profile to house the Flash plugin and then copy it there.  A symbolic link would probably work just as well, though I didn’t try that out…one is sketched below.

mkdir -p ~/.mozilla/plugins && cp /usr/lib/mozilla/plugins/libflashplayer.so ~/.mozilla/plugins/
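If you’d rather not duplicate the file, here’s the symlink variant I mentioned (untested on my end, so fall back to the copy if Midori doesn’t pick it up):

ln -s /usr/lib/mozilla/plugins/libflashplayer.so ~/.mozilla/plugins/libflashplayer.so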

That’s it; Flash should work now.  I’ve done this on 32-bit Unity Linux 2010.1 on a Gateway M250.  Hopefully this helps someone out there 🙂

Interesting Statistics

Here are some very interesting statistics I’ve noticed since moving the site to a Linode VPS.

If you take a look at the graphic below, the spike in the middle will probably stick out quite a bit.  Oddly enough, the spike in CPU usage (which Linode regulates per VPS) was matched by a spike in disk usage…mainly because the box began to swap when CPU/RAM use skyrocketed.  All of this happened with Ubuntu 10.04 installed.  CentOS was the first distro I tried, but I quickly switched to Ubuntu when I spotted a really nice how-to in the Linode document library.  Oh, and please excuse my horrible GIMP skills on the image below…it was a quick and dirty edit:

[image: CPU usage graph]

After switching to Ubuntu, I began receiving alarms on my account due to the high CPU and disk usage.  I attempted to tweak settings and configuration files for about a week and realized it just wasn’t going to work for me.  I switched to Debian Lenny, and the move was a positive one, as these pictures reflect.

[image: disk usage graph]

I was hoping Ubuntu 10.04 would be a good fit for me since it is a long term support (LTS) release.  CentOS is my normal server distribution of choice, but I really wanted to branch out and go with something different.  I used a Linode StackScript for WordPress on CentOS but elected for vanilla installs of Ubuntu and Debian afterwards (I didn’t like NOT knowing what was installed when I first logged in…call me a control freak).

I just found it interesting that Ubuntu 10.04 did so horribly in this instance.  After investigating, I found a few likely suspects:

  1. The default Apache install in Ubuntu leaves a lot to be desired…even after tweaking both it and PHP for days, I couldn’t get them to lay off the resources.  Even switching to mpm_worker and FastCGI did little to settle things down.
  2. Ubuntu’s default swappiness is bad…it’s set at 60 (I normally use 10), so the system swapped every chance it got (see the snippet after this list for how to change it).
  3. mod_php on Ubuntu is hungry for all your CPU, RAM, and disk; be warned!
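For reference, here’s how I knock swappiness down to my preferred value (the 10 is just my choice; pick your own):

# check the current value
cat /proc/sys/vm/swappiness
# change it on the fly
sysctl -w vm.swappiness=10
# make it stick across reboots
echo "vm.swappiness = 10" >> /etc/sysctl.conf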

Debian, as the parent distribution of Ubuntu, would presumably suffer from the same problems…except it doesn’t.  Things are working great with it, and I’d recommend it for any of your server needs!  Has anyone else seen this oddity with Ubuntu 10.04?  If so, please drop me a comment below.

Do Package Managers Spoil Us?

I thought of this interesting question the other day while messing around with Slackware 9.0, which was one of the last versions of Slackware to come on a single disk. The goal was to take a Slackware 9.0 install all the way to the most recent stable release, and I almost accomplished it. glibc was the biggest hassle…I made it to Slackware 11.0 before something broke the boot entirely. All things considered, I spent three days trying to bring Slackware 9 up to current.
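For the curious, each hop went roughly like the procedure in Slackware’s UPGRADE.TXT, run against the next release’s package tree (a rough sketch from memory; the exact order and extra steps varied per release):

# upgrade the C library and the package tools first
upgradepkg glibc-solibs-*.tgz
upgradepkg pkgtools-*.tgz
# then upgrade everything else, pulling in packages new to the release
upgradepkg --install-new *.tgz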

Slackware, for those of you who don’t know, has no dependency-resolving package manager. A good attempt was once made with swaret, which was my first jump into package managers with dependency resolution altogether when it came out…but swaret is no longer maintained and doesn’t really work well anymore.

Since Slackware has no real dependency-resolving package manager…it’s one of the last ‘true’ Unix-like Linux versions out there. Back in the early to mid nineties…things were exactly like this. If you wanted to update your Linux system…you stepped through it manually and tried to get things to work. What was great about Slackware was making your own Slack packages from source…no dependency resolution, but in the process of making the package you’d eventually have all the dependencies installed. In this entire process, you became VERY familiar with your system…how it booted, at what runlevel things occurred, how cron jobs worked, etc. You were baptized by fire, so to speak…you were to sink or swim.
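If you never had the pleasure, rolling your own package went roughly like this (a bare-bones sketch with a made-up package name; real packages wanted a description file and a proper build script):

# build the source and install it into a staging directory
./configure --prefix=/usr
make
make install DESTDIR=/tmp/pkg
# turn the staging directory into a Slackware package
cd /tmp/pkg
makepkg ../foo-1.0-i486-1.tgz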

As I said, this got me thinking…do we rely on dependency-resolving package managers TOO much? They’re cliché now, of course…run of the mill. Back in the 1990s, though, rpm was the only true package management system around…and rpm was never designed for internet consumption. The guys who wrote rpm had CD and floppy upgrades in mind. Fast forward to now and we have zypper, pacman, urpmi, apt, and conary…all built with online repositories in mind. Do these managers do the heavy lifting for new users? Do they spoil them?

Do systems break less, and get fixed more easily, thanks to package managers? Does it mean that the new user of today won’t be as experienced as the old user of yesterday?

I think it might.

Users in the past had to chip away and reassemble with less documentation and no package manager. This meant that the user of yesterday ripped systems and packages apart to discover how they worked and which cogs fit where.

The user of today follows step-by-step instructions, and most package developers give their software a sane set of defaults when it is installed.

Does this make for lazy users?

I don’t think users are lazy per se…but, as previously stated, spoiled. And it’s no fault of their own…it’s the direction the software has taken us. Now the questions we need to answer are:

  1. Is this the correct direction to be heading?
  2. Are there better approaches to package management that don’t follow the model we currently have (other than Conary)?
  3. Can we come up with a system that doesn’t leave new users spoiled?

I think I’m of both worlds…I started off with no package manager but managed to ride the wave of Red Hat 7.2 and above, followed by Mand{rake,riva} and PCLinuxOS. I’m both spoiled and unspoiled. I know what it takes to manage a system without a conventional package manager, but I also know how much time using one can save me. I sometimes find myself wanting less though…less and more. Less time and more hands-on gutting of the system. I think I’m in the minority though.

How about you, as a reader of this article? Do you think new users are spoiled by conventional package management systems? Do you see solutions or have ideas we can discuss? Is this really just a process we can improve or is there any programming to be done? Please sound off in the comments section!

Rethinking Home Servers

Since my first home-built server (a Pentium 75MHz behemoth) I’ve used Red Hat based distributions on my home server.  This lasted until around 2002-2003, when I moved into a 4 bedroom house with 3 of my Air Force buddies and one of them wanted to learn Linux.

I knew from experience in the mid-nineties that Slackware was probably the most Unix-like distribution out there…I felt quite at home with it after learning the *nix ropes on Solaris 2.0.  So we configured Slackware 8.1 on a dual-processor tower server he was lucky enough to acquire as our home firewall and all-around-great-Linux box.  He took his beginning steps there and flourished, since our Air Force job already had us jumping around in a VAX/VMS mainframe.  We had many late night hacking sessions attempting to get things to work or compile.  We also had a multi-GB hard disk (unheard of at the time!) shared over Samba.

After I moved out, I continued to keep the Slackware box up to date.  I moved onward to Slackware 9.  Samba operated like a champ, and Slackware made a great router and DHCP server.  Then I discovered ClarkConnect and loved the web interface.  I could do things in half the time!  I could do them over the web from work without SSH tunneling!  All this appealed to me at the time.

I continued to run ClarkConnect from that point on, all the way up to when it changed to ClearOS this past year.  Indeed, ClearOS is now my central server.

The only problem is that I’ve suffered 2 of the most catastrophic losses of files in my Samba shares while running ClarkConnect/ClearOS…and I didn’t connect the dots between these separate incidents until just recently.

The first loss came when an entire Samba share was completely eradicated…13GB of music was just gone.  The second loss happened just the other day, when tons of scanned pictures just VANISHED into thin air.  Each time this happened, I was running ClarkConnect/ClearOS.  Each time it happened, a few users were reporting instability in those distributions’ forums.  I am not sure how it could have happened, and I was caught completely off guard the second time…my backups were not yet configured since it was a new server.  The first time it happened…I didn’t know the value of a good backup routine.  So each time, no backups 🙁  Lesson learned the hard way, but learned nonetheless.

I recall running Slackware on my server and NEVER having the problems I’ve had with ClarkConnect/ClearOS.  This got me rethinking my home server design.  Servers should be the epitome of stability.  One should be able to migrate from one version of the operating system to the next with few hiccups.  Considering all of this, it’s very apparent that I should be running a Slackware core on my main Samba server.

I will be making that transition in the next week or two, moving to a Slackware-core-based server.  I’m not sure what to use for backups across the network (I usually mirror the drive to an NTFS drive in my Windows-based multimedia server), nor for backups locally to other hard drives…a sketch of the kind of thing I’m considering follows.  If you have any suggestions, I’d really like to hear them.  Also, I’d like to know what readers use for a server.  Please vote for your favorite below, drop me a comment letting me know specifics, and thanks for your help!
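For what it’s worth, the network side could be as simple as a nightly rsync mirror…a rough sketch, with a hostname and paths that obviously aren’t yours:

# mirror the samba shares to another box, removing files deleted from the source
rsync -av --delete /srv/samba/ backupbox:/backups/samba/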

[poll id=”3″]


What Is Unity Linux?

There’s been a lot of confusion about exactly what Unity Linux is.

I thought I’d talk a bit about that today.  I’d like to cover what Unity uses for its ‘guts’, dispel some myths surrounding Unity, and talk briefly about how Unity is doing all it can to further open source and Linux by contributing to the projects it’s involved with.  The reason I know so much about this topic is that I’m the webmaster and host for the Unity Linux project as well as one of the documentation team members.  So, let’s take a look first at what Unity Linux is…

What is Unity Linux

Unity Linux is not a conventional distribution of Linux.  It’s a core on which developers can build their own distribution of Linux.  We set out from the start to provide an excellent minimal graphical environment that gives developers “just enough graphics” to create something.  The smaller, the better.  We went with Openbox because of its size and stability.  We selected Mandriva as our base because of the number of packages they provide and the quality of those packages.  We pushed LXPanel as a minimal panel because it provides just enough functionality for distro developers to see what they’ve installed after they’ve installed it…it’s also familiar to most people, whereas Openbox’s right-click menus may not be.  All in all, our target for the core release is developers.  We’re not designing this basic desktop to be used by end users.  We’re not trying to win any awards with our awesome minimalistic desktop skills.  Why would we do this?  To answer that, you have to take a look at our developers.

One of our developers, Kdulcimer, is the lead developer of TinyMe Linux.  A few years ago, he created a fantastic minimalistic “remaster” of PCLinuxOS.  It was wildly popular and continues to be so today.  Kdulcimer was one of the first developers to go with Unity Linux for his core.  Our other developers saw what Kdulcimer did with his distro and how small he made the core.  They learned from how he did things and applied it to Unity.  Thus, Unity has a small base…as evidenced by both beta releases.  Upcoming release candidates will be very much the same.

Lead developer gettinther does a good job explaining what Unity is:

One of the big issues facing small distros is that each group has a limited ability to maintain a healthy, up-to-date core.  Most people prefer to focus on the DE / user interface, working on the look & feel rather than the internals.  Those distros end up with a stale core, which in turn causes numerous “hard-to-find” issues.

Most of the distros with us existed before Unity, like TinyMe, Sam (an abandoned project now), Granular, and Synergy (formerly eeepclos).  The idea is to create distros only insofar as “presetting desktops by people who love those desktops”.  Rather than having a “one shoe fits all”, we decided to provide a core module and look after maintaining it.  Each branch distribution joins the team and has full developer access.  For Unity to become a full-fledged distro would mean favoring one DE over others, so we limit the scope to the core product (we maintain the various DEs too, but leave the DE-specific changes to the branches).  It makes it a little more difficult to install stuff, but it also means that all DEs are looked after.

As far as the user is concerned, it means that each branch has a say in the development of the core, which ensures that the distro is well supported.  It pools the efforts of distros that would otherwise be on their own, which means a larger development team and, as such, better packages.

So Unity Linux is a base on which to build.  A foundation for “remasters” to build from.  But what is a remaster?  What technologies does Unity use? Let’s take a look at the internals of Unity next.

Unity Linux Internals aka Guts

We initially set out not only to have a small graphical base but also to wrap around the LiveCD project.  For those of you who don’t know what LiveCD is, you can visit the old berlios.de project page:

The project features automatic hardware detection and setup, and utilises compression technology to build a LiveCD from a partition much larger than would typically fit on a CD. (Up to 2GB for a normal 650MB CD.) When booting from this LiveCD, the data is transparently decompressed as needed. LiveCD now supports udev.

Currently,  Mandrakelinux and PCLinuxOS are supported as a host for creation of the LiveCD, i.e. we are only able to create LiveCD’s from a MDK or PCLinuxOS install. The LiveCD scripts are still beta, and bugs are being eliminated. Your help and feedback are appreciated!

This set of scripts allows a person to make a LiveCD copy of their desktop for backup purposes or as a standalone Linux distribution.  When you create that new ISO or backup ISO, you have ‘remastered’ the master copy.  So the LiveCD scripts are really just a set of tools that let a user create something new or back up their existing desktop as a live CD.
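To give you an idea, remastering a running system boils down to a single command…something along these lines (options vary by version, so check mklivecd --help rather than trusting my memory):

# build a bootable, compressed ISO from the currently running system
mklivecd mylivecd.iso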

The project at berlios was taken over by Didouph as lead developer just before Unity was formed.  There hadn’t been much work on it since Tom Kelly left the project quite a long time ago, but Didouph was optimistic.  When he joined Team Unity, he placed LiveCD development on the back burner and worked hard with the graphics team on logo development.

Later, it became apparent that in order to keep creating a great distribution that could remaster itself, we needed to make improvements to the LiveCD code.  First off, it needed 64-bit support.  Secondly, it needed better hardware detection than it had.  Third, it needed internationalization work.  Fourth, it needed to support newer kernel versions than it did.  All of those things have been accomplished, with internationalization still being worked on.

When we initially took over the ‘modernization’ of LiveCD, we didn’t all flock to berlios to do so.  Work instead began when we gave a small sliver of our own SVN over to LiveCD.  It made sense for our developers to be able to commit code in the same place rather than at a third party (berlios); the reason being, we needed many commits fast and didn’t want to wait…we were ready to move forward with it immediately.  So we snagged the GPL’d LiveCD code and put it on our SVN.

Since Didouph was the maintainer of LiveCD, we felt it only natural that Unity would lend a hand to him and his project by taking over development.  An entire team working on LiveCD would mean greater output and more advancement.  Thus, Unity maintaining the LiveCD project was born.  Anyone is welcome to take the code and use it how they see fit.  We’re working on getting LiveCD its own proper SVN or Git repository at a public site away from Unity Linux…if you’d like (read-only) access to the LiveCD SVN, drop Unity Linux a line via their contact page.

Common Myths Surrounding Unity Linux

Heard any good ones lately?  If I don’t cover the ones you’ve heard here, please leave me a comment and I’ll address yours specifically.

Myth #1 – Unity Linux is just PCLinuxOS rebranded

Most of the developers of Unity Linux were contributors to PCLinuxOS during the time that Texstar had stepped away.  As contributors, they were not part of the developer team.  They had limited access to the core, inner workings of PCLinuxOS.  How do I know?  I was a developer…the main web developer…for PCLinuxOS, and I monitored all mailing lists and all websites, and was even chief of MyPCLinuxOS.com.  Very few people on the PCLinuxOS development team are now part of Unity Linux…because the PCLinuxOS development team was kept small.

When Unity Linux was initially started, the contributors and developers involved grabbed a ‘snapshot’ of the PCLinuxOS repositories and began bringing packages up to date.  They quickly ran into trouble because PCLinuxOS used such an outdated toolchain that many new packages wouldn’t compile with it.

After some discussion, the developers abandoned the PCLinuxOS packages and instead worked with Mandriva packages.  This allowed Unity to move forward sans old toolchain and outdated core.  Now, most of this stuff doesn’t matter to end users…they just want a stable environment.  But the Unity Linux developers wanted to push forward with the latest kernels, the latest rpm version, and the latest Smart package manager versions.  Doing so required massive leaps forward even from Mandriva.

As you can see, while Unity Linux originally started as a PCLinuxOS fork, the developers abandoned that fork and rebased on Mandriva.  They now stay in line with Mandriva development.  If you have Mandriva and Unity Linux questions, please stop into the Unity Linux chat channel on Freenode, #unitylinux, and ask proyvind your questions…as he is the Mandriva Linux representative who works with Unity Linux 🙂

Myth #2 – Unity Linux Stole mklivecd aka livecd from PCLinuxOS

This is a pretty funny one, and I’ve seen quite a few references to ‘stealing’ GPL code.  First things first: you cannot ‘steal’ GPL code.  It just can’t be done.  Secondly, the LiveCD project was stagnant and had a SINGLE developer working on it.  That developer joined Unity Linux, and all 25+ developers there decided to help him make some progress on it.  In the meantime, they took the initiative to make improvements.  For example, they gave it 64-bit compatibility.  They gave it better detection.  They gave it better international language support.  All of those things are made available for FREE to any distribution wanting to download a snapshot from SVN.

Now, if anyone has a claim to LiveCD as ‘theirs’, it would be Jaco Greefe, who was the principal on the project LONG before any distribution other than Mandrake aka Mandriva even worked with it.  Texstar grabbed Jaco’s mklivecd project and used it to create the original PCLinuxOS 2003 release.  That release was based on Mandrake 9.2 at the time, and a few other Mandrake developers began to debug the scripts through the creation of PCLinuxOS.  Mandrake was a trademarked name, so Texstar named his distro PCLinuxOS.

As you can see, if any one distribution has a claim to mklivecd, it would be Mandrake aka Mandriva, which is where the script’s creators came from.  It’s also where the script was first made usable.  However, the claim that Texstar made it into a nice package with PCLinuxOS…that is totally true.  What we’re doing now by developing it is making sure it continues to progress into the future, with 64-bit support and even past the day udev is dropped from Linux…no matter what, we’ll make sure it works…and hopefully it will work for more than just Mandriva-derived distributions.

There have been many attempts by Unity Linux developers to get other distributions that use mklivecd involved with the development of it.  That invitation is always open to any and all distributions that use it.

Myth #3 – Unity Linux wants to steal away users from other distributions of Linux

The main reason this isn’t true is that Unity Linux targets DEVELOPERS.  We don’t target end users.  If end users like Unity, GREAT!  If not, we don’t worry about it.  Unity Linux has derivative distributions called “branches” that work to target the end user.  Unity Linux itself is targeted squarely at distribution developers and advanced users who want to be able to use the mklivecd scripts.

Myth #4 – Unity Linux DOESN’T use PCLinuxOS at all in development

This one is half true.  We don’t ‘use’ PCLinuxOS to create things…we use it as a mirror sync.  Paul Grinberg, a developer on the team, has a PCLinuxOS box that he doesn’t otherwise use.  During the initial setup of Unity Linux, we based things on PCLinuxOS before purging it and switching to Mandriva.  Since the developer mirror server (referred to on the mailing lists as the dev server) still ran PCLinuxOS and Unity Linux didn’t have a release yet, we saw no reason to change it.

As Unity Linux still has no stable release as of March 29, 2010, that developer mirror server still runs PCLinuxOS and pushes uploaded packages developed on a Unity Linux server to various mirrors for propagation.

In other words, the PCLinuxOS server Unity Linux uses is just a web server.  It will be replaced with Unity Linux when 2010 is released.  Until then, taking the time to wipe it and repopulate it would throw a kink in the flow of package development, so the developers have marked this ‘to-do’ item as something to accomplish after the stable release.

Unity Linux and Open Source

Unity Linux does a great job of contributing to upstream projects.  As an example, David Smid, a Unity Linux developer, is also a Smart Package Manager (SPM) developer.  This gives Unity the ability to test the latest and greatest SPM and get things quickly patched/fixed/redesigned.  Other projects, such as mklivecd, are developed openly by Unity Linux, and contributors are welcome.  Unity Linux contributes bug finds to Mandriva through use of the Mandriva Cooker repository.  Unity Linux developer Paul Grinberg contributed the Google Maps integration for MirrorMon, which you can view on our Mirror Status Page, back upstream to the creator of MirrorMon.  Unity Linux also contributes upstream to rpm5.org.

Unity Linux also has a working partnership with Yoper Linux.  Why?  Because Yoper Linux uses many of the same core technologies (Smart, rpm5) that Unity Linux uses and because the lead developer, Tobias Gerschner, is an all around great guy :).

You can see everything that Unity Linux works on by visiting our repository:  http://dev.unity-linux.org/projects/unitylinux/repository

Development is done in the open, not behind closed doors:  http://groups.google.com/group/ul-developers

Unity Linux strives with an almost rabid will to keep everything in the open for users and branch developers so that they are never left wondering what’s going on with their distribution.  The developers continue to try to engage other distributions to work with them and will continue to do so in the future.

Closing Thoughts

Unity Linux doesn’t target the same users as your average distribution of Linux…it’s after the more savvy users out there.  The ones who want to create something and make something from the core image.  Users who like to tinker and mess and break things.

Unity got off to a rough start with much FUD slinging and accusations.  Hopefully, the actions Unity has taken to keep its project open will show the intent of the developers…to make a great core that others can branch from, all the while remaining open and free for everyone.

Unity 2010 Beta 2 Impressions

As noted previously, I’ve been pretty hard pressed lately in my secular job due to migrations and other fun activities happening throughout the past few months.  I did, however, get the chance to download Unity 2010 Beta 2 and give it a go.  I had some problems when booting: I was brought to a blank black screen with a mouse pointer no matter what options I passed during boot.  To get by this, I had to follow some IRC advice in #unitylinux (thanks wile_netbook!): switch to a second tty, kill the X server and GDM, and then execute do-vesa.  It’s hard to do quickly, though, because GDM will try to restart X and switch you back to a graphical runlevel.  To get by this, you’ll need to do the following:

Drop into a different tty.  Log in as root…if you’re on the LiveCD, the password is root.  Execute:

ps aux | more

Make note of the PIDs for X and GDM.  Write them down, then replace the placeholders below with your PID numbers:

kill -9 PID_for_X && kill -9 PID_for_GDM && do-vesa
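If hunting down the PIDs by hand is a pain, something like this may also do the trick (assuming the processes are actually named X and gdm on your system; adjust if not):

kill -9 $(pidof X) $(pidof gdm) && do-vesa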

You should now see something other than a black screen with a mouse cursor.  I’m not sure how many systems this affects…but I know my Dell Latitude D630 laptop took it on the chin.  Not a huge problem for a beta…I mean, a distro can’t be all things to everyone.

Overall though, Unity 2010 Beta 2 is much more solid than Beta 1 was for me, once I got past the initial X problem.  Everything works as it should as far as sound, Internet, and wireless are concerned.  I quickly removed PCManFM and replaced it with Thunar, my file manager of choice.  I removed LXPanel and installed tint2.  Installed Nitrogen to manage wallpaper.  Installed Parcellite to give me a clipboard manager.  Installed volwheel to give me a volume applet.  Installed Pragha to give myself a great music player.  Installed Irssi to get my IRC fix, and put Pidgin in play for IM.  I had a usable, customized desktop within about an hour.  And it’s been really solid…just as solid as the Arch Openbox desktop I run at home…which makes me feel good about this beta.
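If you’d like to replicate that setup, the whole pile can probably be pulled in with one smart command (package names are from memory; verify them with smart search first):

smart install thunar tint2 nitrogen parcellite volwheel pragha irssi pidgin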

So what else have I been working on?  I’ve been working on a large (VERY large) tutorial on file permissions and making use of groups for file/directory access, to add to the tutorials section of YALB.  This thing has been in the works since last year, and I’m attempting to finish it before the month’s end to give a good representation of what file permissions in Linux are for and how they work with users and groups.  I’m also going to write up a tutorial on how to customize Unity 2010 Beta 2 into a lightweight Openbox desktop.  So, some good updates hovering on the horizon.  Stay tuned 😀
