Trashing Chromebooks
June 5th, 2014 under Computers, Hardware, rengolin, Unix/Linux. [ Comments: 8 ]

At Linaro, we do lots of toolchain tests: GCC, LLVM, binutils, libraries and so on. Normally, you'd find a fast machine where you could build toolchains and run all the tests, integrated with some dispatch mechanism (like Jenkins). Normally, you'd have a vast choice of hardware to choose from for each form factor (workstation, server, rack mount), and you'd pick the fastest CPUs and a fast SSD with enough space for the huge temporary files that toolchain testing produces.

[image: tcwg-rack]

The only problem is, there aren't any ARM rack servers or workstations. In the ARM world, you either have many cheap development boards, or one very expensive (100x more) professional development board. Servers, workstations and desktops are still non-existent. Some have tried (Calxeda, for example) and failed. Others are trying with ARMv8 (the new 32/64-bit architecture), but all of those are under heavy development, so not even alpha quality.

Meanwhile, we need to test the toolchain, and we have been doing it for years, so waiting for a stable ARM server was not an option and still isn't. A year ago I took on the task of finding the most stable development board that was fast enough for toolchain testing and filling a rack with it. Easier said than done.

Choices

Amongst the choices I had, Panda, Beagle, Arndale and Odroid boards were the obvious candidates. After initial testing, it was clear that Beagles, with only 500MB of RAM, were not able to compile anything natively without some major refactoring of the build systems involved. So, while they're fine for running remote tests (SSH execution), they have very little use for anything else related to toolchain testing.

[image: Panda board]

Pandas, on the other hand, have 1GB of RAM and can compile any toolchain product, but the timing is on the wrong side: taking 5+ hours to compile a full LLVM+Clang build, a full bootstrap with testing would take a whole day. For background testing on the architecture, that's fine, but for regression tracking and investigative work, they're useless.

With the Arndales, we haven't had such luck. They're either unstable or deprecated months after release, which makes it really hard to acquire them in any meaningful volume for contingency and scalability plans. We were left, then, with the Odroids.

[image: Arndale board]

HardKernel makes very decent boards, with fast quad-A9 and octa-A15 chips, 2GB of RAM and a big heat sink. Compilation times were in the right ballpark (40~80 min), so they're good both for catching regressions and for bootstrapping toolchains. But they had the same problem as every other board we tried: instability under heavy load.

Development boards are built for hobby projects and prototyping. They can normally run at fairly high frequencies (1~2 GHz), but they're designed for low-power, mostly stand-by usage. Toolchain testing, however, involves building the whole compiler and running the full test suite on every commit, which puts them at 100% CPU usage, 24/7. Since build times are around an hour or more, by the time one build finishes, other commits have gone in and need to be tested, making it a non-stop job.

CPUs are designed to scale down their frequency when they get too hot, so throughout normal testing they stay stable at their operating temperature (~60C). Adding a heat sink only lets them run at a higher frequency while keeping the same temperature, so it doesn't solve the temperature problem.

The issue is that, after running for a while (a few hours, days, weeks), the compilation jobs start to fail randomly (the infamous “internal compiler error”) in different places of different files every time. This is clearly not a software problem, but if it were the CPU's fault, it would have happened a lot earlier, since the CPU reaches its operating temperature seconds after the test starts, yet the failures only appear hours or days into full-time running. The same argument rules out any trouble with the power supply, since that should have failed at the beginning, not days later.

The problem that the heat sink doesn’t solve, however, is the board’s overall temperature, which gets quite hot (40C~50C), and has negative effects on other components, like the SD reader and the card itself, or the USB port and the stick itself. Those boards can’t boot from USB, so we must use SD cards for the system, and even using a USB external hard drive with a powered USB hub, we still see the failures, which hints that the SD card is failing under high load and high temperatures.

According to SanDisk, their SD cards should be fine in that temperature range, but other parties might be at play, like the kernel drivers (which aren't built for that kind of load). What pointed me to the SD card in the first place was that, when running solely on the SD card (for both system and build directories), the failures appear sooner and more often than when running the builds on a USB stick or drive.

Finally, with the best failure rate at one per week, none of those boards is fit to be a build slave.

Chromebook

That’s when I found the Samsung Chromebook. I had one for personal testing and it was really stable, so amidst all that trouble with the development boards, I decided to give it a go as a buildbot slave, and after weeks running smoothly, I had found what I was looking for.

The main difference between development boards and the Chromebook is that the latter is a product. It was tested not just for its CPU, or memory, but as a whole. Its design evolved with the results of the tests, and it became more stable as it progressed. Also, Linux drivers and the kernel were made to match, fine tuned and crash tested, so that it could be used by the worst kind of users. As a result, after one and a half years running Chromebooks as buildbots, I haven’t been able to make them fail yet.

But that doesn’t mean I have stopped looking for an alternative. Chromebooks are laptops, and as such they’re built with a completely different mindset from a rack machine, and the list of modifications needed to make them fit the environment wasn’t short. Rack machines need to boot when powered up, give 100% of their power to the job and distribute heat efficiently under 100% load for very long periods of time. Precisely the opposite of a laptop design.

Even though they don’t fail the jobs, they did give me a lot of trouble, like having to be booted manually, overheating their batteries and not having a Linux image that could easily be deployed via network boot. The steps to fix those issues are listed below.

WARNING: Anything below will void your warranty. You have been warned.

System settings

To get your Chromebook to boot anything other than ChromeOS, you need to enter developer mode. With that, you’ll be able not only to boot from SD or USB, but also to change your partitions and have sudo access on ChromeOS.

With that done, you go to the console (CTRL+ALT+->), log in as user chronos (no password) and set up the boot process as described in the link above. You’ll also need to run sudo crossystem dev_boot_signed_only=0 to be able to boot anything you want.
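
As a minimal sketch (dev_boot_usb is not mentioned above, so treat it as an extra assumption for enabling SD/USB boot as well), the commands look like this:

$ sudo crossystem dev_boot_usb=1          # allow booting from SD/USB with CTRL+U
$ sudo crossystem dev_boot_signed_only=0  # allow unsigned (self-built) kernels
$ sudo crossystem dev_boot_usb dev_boot_signed_only  # print the values back to check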

The last step is to make your Linux image boot by default, so that when you power up the machine it boots Linux, not ChromeOS. Otherwise, you’ll have to press CTRL+U on every boot, and remote booting via PDUs will be pointless. You do that via cgpt.

You need to find the partition that ChromeOS boots from by listing all of them and seeing which one booted successfully:


$ sudo cgpt show /dev/mmcblk0

The right partition will have the information below appended to the output:


Attr: priority=0 tries=5 successful=1

If it has tries left and was successful, this is probably your main partition. Move it back down the priority order (to 6th place) by running:


$ sudo cgpt add -i [part] -P 6 -S 1 /dev/mmcblk0

You can also set the SD card’s partition to priority 0 by doing the same thing on /dev/mmcblk1.
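
For illustration only (the partition index here is a placeholder; check the output of cgpt show on your own card first), the equivalent commands for the SD card would look like:

$ sudo cgpt show /dev/mmcblk1
$ sudo cgpt add -i [part] -P 0 -S 1 /dev/mmcblk1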

With this, installing Linux on an SD card should get you booting Linux by default on the next boot.

Linux installation

You can choose from a few distributions to run on the Chromebooks; I have tested both Ubuntu and Arch Linux, and they work just fine.

Follow those steps, insert the SD card in the slot and boot. You should get the Developer Mode screen and, after waiting long enough, it should beep and boot directly into Linux. If it doesn’t, it means your cgpt meddling was unsuccessful (been there, done that) and it will need a bit more fiddling. You can press CTRL+U for now to boot from the SD card.

After that, you should have complete control of the Chromebook, and I recommend adding your daemons and settings to the boot process (init.d, systemd, etc.). Turn on the network, start the SSH daemon and other services you require (like buildbots). It’s also a good idea to change the governor to performance, but only if you’re going to use it for full-time heavy load, and especially if you’re going to run benchmarks. For the latter, though, you can do it on demand and don’t need to set it at boot time.
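
As a rough sketch of the kind of thing I mean (assuming an Ubuntu-style /etc/rc.local and a buildbot slave living in /home/buildbot/slave, both of which are just examples), the boot-time additions could be as simple as:

$ cat /etc/rc.local
#!/bin/sh
# Bring up what the build slave needs at boot (user and paths are illustrative)
service ssh start
su - buildbot -c "buildslave start /home/buildbot/slave"
exit 0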

To change the governor:

$ echo [scale] | sudo tee /sys/bus/cpu/devices/cpu[N]/cpufreq/scaling_governor

scale above can be one of performance, conservative, ondemand (the default), or any other governor your kernel supports. If you’re running benchmarks, switch to performance beforehand and back to ondemand afterwards. Use cpu[N] as the CPU number (starting at 0) and do it for all CPUs, not just one.
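
A small sketch to apply it to all CPUs in one go (same sysfs path as above; performance is just the example governor):

$ for cpu in /sys/bus/cpu/devices/cpu[0-9]*; do echo performance | sudo tee $cpu/cpufreq/scaling_governor; done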

Other useful scripts get the temperatures and frequencies of the CPUs:

$ cat thermal
#!/usr/bin/env bash

# Print the temperature of every thermal zone, in Celsius
ROOT=/sys/devices/virtual/thermal

for dir in $ROOT/*/temp; do
  temp=`cat $dir`
  temp=`echo $temp/1000 | bc -l | sed 's/0\+$/0/'`  # millidegrees -> degrees
  device=`dirname $dir`
  device=`basename $device`
  echo "$device: $temp C"
done

$ cat freq
#!/usr/bin/env bash

# Print the current frequency of every CPU, in GHz
ROOT=/sys/bus/cpu/devices

for dir in $ROOT/*; do
  if [ -e $dir/cpufreq/cpuinfo_cur_freq ]; then
    freq=`sudo cat $dir/cpufreq/cpuinfo_cur_freq`
    freq=`echo $freq/1000000 | bc -l | sed 's/0\+$/0/'`  # kHz -> GHz
    echo "`basename $dir`: $freq GHz"
  fi
done

Hardware changes

[image: removed Chromebook batteries]

As expected, the hardware was also not ready to behave like a rack server, so some modifications are needed.

The most important thing you have to do is remove the battery. First, because you won’t be able to boot it remotely with a PDU if you don’t, but more importantly, because the heat from constant usage will destroy the battery. Not just make it stop working, which we wouldn’t care about, but slowly release gases and bloat, which can be a fire hazard.

To remove the battery, follow the iFixit instructions here.

Another important change is to remove the lid magnet that tells the Chromebook not to boot when power is applied. The iFixit post above doesn’t mention it, but it’s as simple as prying the monitor bezel open with a sharp knife (no screws), locating the small magnet on the left side and removing it.

Stability

With all these changes, the Chromebook should be stable for years. It’ll be possible to power cycle it remotely (if you have such a unit), boot directly into Linux and start all your services with no human intervention.

The only thing you won’t have is serial access to re-flash it remotely if all else fails, as you have with most (all?) rack servers.

Contrary to common sense, the Chromebooks are a lot better as build slaves than any development board I have ever tested, and in my view that’s mainly due to the amount of testing they have gone through as a consumer product. Now I need to test the new Samsung Chromebook 2, since it’s got the new Exynos Octa.

Conclusion

While I’d love to have more options, different CPUs and architectures to test, it seems that the Chromebooks will be the go-to machines for the time being. And with all the glory going to ARMv8 servers, we may never see an ARMv7 board run stably in a rack.


Amazon loves to annoy
June 27th, 2013 under Digital Rights, Gadgets, rengolin, Software, Unix/Linux, Web. [ Comments: none ]

It’s amazing how far Amazon will go to annoy you. They will sell you DRM-free MP3 songs, and even allow you to download the full version on any device (via their web interface) for your own personal use, in the car, at home or on the move. But not without a cost, no.

For some reason, they want total control of the process, so if they allow you to download your music, it has to be their way. In the past, you had to download the song immediately after buying it, with a Windows-only binary (why?), and you had only one shot. If the link failed, you just lost a couple of pounds. To be honest, that happened to me, and customer service were glad to re-activate my “license” so I could download it again. Kudos for that.

Question 1: Why did they need external software to download the songs when they had a full-featured on-line e-commerce solution?

It’s not hard to sell on-line music; other people have been doing it for years, and not in that way, for sure. Why was it so hard for Amazon, the biggest e-commerce website on Earth, to do the same? I was not asking them to revolutionise the music industry (I leave that to Spotify), just to do what others were doing at the time. Apparently, they just couldn’t.

Recently, it got a lot better, and that’s why I started buying MP3 songs from Amazon. They now have a full-featured MP3 player on the web! They also have an Android version of it that is a little confusing but unobtrusive. The web version is great: once you buy an album you go directly to it and can start listening to the songs right away.

Well, I’m a control freak, and I want to have all the songs I own on my own server (and its backup), so I went to download my recently purchased songs. Well, it’s not that simple: you can download all your songs on Windows and Mac… not Linux.

Question 2: Why on Earth can’t they make it work on Linux?

We’re not talking about Microsoft or Apple. This is Amazon, a web company that is supposed to know how JavaScript works, right? Why create executables, ActiveX, Silverlight or whatever those platforms demand from their developers when they can do the same using just JavaScript? The era when JavaScript was too slow and Flash rocked ended, like, 10 years ago. There simply is no excuse.

Undeterred, I knew the Android app would let me download, and as an added bonus, all songs downloaded by AmazonMP3 would be automatically added to the Android music playlists, so that both programs could play the same songs. That was great, of course, until I wanted to copy them to my laptop.

When running (the fantastic) ES File Explorer, I listed the folders consuming most of the SD card, found the amazonmp3 folder and saw that all my songs were in there. Since Android changed the file-system and I can’t seem to mount it correctly via MTP (noob), I decided to use ES File Explorer (again) to select all files and copy them to my server via its own interface, which is great for that sort of thing. Well, it turns out it’s not that simple. Again.

Question 3: Why can I read and delete the songs, but not copy them?

What magic Linux permission lets me listen to a song (read) and delete the file (write) but not copy it to another location? I can’t think of a way to do that natively on Linux; it must be some Android magic, to allow for DRM crap.

By this time I was already getting nervous, so I just fired up adb shell and navigated to the directory, and when I listed the files, adb just logged out. I tried again, and it just exited. No error message, no log, no warning; it just shut down and dropped me back at my own prompt.

This was getting silly, but I had the directory, so I just ran adb pull /sdcard/amazonmp3/ and found that only the temp directory came out. What the hell is this sorcery?!

Question 4: What kind of magic stops me from copying files, or even listing files from a shell?

Well, I knew it had something to do with the Amazon MP3 application itself; it couldn’t be something embedded in Android, or the activists would have cracked on until they ceded, or at least provided a means of disabling the DRM crap from the core. To prove my theory, I removed the AmazonMP3 application and, as expected, I could copy all my files via adb to my server, where I could then back them up.

So, if you use Linux and want to download all your songs from the Amazon MP3 website, you’ll have to:

  1. Buy songs/albums on Amazon’s website
  2. Download them via the AmazonMP3 Android app (click on the album, click on download)
  3. Un-install the AmazonMP3 app
  4. Get the files via: adb pull /sdcard/amazonmp3/ (see the sketch after this list)
  5. Re-install the AmazonMP3 app (if you want, or to download more songs)
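
For reference, a minimal sketch of steps 3 and 4 from a Linux shell (the package name com.amazon.mp3 and the destination directory are assumptions; check the package list on your own device first):

$ adb shell pm list packages | grep -i amazon   # find the app's actual package name
$ adb uninstall com.amazon.mp3                  # assumed package name, see above
$ adb pull /sdcard/amazonmp3/ ~/music/amazon/   # copy the songs to the laptop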

As usual, Amazon is a pain in the back with something that should be really, really simple for them to do. And, as usual, a casual user finds their way to getting what they want, what they paid for, what they deserve.

If you know someone at Amazon, please let them know:

We’re not idiots. We know you know JavaScript, we know you use Linux, and we know you can create an amazing experience for all of us. Don’t treat us like idiots. If your creativity is lacking, just copy the design and implementation from someone else, we don’t care. We want solutions, not problems.


Fool me once, shame on you… fool me twice, shame on me (DBD)
October 23rd, 2010 under Computers, Corporate, Digital Rights, Hardware, Media, OSS, rengolin, Software, Unix/Linux. [ Comments: 4 ]

Defective by Design came out with a new story on Apple’s DRM. While I don’t generally re-post from other blogs (LWN already does that), this one is special, but not for the apparent reasons.

I agree that DRM is bad, not just for you but for business, innovation, science and the evolution of mankind. But that’s not the point. What Apple is doing with the App Store is not just locking other applications out of their hardware, but locking their hardware out of the real world.

In the late 80’s and early 90’s, all hardware platforms were like that, and Apple was no exception. Amiga, Commodore, MSX and dozens of others: each was a completely separate machine, with a unique chipset, architecture and software layers. But that never stopped people from writing code for them, putting it on a floppy disk and installing it on any compatible computer they could find. Computer viruses spread that way too, given how easy it was to share software in those days.

Ten years later, there was only a handful of architectures: Intel for PCs, PowerPC for Macs and a few others for servers (Alpha, Sparc, etc). The consolidation of hardware happened at the same time as the explosion of the internet, so not only did more people have the same type of computer, they also shared software more easily, increasing the quantity of software available (and viruses) by orders of magnitude.

Linux rode this wave from its beginning, and that was probably the most important factor in why such an underground movement got so much momentum. It was considered subversive, anti-capitalist to use free software, and those people (including me) were hunted down like communists and ridiculed as idiots with no common sense. Today we know how ridiculous that was: most companies and governments use Linux, and it would be unthinkable today not to use it for what it’s good at. But it’s not for everyone, nor for everything.

Apple’s niche

Apple has always had a niche, and they were really smart not to step out of it. Companies like Intel and ARM are trying to get out of their niches and attack new markets, to grab a share of the economy they don’t control. Intel is going small, ARM is going big, and both will get hurt. Who gets hurt more doesn’t matter; what matters is that Apple never attacked other markets directly.

Ever since the beginning, Apple’s ads have been along the lines of “be smart, be cool, use Apple”. They never said their office suite was better than Microsoft’s (as MS does with OpenOffice), or that their hardware support was better (as MS does with Linux). Once you compare your products directly with someone else’s, you’re bound for trouble. When Microsoft started comparing their OS with Linux (late 90’s), the community fought back, showing all the areas in which Windows was very poor, businesses and governments started doing the same, and that was a big hit on Windows. Apple never did that directly.

By always staying on the sidelines, Apple was the different one. In their own niche, there was no competitor. Windows and Linux never entered that space, not even today. When Apple entered the mobile phone market, they didn’t take market share from anyone else; they made a new market for themselves. People who bought iPhones didn’t want to buy anything else; they had only bought other phones because there was no iPhone at the time.

Android mobile phones are widespread, growing faster than anything else, taking Symbian phones out of the market, destroying RIM’s homogeneity, but rarely touching the iPhone market. Apple fan-boys will always buy Apple products, no matter the cost or the lower quality in software and hardware. Being cool is more important than any of that.

Fool me once again, please

Being an Apple fan-boy is hard work. Whenever a new iPhone is out, the old ones disappear from the market and you’re outdated. Whenever a new MacBook arrives, the older ones look so out-dated that all your (fan-boy) friends will know you’re not keeping up. If creating a niche to capture people’s naivety and profit from it is fooling them, then Apple has been fooling those same people for decades and won’t stop now. That has made them the second biggest company in the world (losing only to an oil company); nobody can argue with that fact.

iPhones have lesser hardware than most of the new Android phones, less functionality, less compatibility with the rest of the world. The new MacBook Air has an Intel chip several years old, lacks connectivity options, and soon won’t run Flash, Java or anything else Steve Jobs dislikes when he wakes up from a bad dream. But that doesn’t affect the fan-boys a bit. See, back in the days when Microsoft had fan-boys too, they were completely oblivious to the horrendous problems the platform had (viruses, bugs, reboots, memory hogging etc) and they would still mock you for not being in their group.

That’s the same with Apple fan-boys and always has been. I had an Apple ][, and I liked it a lot. But when I saw an Amiga I was baffled. I immediately recognised the clear superiority of the architecture. The sound was amazing, the graphics were impressive and the games were awesome (all that mattered to me at the time, tbh). There was no comparison between an Amiga game and an Apple game back then, and everybody knew it. But Apple fan-boys were all the same, and there were fights in BBSs and meetings: Apple fan-boys on one side, Amiga fan-boys on the other, and the pizza would be gone long before the discussion cooled down.

Nice little town, invaded

But today, reality is a bit harder to swallow. There is no PowerPC, or Alpha, or even Sparc now. With Oracle owning Sparc’s roadmap, and given what they are doing to Java and OpenOffice, I wouldn’t be surprised if Larry Ellison one day woke up and decided to burn everything down. Now there are only two major players from the small to the huge markets: Intel and ARM. With ARM only at the small and smaller end, that leaves Intel with all the rest.

MacOS is no longer an OS per se. Its underlying sub-system is based on (or ripped off from) FreeBSD (a robust open source unix-like operating system). As it happens, FreeBSD is so similar to Linux that it’s not hard to re-compile Linux applications to run on it. So why should it be hard to run Linux applications on MacOS? Well, it’s not, actually. With the same platform and a very similar sub-system, re-compiling a Linux application for the Mac is a matter of finding the right tools and libraries; everything else follows the natural course.

Now, this is dangerous! Windows has the protection of being completely different, even on the same platform (Intel), but MacOS doesn’t, and there’s no way to keep the penguin’s invasion at bay. For the first time in history, Apple has opened its niche to other players. In Apple terms, this is the same as killing itself.

See, capitalism is all about keeping control of the market. It’s not about competition or innovation, and it’s clearly not about re-distribution of capital, as the French suggested in their revolution. Although Apple never fought Microsoft or Linux directly, they had their market well under control, and that was the key to their success. With very clever advertising and average-quality hardware, they managed to build an entire universe of their own and attract a huge crowd that, once in, would never look back. But now that bubble has been invaded by the penguin commies, and there’s no way for them to protect that market as they’ve done before.

One solution to rule them all

In a very good analysis of the Linux “dream”, this article suggests that it is dead. If you look at Linux as if it were a company (following the success of Canonical, I’m not surprised), he has a point. But Linux is not Canonical, nor a dream, and it’s definitely not dead.

In the same vein, you could argue that Windows is dead. It hasn’t grown for a while, and Vista destroyed confidence and moved more people to Macs and Linux than ever before. In the same way, more than 10 years ago, a common misconception among Microsoft’s fan-boys was that the Mac was dead. Its niche was too small, the hardware too expensive and incompatible with everything else. Windows is in the same position today, but it’s far from dead.

But Linux is not a company; it doesn’t fit the normal capitalist market analysis. Remember that Linux hackers are commies, right? It’s an organic community; it doesn’t behave like a company or anything capitalism would like to model. This is why it has been so wrongly predicted so many times (Linux is dead, this is the year of Linux, Linux will kill Windows, Mac is destroying Linux and so on). All of this is pure bollocks. Linux’s growth is organic, not exponential, not bombastic. It won’t kill other platforms. It never has and never will. It will, as it has done so far, assimilate and enhance, like the Borg.

If we had had Linux in the French revolution, the people would have had a better chance of getting something out of it, rather than leaving all the glory (and profit) to the newly founded bourgeois class. Not because Linux is magic, but because it embraces change, expands the frontiers and exposes the flaws in the current systems. That alone is enough to keep the existing software in constant check, which is vital to software engineering and will never end. Linux is, in a nutshell, what’s driving innovation on all other software fronts.

Saying that Linux is dead is the same as saying that generic medication is dead because it doesn’t make a profit or hasn’t taken over big pharma’s markets. It simply misses the point, and only shows that people are still in the same mindset that put Microsoft, Yahoo!, Google, IBM and now Apple where they are today, all afraid of the big bad wolf, which is not big, nor bad, and has nothing to do with a wolf.

This wolf is, mind you, not Linux. Linux and the rest of the open source community are just the only players (and Google, I give them that) that are not afraid of that wolf, even though, according to business analysts, they should be, in order to play nice with the rest of the market. The big bad wolf is free content.

Free, open content

Free as in freedom is dangerous. Everybody knows what happens when you post on Facebook about your boss being an ass: you get fired. The same would happen if you said it out loud at a company lunch, wouldn’t it? Running random software on your machine is dangerous; everybody knows what can happen when viruses invade your computer, or rogue software starts stealing your bank passwords and personal data.

But all systems now are very similar, and the companies of today are still banging their heads against the same wall as 20 years ago: lock down the platform. 20 years ago that was quite simple, and really just a reflection of how any computer was built. Today, it has to be actively done.

It’s very easy to rip a DVD and send it to a friend. Today’s broadband speeds allow you to do that quite fast, indeed. But your friend hasn’t paid for it, and the media companies felt threatened. They created DRM. Intel has just acquired McAfee to put security measures inside the chip itself. This is the same as DRM, but at a much lower level. Instead of dealing with the problem, those companies are actually delaying the solution and only making the problem worse.

DRM is easily crackable. It has been shown over and over that no DRM scheme (software or hardware) has so far resisted the will of the people. There are far more ingenious people outside the companies that do DRM than inside; therefore, it’s impossible to come up with a solution that will fool all outsiders, unless they hire them all (which will never happen) or kill them all (which could happen, if things keep the same pace).

Unless those companies start looking at the problem as the new reality, and create solutions that work in this new reality, they won’t make any money out of it. DRM is not just bad, it’s very costly, and it hampers progress and innovation. It kills what capitalism loves most: profit. Take all the money spent on DRM schemes that were cracked a day later, all the money the RIAA spent on lawsuits, all the trouble of creating software solutions to lock all users in, and the drop-out rate when a better solution appears (see Google vs. Yahoo), and you get the picture.

Locked down society

Apple’s first popular advertisement was the one mocking Orwell’s 1984 and how Apple would break the rules by bringing something completely different that would free people from the locked-down world they lived in. Funny, though, how things turned out…

Steve Jobs says that Android is a segmented market, and that Apple is better because it has only one solution for every problem. They said the same thing about Windows and Linux: that segmentation is what’s driving their demise, that everybody should listen to Steve Jobs and use his own creations (one for each problem), and that the rest is just too noisy, too complicated for really cool people to use.

I don’t know you, but for me that sounds exactly like Big Brother’s speech.

With DRM and control of the App Store, Apple has total freedom to put in, or take out, whatever they want, whenever they want. It has happened and will continue to happen. They never put Flash on iPhones, not for any technical reason, but just because Steve Jobs doesn’t like it. They’re now taking Java out of the Mac “experience”, again, just for kicks. Microsoft at least put .NET and Silverlight in place; Apple simply takes things out, with no replacements.

Oh, how the Apple fan-boys like it. They applaud, they defend it with their lives, even with no knowledge of why, nor even whether there is any reason for it. They just watch Steve Jobs’ speeches and repeat them, word for word. There is no reason, and those people sound dumber every day, but who am I to say so? I’m the one outside the group, I’m the one who has no voice.

When that happened with Microsoft in the 90’s, it was hard to take. The numbers were more like 95% of them and 1% of us, so there was absolutely no argument that would make them understand the utter garbage they were talking. Today, Apple’s market is still not that big: the Apple fan-boys are indeed making Apple the second biggest company in the world, but they still look like idiots to the other 50+% of the world.

Yahoo!’s steps

Yahoo has shown us that locking users down, stuffing them with ads and completely ignoring the upgrade of their architecture for years is not a good path. But Apple (as did Yahoo) thinks they are invulnerable. When Google exploded with their awesome search (I was on Yahoo’s search team at the time), we had a shock. It was not just better than Yahoo’s search, it really worked! Yahoo was afraid of being the copy-cat, so they started walking down other paths and, in the end, it never really worked.

Yahoo, which started as a search company, now runs Microsoft’s lame search engine. This is, for me, the utmost proof that they failed miserably. The second biggest thing Yahoo had was email, and Google does it better. Portals? Who needs portals when you have the whole web at your fingertips with Google search? In the end, Google killed every single Yahoo business, one by one. Apple is following the same path, locking themselves out of the world, just waiting for someone to come along with a better and simpler solution that actually works. And they won’t listen, not even when it’s too late.

Before Yahoo! there was IBM. After Apple there will be others. Those that don’t accept reality as it is, that stick with their old ideas just because they have worked so far, are bound to fail. Of course, Steve Jobs has made all the money he could, and he’s not worried. Nor are David Filo or Jerry Yang, Bill Gates or Larry Ellison. And this is the crucial part.

Companies fade because great leaders fade. Communities fade when they’re no longer relevant. The Linux community is still very much relevant and won’t fade any time soon. And, by its metamorphic nature, it’s very likely that the free, open source community will never die.

Companies had better get used to it, and find ways to profit from it. Free, open content is here to stay, and there’s nothing anyone can do to stop it. Being dictators is not helping the US patent and copyright system, not helping Microsoft or Intel, and it definitely won’t help Apple. If they want to stay relevant, they had better change soon.


The Ubuntu Way
May 16th, 2010 under OSS, rengolin, Software, Unix/Linux. [ Comments: 2 ]

It’s been five years now since I switched from Debian to Ubuntu, primarily for the updated software and the radical changes in the user interface, and quite a few things have been constant all this time. On Debian, I always used the unstable branch. It was the obvious choice for the non-mission-critical desktop environment I always needed. But even unstable, it lacked a bit of risk-taking, which sometimes left me compiling (or downloading binary) applications by myself, working around the package management system.

With Ubuntu, it’s the exact opposite. The ongoing lack of support for nVidia and ATI boards, PulseAudio and the new Plymouth splash are good examples of major failures in deploying technology that is still too young to be in a distribution, especially a Long-Term-Support one. The recent rumours about replacing Firefox with Chrome point to an even more critical change, since the whole community around Firefox (add-ons, plug-ins, bookmarklets, etc) cannot easily be migrated to Chrome or any other major browser. But this is all about the Ubuntu Way.

Identity

Ubuntu, like many other Linux distributions (especially Debian), has built its identity around the OS that most of its users share. It’s organic, and grows with time and feedback from the users, together with the directions the “board” takes on what goes in and what goes out. The original Linux community (back in the mid-90’s) was fairly homogeneous in that respect, with most distributions being yet another collection of packages, be it RPM, DEB, tarballs or anything else. With time, strong feelings pulled some distributions apart and specialized others. Debian, for instance, became overly preoccupied with licensing issues (nothing other than open source was allowed), while RedHat became more enterprise focused, flooded with third-party libraries, commercial products and a licensing scheme that was more like Microsoft’s than anything else.

Still, within the Debian community, some people (like me) thought that the release schedule was too long and the licensing rules too narrow to produce a really useful desktop replacement for commercial systems like MacOS. Indeed, after a few releases, Ubuntu has shown that it can replace them for most uses and most users. I, as a Linux user of so many years, welcomed the ease of use of a MacOS without the lock-downs and lame packaging systems.

But they went further, and decided to be very (very) much like Apple. Initially, the Linux way was to offer everything there was for everything. There were dozens of instant messengers, browsers, picture viewers, consoles, etc, all installed by default (or to pick from a selection of thousands of packages during the installation process), which was a major pain. Recently, Ubuntu has provided an installation process easier than Windows’ and MacOS’, and for every application type there is only one default option. That is what has become the Ubuntu Identity.

Taking Risks

To keep that identity, and still progress as fast as they (and I) would like, one has to take risks. I have to say that, for the most part, they were right on the spot. Some failures (as mentioned) are expected to happen, and you are left with the consequences of those risks. For a company with such a tight budget (and such high expectations), there is little they can do differently. If they had a bigger budget, they could spend more time adapting the proprietary graphics drivers and the update system (which never works on fine-tuned machines), but they don’t. And based on how updates work on Windows and MacOS (i.e. they don’t), I’m not surprised by Canonical’s failures.

I like Firefox, ALSA and Pidgin, but if the overall experience is more stable (and complete) with Empathy, Chrome and PulseAudio, so be it. We’re past the point of complaining about personal preferences; a wider viewpoint matters more. I’m too old to rant about how pitiful the new splash screen looks when using the ATI proprietary drivers; I just want to install and run. As long as my VIM is working and there is a browser and an IM to use, I’m happy. I don’t care that Gimp is not included by default; I do dislike that GCC is not, but I understand the reasons and always install it first thing when I get a new system.

That’s the Ubuntu identity and the risks Canonical takes to move the desktop experience forward. As unstable Debian people used to say, that’s the risk of being on the edge…

Upgrades never work

So, I stated that upgrades never work for fine-tuned machines, and that has been my experience to this day. In the beginning, I thought it was because Ubuntu was still immature, but today I had to roll back the Lucid installation I did yesterday because of major incompatibility issues, mainly with the ATI proprietary graphics driver (splash screen and return from sleep).

So far, the only way I can upgrade Ubuntu is by installing a complete new copy of it every time and applying the backed-up changes to configuration files manually after all is done. It may seem like a lot of work, but every time I try to upgrade, I end up installing from scratch and applying the few manual tunings afterwards anyway. Now that I know exactly what I have to change and where (after years of doing it), it takes me roughly 15 minutes to customise it.

My configuration is in such a state that it takes zero maintenance and little backup disk space, and the installation process is easy. The magic is simple.

Preparation

This is one thing I recommend for any system, Linux, Windows or MacOS: split the disk into at least two partitions. One, around 50-80 GB, for your system, preferably the first one (a primary partition). The other(s), taking up the rest of the hard drive, for your data/home directories. If using Linux, of course, reserve (at the end) some space for swap (4GB is more than enough, even if you have that much RAM or more). Swap is a safety measure and should not be used under any normal circumstance.
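
As a minimal sketch of that layout with parted (the device /dev/sda and the sizes are examples only, and this re-creates the partition table, wiping the disk, so adapt before running):

$ sudo parted -s /dev/sda mklabel msdos                          # new partition table (destroys existing data)
$ sudo parted -s /dev/sda mkpart primary ext4 1MiB 80GiB         # system partition
$ sudo parted -s /dev/sda mkpart primary ext4 80GiB -4GiB        # data/home partition
$ sudo parted -s /dev/sda mkpart primary linux-swap -4GiB 100%   # swap, at the end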

Daily Usage

Back up your home directory often, including personal configuration files, IM history, panel short-cuts, everything. Apart from your data, the rest might cause some complications when upgrading the user environment (Gnome, KDE), but that’s minor and can be overcome easily. It will help you in case things go awry in your update/replace process. A cron job or manual invocation of a script is recommended for that.
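
A minimal sketch of such a script plus the cron entry (the server name, paths and schedule are all assumptions; rsync over SSH is just one common choice):

$ cat ~/bin/backup-home.sh
#!/bin/sh
# Mirror the home directory to a backup server, deleting files that no longer exist locally
HOST=backup.example.com
DEST=/backups/$(hostname)
rsync -aH --delete --exclude='.cache/' "$HOME/" "$HOST:$DEST/home/"

$ crontab -l | grep backup
0 3 * * * $HOME/bin/backup-home.sh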

Also, remember to back up (manually, by copying) every system configuration file you change. Since most configuration on Linux lives in text files, that part is very easy. It has to be done manually because, as it’s so simple (you shouldn’t be changing that many configuration files), you can do a detailed comparison between what’s there and what you want to replace or add. This will be important for your post-update process.
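
For example, a small sketch of that habit (the backup directory and /etc/fstab are just illustrative choices):

$ sudo cp /etc/fstab ~/backup/etc/fstab   # keep a copy of each file you touch
$ diff -u ~/backup/etc/fstab /etc/fstab   # after the re-install, review exactly what changed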

Additionally, any non-essential data can be moved to a shared disk (with appropriate backup), accessible over the network. This way, not only do you avoid backing up all your data (photos, videos, documents) on every install (which could take days), but it will also be available from other computers while you upgrade your machine, so you can continue working on it as soon as your machine is ready.

Upgrade

Upgrades never work, especially if you have changed the configuration. Some systems evolve and can’t read old configurations properly, new systems won’t read other systems’ configurations, and migration scripts never work properly on modified files. What’s worse, as Ubuntu has its own identity, the new systems will work better (or only work) with other new systems. So the integration between the new systems and your old, changed systems will most likely fail silently. PulseAudio is the best example of that conflict.

To update, simply re-install the new version from CD (USB, or whatever) into the OS partition. So far, they have managed to make the upgrade to new systems pretty easy, as long as you discard your old ones. Empathy imports Pidgin accounts (and history), and all basic systems are properly configured if you do a fresh install. As wireless network passwords, panels, personal short-cuts and other configurations are stored in your home directory, you just have to log in to see your old desktop, just the way it was.

The few things that aren’t installed (like GCC, VIM, gstreamer plugins) can easily be installed if you keep a list of things you always install in a file (in your home dir), like the build-essential and ubuntu-restricted-extras bundles. VPN, printer and share configurations can easily be copied over from your backup as soon as you have installed, and an apt-get upgrade can be run to get the new stuff released since the CD was made.
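
A small sketch of that trick (packages.txt is a hypothetical file name, one package per line):

$ cat ~/packages.txt
build-essential
vim
ubuntu-restricted-extras

$ sudo apt-get update
$ xargs -a ~/packages.txt sudo apt-get install -y   # install everything in the list
$ sudo apt-get upgrade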

Roll back

What’s best about this strategy is that roll-backs are extremely easy. You can’t roll back a dist-upgrade using apt, but you can safely re-install the previous CD in case the new one breaks things so badly it becomes unusable. Like the new Ubuntu, which is still bad with the proprietary graphics drivers, while the open source ones are not nearly as good. So I just rolled back and will wait until it stabilises.

Instabilities occur most often in Long-Term-Support releases (like the current one). It might seem weird, but it’s pretty simple: they commit to three to five years of support, so they must pick new software that will last that long. The lifetime of open source projects is not great (still, longer than many commercial products), but a five-year commitment to software that is already five years old is a big risk. The Ext3 and Ext4 filesystems are a good example of this.

So, instead of shipping only the stable components, they change the interface and sub-systems radically and wait for them to stabilise, hoping that being in a production release will push developers to speed up the fixes. While not optimal for users, it’s more or less the only way they can go without breaking the promise of support when an application goes dead. This is why enterprise Linux is so expensive: companies require stability as well as support, and ultimately the distribution companies will have to maintain some of the dead applications for years, if not decades.

Not only is rolling back easy, but so is changing distribution entirely. As your data is distribution agnostic (Linux centric, not package-system centric), you can re-install virtually any other Linux distribution, as many times as you want, and keep the same look and feel.

Conclusion

In summary, it might look more complicated to use and maintain, but it’s not. Once your setup is done (partitions, backup scripts), the rest is pretty easy and quick. So far, I have stubbornly upgraded every release (since 7.04) to check whether it’s still harder than re-installing, and it has been the case for every release.

Also, if you have nVidia or ATI graphics boards, never upgrade less than a month after the release is out. I recommend upgrading at least two or three months later (mid-release), as by then most of the vendors will have updated their drivers to match the new Ubuntu Way.

Lastly, as I normally fine-tune my computers, I haven’t had a successful migration of any operating system to this day. I always try to upgrade, if available, and end up re-installing everything. That has been true with DOS, Linux and Windows since 1990, and I doubt it’ll change any time soon. It would take an intelligent installation process (which our computers are not yet able to run) to do that.

In the far future, it lies, then.


Humble Bundle
May 10th, 2010 under Digital Rights, Fun, Games, rengolin, Software, Unix/Linux. [ Comments: none ]

I’m not one to normally do reviews or ads, but this one is well worth doing. The Humble Bundle is an initiative hosted by Wolfire Studio, in which five other studios (2D Boy, Bit Blot, Cryptic Sea, Frictional Games and the recently joined Amanita Design) put their award-winning indie games into a bundle, together with two charities (the EFF and Child’s Play), and you can pay whatever you want, to be shared amongst them.

All games work on Linux and Mac (as well as Windows), are of excellent quality (I loved them) and separately would cost around 80 bucks. The average price paid for the bundle is around $8.50, but some people have paid $1,000 already. Funny, though, that they’re now separating the average per platform, and Linux users pay, on average, $14 while Windows users pay $7, with Mac in between. A clear message to professional game studios out there, isn’t it?

About the games, they’re the type that are always fun to play and don’t try to be more than they should. There are no state-of-the-art 3D graphics, blood, bullets and zillions of details, but they’re solid, consistent and plain fun. I already had World of Goo (from 2D Boy) and loved it. All the rest I discovered with the bundle and I have to say that I was not expecting them to be that good. The only bad news is that you have only one more day to buy them, so hurry, get your bundle now while it’s still available.

The games

World of Goo: Maybe the most famous of all, it’s even available for the Wii. It’s addictive and family friendly, and has many tricks and very clever levels to play. It’s a very simple concept: balls stick to other balls and you have to reach the pipe to save them. But what they’ve done with that simple concept is a powerful and very clever combination of physical properties that gives the game an extra challenge. What most impressed me was the way physics is embedded in the game. Things have weight and momentum, sticks break if the momentum is too great, some balls weigh less than air and float, while others burn in contact with fire. A masterpiece.

Aquaria: I thought this would be the least interesting of all, but I was wrong. Very wrong. The graphics and music are very nice and the physics of the game is well built, but the way the game builds up is the best part. It’s a mix of Ecco and Loom, where you’re a sea creature (mermaid?) and have to sing songs to get powers or to interact with the game. The more you play, the more you discover new things and the more powerful you become. Really clever and a bit more addictive than I was expecting… ;)

Gish: You are a tar ball (not the Unix tar, though) and have to go through dangerous tunnels to find your tar girl (?). The story is stupid, but the game is fun. You can be slippery or sticky to interact with the maze and some elements that have simple physics, which adds some fun. There are also some enemies to make it more difficult. Sometimes it’s a bit annoying, when it depends more on luck (getting the timing of many things right in a row) than on logic or skill. The save system is also not the best: I was on the fourth level and asked for a reset (to restart the fourth level), but it reset the whole thing and sent me back to the first level, which I’m not playing again. The music is great, though.

Lugaru HD: A 3D Lara Croft bloody kung-fu bunny style game. The background story is there more out of necessity than actual relevance. The idea is to go around skirmishing, cutting jugulars, sneaking and knocking down characters as you go along. The 3D graphics are not particularly impressive and the camera is not innovative, but the game has some charm for those who like a fight for the sake of fighting. Funny.

Penumbra: If you like being scared, this is your game. It’s rated 16+ and you can see very little while playing. But you can hear things growling and your own heart beating, and the best part is when you see something that scares the hell out of you, despair and give away your hideout. The graphics are good, simple but well cared for. The effects (blurs, fades, night vision, fear) are very well done and in sync with the game and story. The interface is pretty simple and impressively easy, making the game much more fun than the traditional FPSs I’ve played so far. The best part is, you don’t fight, you hide and run. It reminds me of Thief, where fighting is the last thing you want to do, with the difference that in Thief you could fight; in this one, you can’t really. If you fight, you’ll most likely die.

Samorost 2: It’s a Flash game, and that’s all I know. Flash is not particularly stable on any platform, and especially unstable on Linux, so I couldn’t make it run on the first attempt. For me, and most gamers I know, a game has to just work. This is why it’s so hard to play early open source games: you’re looking for a few minutes of fun, not for fiddling with your system. I have spent more time writing this paragraph than trying to play Samorost, and I will only try it again if I upgrade my Linux (hoping the Flash problem goes away by itself). Pity.

Well, that’s it. Go and get your Humble Bundle; it’s well worth it, plus you help some other people in the process. Helping indie studios is very important to me. First, it levels the playing field and helps them grow. Second, they tend to be much more platform independent, and decent games for Linux are scarce. Last, they tend to have the best ideas. Most game studios license one or two game engines and create dozens of similar games with them, hoping to get more value for their money. They also tend to stick with the current ideas that sell, instead of innovating.

By buying the bundle you are, at the very least, helping to have better games in the future.


2010 – Year of what?
January 29th, 2010 under Computers, Life, OSS, Physics, rengolin, Unix/Linux, World. [ Comments: 2 ]

Ever since 1995 I’ve heard the same phrase, and since 2000 I’ve stopped listening. It was already the year of Linux in 95 for me, so why bother?

But this year is different, and Linux is not the only revolution in town… By the end of last year, the first tera-electronvolt collisions were recorded in the LHC, getting us closer to seeing (or not) the infamous Higgs boson. Now, the NIF reports a massive 700 kilojoules delivered in a 10-billionth-of-a-second laser pulse which, if it continues on schedule, could lead us to cold fusion!!

The human race is about to finally put the full stop on the standard model and achieve cold fusion by the end of this year, who cares about Linux?!

Well, for one thing, Linux is running all the clusters being used to compute and maintain all those facilities. So, if it were for Microsoft, we’d still be in the stone age…

UPDATE: More news on cold fusion


Linux is whatever you want it to be
November 5th, 2009 under OSS, rengolin, Software, Unix/Linux. [ Comments: 5 ]

Normally, Linux Magazine has great articles: impartial, informative and highly technical. Unfortunately, not always. In a recent article, some perfectionist zealot stated that Ubuntu makes Linux look bad. I couldn’t disagree more.

Ubuntu is a fast-paced, fast-adapting Linux. I was one of the early adopters and I have to say that most of the problems I had with the previous release have been fixed. Some bugs slipped through, of course, but they were reported and quickly fixed. Moreover, Ubuntu has support from hardware manufacturers, such as Dell, and that makes a big difference.

Linux is everything

Linux is excellent for embedded systems, great for network appliances, wonderful for desktops, irreplaceable as a development platform, marvellous on servers and the only choice for real clusters. It also sucks when you have to find the configuration manually, it’s horrible for newbies, it breaks whenever a new release is out, and it takes longer to get new software (such as Firefox), but it also helps a lot with package dependencies, something that neither Mac nor Windows has managed to do properly over the past decades.

Linux is as great as any piece of software could be, but as horrible as every operating system released since the beginning of time. Some Linux distributions are stable, others not so much. Debian takes 10 years to release, and when it does, the software it contains is already 10 years old. Ubuntu tries to be a bit faster, but that obviously breaks a few things. If you’re fast enough fixing them, the early adopters will be pleased that they helped the community.

“Unfortunately what most often comes is a system full of bugs, pain, anguish, wailing and gnashing of teeth – as many “early” adopters of Karmic Koala have discovered.”

Like any piece of software, open or closed, free or paid, it takes time to mature. A real software engineer should know better: a system is only fully tested when it reaches the community, the user base. Google has used its own users (your granny too!) as beta testers for years and everyone seems to understand it.

Debian zealots hate Red Hat zealots and both hate Ubuntu zealots, who probably hate other zealots somewhere else. It’s funny to see how greatly opinions vary from one zealot clan to another about what Linux really is. All of them have great knowledge of what Linux is made of, but few seem to understand what Linux really is. Linux, or better, GNU/Linux, is a big bunch of software tied together by so many different points of view that it’s impossible to state in less than a thousand words what it really is.

“Linux is meant to be stable, secure, reliable.”

NO, IT’S NOT! Linux is meant to be whatever you make of it; that’s the real beauty. If Canonical thought it was ready to launch, it’s because they thought that whatever bug passed the safety net was safe enough for the users to grab and report, which we did! If you’re not an expert, wait for the system to cool down. A non-expert won’t be an “early adopter” anyway, that’s for sure.

Idiosyncrasies

Each Linux has its own idiosyncrasies; that’s what makes it powerful, and painful. The way Ubuntu updates/upgrades itself is particular to Ubuntu. Debian, Red Hat, Suse, all of them do it differently, and that’s life. Get over it.

“As usual, some things which were broken in the previous release are now fixed, but things which were working are now broken.”

One pleonasm after another. There is no new software without new bugs. There is no software without bugs. What was broken was known; what is new is unknown. How can someone fix something they don’t know about? When the users eventually tested it, found it broken and reported it, they fixed it! Isn’t it simple?

“There’s gotta be a better way to do this.”

No, there isn’t. Ubuntu is like any other Linux: Like it? Use it. Don’t like it? Get another one. If you don’t like the way Ubuntu works, get over it, use another Linux and stop ranting.

Red Hat charges money, Debian has uber-stable decade-old releases, Gentoo is for those who have a lot of time on their hands, etc. Each has its own particularities, and each is good for a different set of people.

Why Ubuntu?

I use Ubuntu because it’s easy to install, use and update. The rate of bugs is lower than on most other distros I’ve used, and the rate of updates is much faster and more stable than on some other distros. It’s a good balance for me. Is it perfect? Of course not! There are lots of things I don’t like about Ubuntu, but that won’t make me use Windows 7, that’s for sure!

I have friends who use Suse, Debian, Fedora, Gentoo, and they’re all about as happy as I am: not too much, but not too little. Each has its problems and its solutions; you just have to choose the ones that are best for you.


Gtk example
September 26th, 2009 under Devel, OSS, rengolin, Software, Unix/Linux. [ Comments: none ]

Gtk, the graphical toolkit behind Gnome, is very simple to use. It doesn’t have an all-in-one IDE such as KDevelop, which is very powerful and complete, but it does feature a simple and functional interface designer called Glade. Once you have the widgets and signals laid out, filling in the blanks is easy.

As an example, I wrote a simple dice-throwing application, which took me about an hour from installing Glade to publishing it on the website. Basically, my route was to apt-get install glade, open it, create a few widgets, assign some callbacks (signals) and generate the C source code.

After that, the file src/callbacks.c contains all the signal handlers you have to implement. Adding just a bit of code and browsing this tutorial to get the function names was enough to get it running.
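
To give an idea of what that looks like, here is a minimal sketch of the kind of handler you fill in inside the generated src/callbacks.c. The widget and handler names (roll_button, on_roll_clicked, result_label) are made up for this example rather than taken from my program, and lookup_widget() is the helper that Glade-generated projects ship in support.c:

#include <gtk/gtk.h>
#include <stdlib.h>

#include "callbacks.h"
#include "support.h"   /* Glade-generated helper, provides lookup_widget() */

/* Handler assigned in Glade to the "clicked" signal of a hypothetical
 * roll_button widget. */
void
on_roll_clicked (GtkButton *button, gpointer user_data)
{
  /* Find the label living in the same top-level window as the button. */
  GtkWidget *label = lookup_widget (GTK_WIDGET (button), "result_label");
  gchar text[32];

  /* Roll a six-sided die and display the result (srand() seeding
   * omitted for brevity). */
  g_snprintf (text, sizeof (text), "You rolled a %d", (rand () % 6) + 1);
  gtk_label_set_text (GTK_LABEL (label), text);
}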

Glade generates all the autoconf/automake files, so it was extremely easy to compile and run the mock window right at the beginning. The code I added afterwards was less than I would have written for a console-based application doing the same thing. Also, because of the code generation, I was afraid Glade would overwrite my already-edited callbacks.c when I changed the layout. Luckily, it was smart enough not to mess with my changes.

My example is not particularly good looking (I’m terrible with design), but that wasn’t the intention anyway. It’s been 7 years since I last built graphical interfaces myself and I had never done anything with Gtk before, so it shows how easy the library is to use.

Just bear in mind a few concepts of GUI design and you’ll have very few problems:

  1. Widget arrangement is normally not fixed by default (to allow window resizing). So work out how tables, frames, boxes and panes behave (which is a pain) or use fixed positions and disallow window resizing (as I did),
  2. Widgets don’t do anything by themselves; you need to assign them callbacks. Most signals have meaningful names (resize, toggle, set focus, etc.), so it’s not difficult to find them and create callbacks for them,
  3. Side effects (numbers appearing at the press of a button, for instance) are not easily done without global variables, so don’t be picky about that from the start. Work your way towards a global context later on, once the interface is stable and working (I didn’t even bother); there’s a small sketch of the global-variable approach after this list.
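
To illustrate the last point, here is a small, self-contained sketch (hand-wired rather than Glade-generated, and with made-up names) of the file-scope-global approach. It should build against Gtk+ 2 with pkg-config --cflags --libs gtk+-2.0:

#include <gtk/gtk.h>
#include <stdlib.h>

/* File-scope globals carry the "side effects" between callbacks. */
static GtkWidget *result_label;  /* set once in main(), used by the callback */
static int        roll_count;    /* state accumulated across clicks */

static void
on_roll_clicked (GtkButton *button, gpointer user_data)
{
  gchar text[64];

  roll_count++;
  g_snprintf (text, sizeof (text), "Roll #%d: %d",
              roll_count, (rand () % 6) + 1);
  gtk_label_set_text (GTK_LABEL (result_label), text);
}

int
main (int argc, char *argv[])
{
  GtkWidget *window, *box, *button;

  gtk_init (&argc, &argv);

  window       = gtk_window_new (GTK_WINDOW_TOPLEVEL);
  box          = gtk_vbox_new (FALSE, 6);
  button       = gtk_button_new_with_label ("Roll");
  result_label = gtk_label_new ("Press Roll");

  gtk_box_pack_start (GTK_BOX (box), result_label, TRUE, TRUE, 0);
  gtk_box_pack_start (GTK_BOX (box), button, FALSE, FALSE, 0);
  gtk_container_add (GTK_CONTAINER (window), box);

  /* Widgets do nothing by themselves: wire the signals to callbacks. */
  g_signal_connect (button, "clicked", G_CALLBACK (on_roll_clicked), NULL);
  g_signal_connect (window, "destroy", G_CALLBACK (gtk_main_quit), NULL);

  gtk_widget_show_all (window);
  gtk_main ();
  return 0;
}

Once the interface is stable you can tidy that global state into a context struct passed as the user_data argument, but as a first cut the globals are perfectly serviceable.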

If you’re looking for a much better dice rolling program for Linux, consider using rolldice, probably available via your package manager.


40 years and full of steam
August 23rd, 2009 under Computers, OSS, rengolin, Software, Unix/Linux. [ Comments: 3 ]

Unix is turning 40 and the BBC folks wrote a small article about it. What a joy to remember starting with Unix (AIX on an IBM machine) around 1994 and being overwhelmed by it.

At that time, the only Unix that ran well on a PC was SCO, and that cost a fortune, but there were others, not as mature, built on the same concepts. FreeBSD and Linux were the two that came into my sight, and I chose Linux because it was a bit more popular (and therefore easier to get help with).

The first versions I installed didn’t even have an X server, and I have to say I was happier than when using Windows. Partially because of all the open-source-free-software-good-for-mankind thing, but mostly because Unix has a power that is utterly ignored by other operating systems. So much so that Microsoft took good bits from FreeBSD (whose license allows it) and Apple rebuilt its operating system on top of a FreeBSD-derived core to make OS X. The GNU folks certainly helped my mood, as I could find on Linux all the power tools I had on AIX, often even more powerful ones.

The graphical interface was lame, I have to say. But in a way that was good: it reminded me of the interface I had used on Irix (SGI’s Unix), and that was OK. With time it got better and better, and by 1999 I was working with it and using it at home full time.

The funny thing is that now I can’t use other operating systems for too long, as I start missing functionality and eventually feel locked in, or at least extremely limited. Mac OS is said to be nice and tidy, with a full FreeBSD inside, but I still lacked agility on it, mainly when searching for and installing packages and configuring the system.

I suppose each OS is for a different type of person… Unix is for those who like to fine-tune their machines or who need its power (servers included), and Mac OS is for those who need something simple, where the biggest change they’ll ever make is the background colour. As for the rest, I fail to see the point, really.


FSF Settles Suit Against Cisco
May 20th, 2009 under Devel, Digital Rights, OSS, rengolin, Unix/Linux. [ Comments: none ]

The long dispute with Cisco has finally come to an agreement. For me, that means two things: first, the FSF are not trolls sucking money out of the big corporations over silly infringement claims, as some might fear. Second, they’re very patient, understanding and sometimes a bit too naive.

Why the fear?

When building embedded systems, or when you’re very close to the hardware (as Cisco is), using open source software can be a wise decision, as it’s quite likely to be stable and taken care of by a good bunch of people. Even though there are several ways of keeping your own software separate, so that it isn’t virally infected by the GPL, it’s not always possible, and you may have to reinvent the wheel because of that.

It’s not only the GPL: patents can also cause a whole lot of damage, and it seems that TomTom has decided to dive in head first alongside the Linux community.

So, although the fear is understandable, it’s more hysteria than anything based on actual facts. The FSF hasn’t had much to show in court, and that adds to the lawyers’ uncertainty, but it’s in cases like Cisco’s that they show far more maturity than most companies have shown recently, even mature companies like Microsoft.

Richard Stallman

The FSF is not only Stallman. Even though he’s the boss, the organization is a long list of people, sponsors, advisers (and now interns). It’s one thing to fear what RMS will do when he finds GPL code in your kitchen scale; what the FSF does as an organization is a completely different matter.

The Cisco case had been going on for several years. They offered help, they asked politely, they warned about the potential dangers and so on. A lot happened before they actually filed the suit, and they settled it nicely. This shows that they’re not just waiting for the next infringement to take you down; they actually care about their (and your) freedom.

The day the FSF starts acting stupid is the day people will walk away. It’s not like Microsoft, where you have no option; there are plenty of options out there: software, licences, partners, advisers, programmers, etc. GNU/Linux is not the only decent open source operating system; the BSDs are as good, sometimes better, especially in the embedded case.

The year of Linux

Every year since 1995 has been the year of Linux. For me it always was, but I can’t say the same for the rest of the world. Recently, Linux (and other open source software) has played an important role in defining the future of mankind, and more and more the Linux community feels that it’s their sweat and blood.

There is a great chance it’ll become the platform of all things in a very short time-frame. Cars, mobile phones, PDAs, netbooks, laptops, desktops, servers, clusters, spaceships. One platform to rule them all and in the darkness bind them, but if they play dumb, their glory might never see daylight.

Lots of people disagree with the new revisions of the GPL licence; they feel it bites the hand that feeds it. Many companies contribute back to open source regularly, and the new licence kind of broke that synergy. I personally think it’s excellent for some cases, but not for all. For instance, development tools should not be restricted, especially when it comes to platforms they can’t reach. Opening the platform is an obvious way around it, but not everything can be exposed, and they can’t figure out every implementation detail.

Drivers might also have trouble with GPLv3 for the same reason. Again, there are ways around it: the FSF recently opened a backdoor to allow proprietary plug-ins if they’re blessed, but that might not be suitable for every case.

Solution?

Sorry, not today. Stick to FreeBSD if you can’t cope with GPLv3, or find a way to coexist with the GCC exception and provide the source code you have to. If it’s not your core business, you could donate your code to the community, make it GPL too and treat your program as enabling technology, provided, of course, that your code doesn’t expose any patent or trade secret.

So, well, yeah. Each case is different; that’s the problem with being in the long tail.

