Trashing Chromebooks
June 5th, 2014 under Computers, Hardware, rengolin, Unix/Linux. [ Comments: 8 ]

At Linaro, we do lots of toolchain tests: GCC, LLVM, binutils, libraries and so on. Normally, you’d find a fast machine where you could build toolchains and run all the tests, integrated with some dispatch mechanism (like Jenkins). You’d also have a vast choice of hardware to choose from for each form-factor (workstation, server, rack mount), and you’d pick the fastest CPUs and a fast SSD with enough space for the huge temporary files that toolchain testing produces.

[image: tcwg-rack]

The only problem is, there aren’t any ARM rack servers or workstations. In the ARM world, you either have many cheap development boards, or one very expensive (100x more) professional development board. Servers, workstations and desktops are still non-existent. Some have tried (Calxeda, for example) and failed. Others are trying with ARMv8 (the new 32/64-bit architecture), but all of those are under heavy development, so not even alpha quality.

Meanwhile, we need to test the toolchain, and we have been doing it for years, so waiting for a stable ARM server was not an option, and still isn’t. A year ago I took on the task of finding the most stable development board that was fast enough for toolchain testing, and of filling a rack with it. Easier said than done.

Choices

Amongst the choices I had, Panda, Beagle, Arndale and Odroid boards were the obvious candidates. After initial testing, it was clear that Beagles, with only 500MB of RAM, were not able to compile anything natively without some major refactoring of the build systems involved. So, while they’re fine for running remote tests (SSH execution), they have very little use for anything else related to toolchain testing.

[image: Panda board]

Pandas, on the other hand, have 1GB of RAM and can compile any toolchain product, but the timing is a bit on the wrong side. Taking 5+ hours to compile a full LLVM+Clang build, a full bootstrap with testing would take a whole day. For background testing of the architecture, that’s fine, but for regression tracking and investigative work, they’re useless.

With the Arndales, we haven’t had such luck. They’re either unstable or deprecated months after release, which makes it really hard to acquire them in any meaningful volume for contingency and scalability plans. We were left, then, with the Odroids.

[image: Arndale board]

HardKernel makes very decent boards, with fast quad-A9 and octa-A15 chips, 2GB of RAM and a big heat sink. Compilation times were in the right ballpark (40~80 min), so they’re good for both regression catching and bootstrapping toolchains. But they had the same problem as every other board we tried: instability under heavy load.

Development boards are built for hobby projects and prototyping. They can normally run at quite high frequencies (1~2 GHz), but they’re designed for low-power, mostly stand-by usage. Toolchain testing, though, involves building the whole compiler and running the full test-suite on every commit, which puts them at 100% CPU usage, 24/7. Since build times are around an hour or more, by the time one build finishes, other commits have gone through and need to be tested, making it a non-stop job.

CPUs are designed to scale down their frequency when they get too hot, so throughout normal testing they stay stable at their operating temperature (~60C), and adding a heat sink only lets them run at higher frequencies while keeping the same temperature, so it won’t solve the temperature problem.

The issue is that, after running for a while (a few hours, days, weeks), the compilation jobs start to fail randomly (the infamous “internal compiler error”), in different places of different files every time. This is clearly not a software problem; but if it were the CPU’s fault, it would have happened a lot earlier, since the CPU reaches its operating temperature seconds after the test starts, yet the failures only show up hours or days into full-time running. The same argument rules out any trouble in the power supply, since it should have failed at the beginning, not days later.

The problem that the heat sink doesn’t solve, however, is the board’s overall temperature, which gets quite hot (40C~50C) and has negative effects on other components, like the SD reader and the card itself, or the USB port and the stick itself. Those boards can’t boot from USB, so we must use SD cards for the system, and even when using a USB external hard drive with a powered USB hub, we still see the failures, which hints that the SD card is failing under high load and high temperature.

According to SanDisk, their SD cards should be fine in that temperature range, but other parties might be at play, like the kernel drivers (which aren’t built for that kind of load). What pointed me to the SD card in the first place was that, when running solely on the SD card (for both system and build directories), the failures appear sooner and more often than when running the builds on a USB stick or drive.

In the end, with the best of them failing about once a week, none of those boards is fit to be a build slave.

Chromebook

That’s when I found the Samsung Chromebook. I had one for personal testing and it was really stable, so amidst all that trouble with the development boards, I decided to give it a go as a buildbot slave, and after weeks running smoothly, I had found what I was looking for.

The main difference between development boards and the Chromebook is that the latter is a product. It was tested not just for its CPU or memory, but as a whole. Its design evolved with the results of those tests, and it became more stable as it progressed. Also, the Linux drivers and the kernel were made to match, fine-tuned and crash-tested, so that it could be used by the worst kind of users. As a result, after one and a half years running Chromebooks as buildbots, I haven’t been able to make them fail yet.

But that doesn’t mean I have stopped looking for an alternative. Chromebooks are laptops and, as such, they’re built with a completely different mindset from a rack machine, and the list of modifications needed to make them fit the environment wasn’t short. Rack machines need to boot when powered up, give 100% of their power to the job and distribute heat efficiently under 100% load for very long periods of time. Precisely the opposite of a laptop design.

Even though they don’t fail the jobs, they did give me a lot of trouble: having to boot them manually, batteries overheating, and no Linux image easily deployable via network boot. The steps to fix those issues are listed below.

WARNING: Anything below will void your warranty. You have been warned.

System settings

To get your Chromebook to boot anything other than ChromeOS, you need to enter developer mode. With that, you’ll be able not only to boot from SD or USB, but also to change your partitions and get sudo access on ChromeOS.

With that done, you go to the console (CTRL+ALT+->), log in as user chronos (no password) and set the boot process as described in the link above. You’ll also need to run sudo crossystem dev_boot_signed_only=0 to be able to boot anything you want.
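For reference, the console side of it boils down to something like the lines below (dev_boot_usb is my assumption for the flag that enables SD/USB booting on this model; check the developer-mode documentation for yours):

$ sudo crossystem dev_boot_usb=1          # allow booting from SD/USB (CTRL+U)
$ sudo crossystem dev_boot_signed_only=0  # allow unsigned (custom) kernels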

The last step is to make your Linux image boot by default, so that when you power up the machine it boots Linux, not ChromeOS. Otherwise, you’ll have to press CTRL+U on every boot, and remote booting via PDUs will be pointless. You do that via cgpt.

You need to find the partition that ChromeOS boots from, by listing all of them and seeing which one booted successfully:


$ sudo cgpt show /dev/mmcblk0

The right partition will have the information below appended to the output:


Attr: priority=0 tries=5 successful=1

If it has tries left and was successful, this is probably your main partition. Move it back down the priority order (to 6th place) by running:


$ sudo cgpt add -i [part] -P 6 -S 1 /dev/mmcblk0

And you can give the SD card’s kernel partition a higher priority (so it’s tried first) by doing the same thing on /dev/mmcblk1.
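For example, assuming the successful ChromeOS kernel is partition 2 on the internal storage and your Linux kernel sits on partition 1 of the SD card (illustrative numbers only; yours will differ):

$ sudo cgpt add -i 2 -P 6 -S 1 /dev/mmcblk0   # demote the internal kernel
$ sudo cgpt add -i 1 -P 10 -S 1 /dev/mmcblk1  # promote the SD card's kernel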

With this, installing Linux on an SD card should get you booting Linux by default on the next boot.

Linux installation

You can choose among a few distributions to run on the Chromebooks; I have tested both Ubuntu and Arch Linux, which work just fine.

Follow their installation steps, insert the SD card in the slot and boot. You should get the Developer Mode screen and, if you wait long enough, it should beep and boot directly into Linux. If it doesn’t, it means your cgpt meddling was unsuccessful (been there, done that) and you’ll need a bit more fiddling. You can press CTRL+U for now to boot from the SD card.

After that, you should have complete control of the Chromebook, and I recommend adding your daemons and settings to the boot process (init.d, systemd, etc): turn on the network, start the SSH daemon and whatever other services you require (like buildbots). It’s also a good idea to change the governor to performance, but only if you’re going to use it for full-time heavy load, and especially if you’re going to run benchmarks. For the latter, though, you can do it on demand; there’s no need to set it at boot time.
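On the services side, as an illustration only (the user name, paths and the buildslave invocation are assumptions for a 2014-era Debian/Ubuntu image with buildbot-slave installed), something as crude as an rc.local entry is enough to bring the build slave up unattended:

# excerpt from /etc/rc.local (or an init.d script), run at the end of boot
service ssh start                            # make sure we can get in remotely
su - buildbot -c 'buildslave start ~/slave'  # start the buildbot slave daemon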

To change the governor:

$ echo [scale] | sudo tee /sys/bus/cpu/devices/cpu[N]/cpufreq/scaling_governor

scale above can be performance, conservative, ondemand (the default), or any other governor your kernel supports. If you’re running benchmarks, switch to performance beforehand and back to ondemand afterwards. Replace [N] with the CPU number (starting at 0) and do it for all CPUs, not just one, as in the loop below.
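A small loop can set all of them in one go; this is just a sketch, guarded (like the freq script further down) in case a CPU has no cpufreq directory:

$ for cpu in /sys/bus/cpu/devices/cpu*; do
    [ -e $cpu/cpufreq/scaling_governor ] && \
      echo performance | sudo tee $cpu/cpufreq/scaling_governor
  done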

Two other handy scripts report the temperatures and frequencies of the CPUs:

$ cat thermal
#!/usr/bin/env bash

# Print the temperature of every thermal zone, in Celsius
ROOT=/sys/devices/virtual/thermal

for dir in $ROOT/*/temp; do
  temp=$(cat $dir)                                  # sysfs reports millidegrees
  temp=$(echo $temp/1000 | bc -l | sed 's/0\+$/0/') # to degrees, trim zeroes
  device=$(basename $(dirname $dir))
  echo "$device: $temp C"
done

$ cat freq
#!/usr/bin/env bash

# Print the current frequency of every CPU, in GHz
ROOT=/sys/bus/cpu/devices

for dir in $ROOT/*; do
  if [ -e $dir/cpufreq/cpuinfo_cur_freq ]; then
    freq=$(sudo cat $dir/cpufreq/cpuinfo_cur_freq)       # sysfs reports kHz
    freq=$(echo $freq/1000000 | bc -l | sed 's/0\+$/0/') # to GHz, trim zeroes
    echo "$(basename $dir): $freq GHz"
  fi
done

Hardware changes

[image: batteries]

As expected, the hardware was also not ready to behave like a rack server, so some modifications are needed.

The most important thing you have to do is remove the battery. First, because you won’t be able to boot the machine remotely with a PDU if you don’t; but more importantly, because the heat from constant usage will destroy the battery. Not just in the sense of making it stop working, which we wouldn’t care about, but in that it’ll slowly release gases and bloat, which can be a fire hazard.

To remove the battery, follow the iFixit instructions here.

Another important change is to remove the lid magnet that tells the Chromebook not to boot on power. The iFixit post above doesn’t mention it, but it’s as simple as prying the monitor bezel open with a sharp knife (no screws), locating the small magnet on the left side and removing it.

Stability

With all these changes, the Chromebook should be stable for years. It’ll be possible to power cycle it remotely (if you have a PDU), boot it directly into Linux and start all your services with no human intervention.

The only thing you won’t have is serial access to re-flash it remotely if all else fails, as you have with most (all?) rack servers.

Contrary to common sense, Chromebooks are much better build slaves than any development board I’ve ever tested, and in my view that’s mainly due to the amount of testing they have gone through, given that they’re consumer products. Now I need to test the new Samsung Chromebook 2, since it’s got the new Exynos Octa.

Conclusion

While I’d love to have more options, with different CPUs and architectures to test, it seems the Chromebooks will be the go-to machines for the time being. And with all the glory going to ARMv8 servers, we may never see an ARMv7 board run stably in a rack.


Amazon loves to annoy
June 27th, 2013 under Digital Rights, Gadgets, rengolin, Software, Unix/Linux, Web. [ Comments: none ]

It’s amazing how Amazon will do everything in their power to annoy you. They will sell you DRM-free MP3 songs, and even allow you to download the full version on any device (via their web interface) for your own personal use: in the car, at home or on the move. But not without a cost, no.

For some reason, they want total control of the process, so if they allow you to download your music, it has to be their way. In the past, you had to download the song immediately after buying it, with a Windows-only binary (why?), and you had only one shot. If the link failed, you just lost a couple of pounds. To be honest, that happened to me, and customer service were glad to re-activate my “license” so I could download it again. Kudos for that.

Question 1: Why did they need external software to download the songs when they had a full-featured on-line e-commerce solution?

It’s not hard to sell on-line music; other people have been doing it for years, and not in that way, for sure. Why was it so hard for Amazon, the biggest e-commerce website on Earth, to do the same? I wasn’t asking them to revolutionise the music industry (I leave that to Spotify), just to do what others were already doing at the time. Apparently, they just couldn’t.

Recently, it got a lot better, which is why I started buying MP3 songs from Amazon. They now have a full-featured MP3 player on the web! There is also an Android version of it that is a little confusing but unobtrusive. The web version is great: once you buy an album you go straight to it and can start listening to the songs right away.

Well, I’m a control freak, and I want to have all the songs I own on my own server (and its backup), so I went to download my recently purchased songs. It’s not that simple: you can download all your songs on Windows and Mac… not Linux.

Question 2: Why on Earth can’t they make it work on Linux?

We’re not talking about Microsoft or Apple here. This is Amazon, a web company that is supposed to know how JavaScript works, right? Why create executables, ActiveX, Silverlight or whatever those platforms demand from their developers, when they could do the same with plain JavaScript? The era when JavaScript was too slow and Flash rocked ended, like, 10 years ago. There simply is no excuse.

Undeterred, I knew the Android app would let me download the songs and, as an added bonus, anything downloaded by AmazonMP3 would be automatically added to the Android music playlists, so that both programs could play the same songs. That was great, of course, until I wanted to copy them to my laptop.

Running (the fantastic) ES File Explorer, I listed the folders consuming most of the SD card, found the amazonmp3 folder and saw that all my songs were in there. Since Android changed the file-system, and I can’t seem to mount it correctly via MTP (noob), I decided to use ES File Explorer (again) to select all the files and copy them to my server via its own interface, which is great for that sort of thing. Well, it turns out it’s not that simple. Again.

Question 3: Why can I read and delete the songs, but not copy them?

What magic Linux permission lets me listen to a song (read) and delete the file (write), but not copy it to another location? I can’t think of a way to do that natively on Linux; it must be some Android magic, to allow for DRM crap.

By this time I was already getting nervous, so I just fired up adb shell and navigated to the directory, and when I listed the files, adb just logged out. I tried again, and it just exited. No error message, no log, no warning; it just shut down and dropped me back at my own prompt.

This was getting silly, but I had the directory, so I just ran adb pull /sdcard/amazonmp3/ and found that only the temp directory came out. What the hell is this sorcery?!

Question 4: What kind of magic stops me from copying files, or even listing files from a shell?

Well, I knew it was something to do with the Amazon MP3 application itself; it couldn’t be something embedded in Android, or the activists would crack on it until Google ceded, or at least provided means for disabling the DRM crap from the core. To prove my theory, I removed the AmazonMP3 application and, as expected, I could copy all my files via adb to my server, where I could then back them up.

So, if you use Linux and want to download all your songs from the Amazon MP3 website, you’ll have to:

  1. Buy songs/albums on Amazon’s website
  2. Download them via AmazonMP3 Android app (click on album, click on download)
  3. Un-install the AmazonMP3 app
  4. Get the files via: adb pull /sdcard/amazonmp3/
  5. Re-install the AmazonMP3 app (if you want, or to download more songs)
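The adb side of it (steps 3 to 5) looks roughly like this; the package and file names are my assumptions, so check the exact package name first, as the first line does:

$ adb shell pm list packages | grep -i amazon  # find the exact package name
$ adb uninstall com.amazon.mp3                 # step 3: un-install the app
$ adb pull /sdcard/amazonmp3/                  # step 4: copy the songs over
$ adb install AmazonMP3.apk                    # step 5: re-install, if you want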

As usual, Amazon was a pain in the back with something that should be really, really simple for them to do. And, as usual, a casual user finds their way to getting what they want, what they paid for, what they deserve.

If you know someone at Amazon, please let them know:

We’re not idiots. We know you know JavaScript, we know you use Linux, and we know you can create an amazing experience for all of us. Don’t treat us like idiots. If your creativity is lacking, just copy the design and implementation from someone else, we don’t care. We want solutions, not problems.


Uno score keeper
March 31st, 2013 under Devel, OSS, rengolin, Software. [ Comments: none ]

With spring not coming any time soon, we had to improvise during the Easter break and play Uno every night. It’s a lot of fun, but it can take quite a while to find a piece of clean paper and a pen that works around the house, so I wondered if there was an app for that. It turns out, there wasn’t!

There were several apps to keep card game scores, but every one was specific to a game, and they had ads and wanted access to the Internet, so I decided it was worth writing one myself. Plus, it would finally teach me to write Android apps, something I had been putting off for years.

The App

Adding new players

Card Game Scores

The app is not just an Uno score keeper; it’s actually pretty generic. You just keep adding points until someone passes the threshold, when the poor soul will be declared a winner or a loser, depending on how you set up the game. Since we’re playing every night, even the 30 seconds I spent re-typing our names were adding up, so I made it save the last game in Android’s local store, and you can retrieve it via the “Last Game” button.

It’s also surprisingly easy to use (I had no idea), and if you go back and forth inside the app, it clears the game and starts a new one with the same players, so you can play as many rounds as you want. I might add a button to restart (or leave the app) when there’s a winner, though.

I’m also thinking about printing the names in order at the end (from winner to loser), and some other small changes, but the way it is, it’s good enough to advertise and see what people think.

If you end up using it, please let me know!

Download and Source Code

The app is open source (GPL), so rest assured it has no tricks or money involved. Feel free to download it from here, and get the source code at GitHub.


Hypocrisy in Hollywood
March 3rd, 2012 under Articles, Digital Rights, rengolin. [ Comments: none ]

Paralegal’s Peter Kim sent me this nice info-graphic about a short history of the media industry in Hollywood, and I thought I would share it with you.

I’m not a lawyer, but his site seems to have some good bite-sized information about copyright and other legal terms that we should all know if we are to avoid Big Brother in our society. Most of it obviously only applies to the US but, as we all know, US law has been extended to the rest of the world far too much: British hackers being extradited to the US, European citizens getting harassed by US media companies and Asian companies being shut down by the mighty power of Hollywood.

There are other info-graphics on the site that are worth looking at. Thanks for the tip, Peter.


Eventually everyone wants to be AOL
January 25th, 2012 under Articles, Corporate, Media, Politics, rengolin, Web. [ Comments: none ]

After a good week battling against SOPA, it’s time to go back to real life, to battling our own close enemies.

As has been reported over, and over, and over again (at least in this blog), Google is dragging itself towards becoming the giant dominant player, much like Yahoo! and AOL in previous times.

Lifehacker has a very good post on the same subject (from which the title of this post was deliberately taken), around Google+ and the new Search+ (or whatever they’re calling it), and how the giant is losing its steam and trying to solidify its market, where it’ll comfortably lie until the end of its days.

True, Google has a somewhat strong research department, and is working towards new TCP/IP standards, but much the same was done by Yahoo! in the past, towards FreeBSD, PHP and MySQL. Yahoo! actually hired top-notch BSD kernel hackers (like Paul Saab), MySQL gurus (like Jimi Cole and Zawodny) and the PHP creator, Rasmus Lerdorf. And they put a lot back into the community. But none of that is true revolution, only short reforms to keep themselves in power for a bit longer.

The issue is simple: Google doesn’t need to innovate as much as they did in the past, as did Yahoo! and AOL. Even Microsoft and Apple need to innovate more than Google, because they have to sell things. Software, hardware and services not only cost money and time, but they also age rapidly, and it’s not hard to throw loads of money at a project that is born dead (like Vista). But Google gets its money for free (so to speak); their users don’t pay a penny for their services. How hard is it to compete with that model?

Like Google, Yahoo! had the same comfort in their day. They had more users than anyone else, and that was as good as money. They did get money from ads, like Google, only not as efficiently. And that put them in a comfort zone that is hard not to get used to, which was their ultimate doom. This is why, after 25 or so years of failing, Microsoft is still a strong player. This is why Apple, after being in the shadows for more than 20 years, got to be the biggest tech company in the world. They must innovate at every turn.

Yahoo! displaced AOL and bought pretty much everyone else because they outsmarted the competition by doing the same thing, but cheaper and easier. Google repeated the same stunt on Yahoo!, and is now beginning to age. How long will that last? When the next big thing appears, making money even more easily, Google will be a giant. An arrogant, slow and blind giant. And natural selection will take care of them as quickly as it took care of AOL and Yahoo!


Why is no MMORPG good enough?
March 8th, 2011 under Devel, Fun, Games, rengolin, Software, Web. [ Comments: none ]

Massively multiplayer online role-playing games (MMORPGs) are not new. The first I remember playing is Legend of the Red Dragon (LORD), but before that, of course, I’d played other (real-life) multiplayer RPG games as well, and they were never quite the same thing.

Of course, at that time graphics cards couldn’t quite compete with our imagination (not to mention connection speeds), but a lot has improved on both fronts, and lots of very good-looking games have arrived. Still, there’s something missing. For years I couldn’t put my finger on it, but recently I think I nailed the issue: user-driven content.

The interface

Most MMORPGs are war games. World of Warcraft, LOTR Online, Vendetta, Star Trek Online, Regnum and so many others rely on war to be fun. Of course, all of them have side activities, some trade or silly missions, but the real fun is going to the battlefield.

If you look at it from the technical side of things, this is not surprising at all. Aside from good graphics, one of the hardest things to do in a game is a good interface. Very few games are perfect like Tetris. I gave Tetris to both my sons when they were about 2 years old, and after about a minute they were already playing. There is no need for instructions, tutorials or any training, and still today I find it quite fun. This is why it’s probably the most successful game in history.

But it’s all about the interface. It’s easy to create a seamless interface for Tetris. Try to create an interface for a strategy game that doesn’t require hours of training, or an interface for first-person games that actually lets you know where you are, or an interface for adventure games that doesn’t make you click on half a dozen places to get anything done. Many have tried, all have failed.

At the dawn of internet gaming, strategy games and Quake were dominant. That’s because the internet wasn’t so fast, and both were quite good at saving bandwidth. Quake had a special fix to avoid sending one packet for every bullet: it sent one packet when you pressed the trigger and another when you released it; the rest was up to the client.

But in war games, the interface is pretty much established. World of Warcraft didn’t invent anything; they just mixed Warcraft with Lara Croft (rubbish interface, by the way). Space-ship games got the interface from Wing Commander (Vendetta got it from W.C. Privateer), Lord of the Rings and Regnum mixed Second Life with old-style RPGs (even with the same deficiencies) and Star Trek Online copied from everyone above.

Now, the interface for a good strategy or adventure game is somewhat complicated. For a first-person 3D RPG, even worse. It doesn’t have to be mind-controlled to be easy, nor do you have to use 3D glasses or any immersion technology to be fun. Simplifying the game is one way, but then it’s even harder to make it interesting.

It’s the user, stupid!

I can see only one way to add value to a game that is simple but still fun: user-driven content.

You can enrich the world in which you’re immersed. For instance, Zynga is quickly gathering an impressive number of users by adding a lot of content. I don’t play those games, but people around me do, and I can see why they’re so addictive. There are so many things to do, and the frequency of updates is so impressive, that it’s almost impossible not to be drawn in.

You might think that the games are too simple and the graphics average, but the interface is extremely simple, the challenges are simple yet still challenging, and the number of paths you can choose for your character is enormous. In this case, the user experience is driven by their own choices. The content is so rich that each and every place is completely different from every other, driven solely by user choices.

Not many game companies (certainly not the indie ones) have the time or money to do that. So why are indie games generally more interesting than commercial ones? They go back to square one, simplify the game and optimise the interface. EA would never release something like Angry Birds or World of Goo, and yet those are the two best games I’ve played in a long time. But World of Goo is over, and Angry Birds requires constant attention (seasonal versions) to keep selling or making money (from ads).

They are missing the user content. It might not be their style, nor their objective, but that’s the difference between Deep Purple and a one-hit band.

Back to MMORPGs

So, of all the MMORPGs, there are many good-looking ones, some with good challenges and a lot of slaughtering fun, but I tire of playing them quite quickly. The last I played was Vendetta. Quite good graphically, it has a reasonably accurate physics simulation (what drew me to it), but not accurate enough to keep me playing. The content gets tiresome too quickly to be fun after a few hours, and even though I had 8 hours of free play, I spent less than two and dropped it.

This was always a pain, since Final Fantasy and the like: building up the character, hitting slugs for XP, fight-heal-run until you level up. Final Fantasy was better, though, as it would normally throw you in at level 10 or so, so you didn’t need too much levelling up. But why? Who likes beating 253 slugs to get 1000 experience points, reach level 2 and be able to equip a copper sword that doesn’t even cut a snail’s shell?

One of the best MMORPG experiences I’ve had recently was Regnum. This is a free game made in Argentina, with a lot of content, a good interface and a healthy community. They do the normal quest-based levelling-up technique, and it works quite well until level 15 or so. After that, it’s hitting bears, golems and phantoms for half a year until you can go outside and beat other users.

I went outside prematurely (couldn’t be bothered to wait) and the experience was also not great. Huge lag in big crowds, people disappearing in mid-air and reappearing a few metres away, etc. But the most annoying thing of all was the content. It was always the same fort we had to protect, always the same keep we had to attack, always the same talk about how our race is superior to your race, etc.

I haven’t seen Lord of the Rings (which sucks on Linux) or Star Trek Online (which doesn’t even run), but I bet they get a bit further. They’re there to compete with World of Warcraft, not Regnum, but the fate will be the same: boring.

So, after quite a big rant, how would I do it?

User content

First, a memory refresh: all the free first-person shooters I know of are remakes of Quake. They use the same engine, the same world builders, the same techniques. In Debian’s repositories you’ll find at least a dozen, all of them running on some version of Quake: Nexuiz, Tremulous, Open Arena, Urban Terror, etc.

Not only is the Quake engine open source, but it was built to be extensible, even before the source was opened by id. I made some levels for Doom; there were good editors at the time (1994?), and there are probably full development suites today.

The user has the power to extend, create, evolve and transform your game in ways that you never thought possible. To think that only the few people you hire are good enough to create game content is to be, to say the least, naive.

Now, all those games are segmented. Nexuiz levels don’t connect to Tremulous levels. That’s because the mods (as they’re called) are independent. To be able to play all those different games you need to download a whole lot of data (objects, music, game logic, physics settings, etc), and each game does it radically differently. Sarge with a rocket launcher would be invincible in most other Quake variants.

But that is, in my opinion, the missing link between short spurts of fun and long-lasting enjoyment. I want to be able to build my world (like Zynga), but in a world with free movement (like Quake), with quests (like MMORPGs) made by the users themselves (like no FP game I know of), in a connected world. It cannot penalise those that connect seldom, or those that connect through text terminals, Android phones or browsers, in any way.

As some games have started to understand, people like different things in games. I like building stuff and optimising structures; some like carnage; others like to level up or wait 45 minutes for a virtual beef pie to be ready. You cannot have all of that in one game if you’re the only content generator.

Also, if the engine is not open, people won’t be able to enhance it for their particular worlds. It doesn’t have to be open source, but it has to have a good API and an efficient plugin system. Tools to create mods and content are also essential, but the real deal is to give users the freedom to create their own versions of the game and be able to connect them all.

If every game is also a server, people can create their small worlds inside the bigger world that lives on the central server. A business strategy would then be to host those worlds for the people that really care about them: either free hosting in exchange for ads and content generation, or paid hosting for the more serious users. You can also easily sell content but, more importantly, you can create a whole marketplace of content driven by the users! Read Neal Stephenson’s Snow Crash and you’ll know what I mean.

I think Apple and Google have proven over and over that a market of apps generated by the users is very effective indeed! Intel is following the same path with their new app store, so yes, this is a good business strategy. But it’s also fun for a wider range of people, from game addicts to casual gamers, from heavy modders to passive Facebook users.

There are many ways of doing that; maybe not all of them will be successful, but at least from my point of view, until that day arrives, no game will be fun.


Dream Machine (take 2)
January 18th, 2011 under Computers, Gadgets, Hardware, rengolin, Technology, Thoughts. [ Comments: none ]

More than three years ago I wrote about the desktop I really wanted… Now it’s time to review that and make some new speculations…

Back Then

The key issues I raised back then were wireless technology, box size, noise, temperature and the interface.

Wireless power hasn’t progressed as much as I’d like, but all the rest (including wireless graphics cards) is already at full steam. So, apart from power, you don’t need any cables. Also, batteries are getting a bit better (not as fast as I’d like, either), so there is a stop-gap for wireless power.

Box size has reduced dramatically since 2007. Tablets are almost full computers, and with Intel and ARM battling for the mid-size form-factor, we’ll see radical improvements: lower power consumption, smaller sizes, much cooler CPUs and, consequently, no noisy fans. Another thing bound to reduce temperature and noise is the speed at which solid-state drives are catching up with magnetic ones.

But with regard to the interface, I have to admit I was a bit too retro. Who needs 3D glasses, or pointer hats to drive the cursor on the screen? Why does anyone need a cursor in the first place? Well, that brings me to my second dream machine.

Form Factor

I love keyboards. Writing for (int i=0; i<10; i++) { a[i] = i*M_PI; } is way easier than trying to dictate it and hoping the software gets the brackets, increments and semi-colons right. Even if the dictation software were super-smart, I would still feel silly dictating that. Unless I can think and have the computer create the code the way I want, there is no better interface than the keyboard.

Having a full-size keyboard also allows you to spare some space for the rest of the machine. Transparent CPUs, GPUs and storage are still not available (nor do I think they will be in the next three years), so putting them into the monitor is a no-go. Flat keyboards (like the Mac ones) are a bit odd and bad for ergonomics, so a simple ergonomic keyboard with the basic hardware inside would do. No mouse, of course, nor any other device except the keyboard.

A flat transparent screen, of some organic LED or electronic paper, with the camera built in at the centre of the screen, just behind it, so that on VoIP conversations you look straight into the eyes of the interlocutor. Transparent speakers are part of the screen as well: half-right and half-left are screen + speakers, with transparent wiring too. All of that wireless, of course. It should be extra-light, so a single arm can hold the monitor, not attached to the keyboard. You should be able to control the transparency of the screen, to switch between VoIP and video modes.

Hardware

CPUs and GPUs are so 10's. The best way forward is to have multi-purpose chips that can turn themselves (or their parts) on and off at will, and that can execute serial or vector code (or both) when required. So a 16/32-core machine, with heavily pipelined CPU/GPUs on multiple buses (not necessarily all active at the same time, or for the same communication purpose), could deal with on-demand gaming, video streaming, real-time ray-tracing and multi-threaded compilation without wasting too much power.

In a direct comparison, any of those CPU/GPU dies would have a fraction of the performance of a traditional monolithic chip; but given their inherent parallelism, and with the OS and drivers written around that assumption, a lot of power can be extracted from them. Also, with so many chips, you can selectively use only as much as you need for each task. So a game would use more GPUs than CPUs, probably with one or two CPUs handling the interface and sound. When programming, one or two CPUs can handle the IDE, while the others compile your code in the background. As all of this is on-demand, even during a game you could have a variable number of chips working as GPUs, depending on the depth of the world being rendered.

Memory and disk are getting cheaper by the second. I wouldn't be surprised if, in three years, 128GB of memory and 10TB of solid-state disk are the new minimum. All that, sitting nicely alongside the CPU/GPU bus, avoiding too many hops (NB+PCI+SATA+etc) to get the data in and out, would also speed up the storage and retrieval of information. You could probably do a 1s boot from scratch, without the need for sleep any more: just pure hibernation.

Network: again wireless, of course. It's been a reality for a while, but I don't expect it to improve considerably in the next 3 years. I assume broadband will increase a few percent, 4G will fail to deliver what it promises once the number of active clients reaches a few hundred, and the TV spectrum requires more bureaucracy than the world can handle. The cloud will have to wait a bit longer to get where hard drives are today.

Interface

A few designs have revolutionised interfaces in the last three years. I consider the pointer-less interface (decent touch screens, camera-aware devices) and the brain interface the two most important ones. Touch-screens are interesting, but they are cumbersome, as your limbs get in the way of the screen you're trying to interact with. The Wii-mote was a pioneer, but the MS Kinect broke the usability barrier. It's still in its early stages but, as such, it's a great revolution, and because of the unusual openness of Microsoft about it, I expect it to baffle even the most open-minded.

Brain interfaces, on the other hand, only began to be usable this year (and not by much); the combination of a Kinect, a camera that reads your eyes and a brain interface to control interactions with the items on the screen should be enough to work efficiently and effectively.

People already follow the mouse with their eyes; it's easy to teach them to make the pointer follow their eyes instead. But to remove uncertainties, and to get rid of the annoying cursor once and for all, you need a 3D camera that takes into account your position relative to the screen, and the position of other people, who could also interact with the screen in a multi-look interface and think together to achieve goals. That has applications from games to XP programming.

Voice control could also be used for more natural commands such as "shut up" or "play some jazz, will ya?". Nothing too complex, as that's another field that has been crawling for decades and hasn't had a decent sprint since it started...

Cost

The cost of such a machine wouldn't be too high, as the components are cheaper than today's complex motherboard designs, with their multiple interconnection standards, different manufacturing processes and tests (very expensive!). The parts themselves might be a bit expensive, but in such volumes (and with standardised production) the cost would be greatly reduced.

To the environment, not so much. If mankind continues with the ridiculous necessity of changing computers every year, a computer like that would fill up the landfills. The integration of the parts is so dense (e.g. monitor+cameras+speakers in one package) that it would be impossible to recycle it more cheaply than sending it to the sun to burn (not so bad an alternative).

But in life, we have to choose what's really important. A nice computer that keeps you in a chair for the majority of your life is more important than some pandas and bumblebees, right?


iPad
December 19th, 2010 under Digital Rights, Gadgets, Hardware, rengolin, Software. [ Comments: none ]

I got an iPad for Christmas. I didn’t buy it, I got it as a gift, and I have to say that it didn’t change my view of Apple a single bit.

A few years ago, while setting up an iBook for my sister, I had to configure it to speak French for her and still English for me, which was a pain. I wanted to run OpenOffice, only to learn that there wasn’t one. I couldn’t find the configuration files or anything that would resemble running a Unix system. Some people say I just didn’t look in the right places, that I could have used such-and-such software to make it the way I like it, but that pretty much killed Apple’s spirit of “just works” for me.

All in all, I was happy to go back to my old faithful Linux, and eventually bought a Dell Studio, now running vanilla Ubuntu 10.10. I used to be a hard-core Linux user, compiling the kernel, changing modules and fiddling with the configuration a lot, but one thing I’ve learned over all these years is that a desktop (or a laptop) has to just work. And having used an iBook and an iPad, the crème de la crème of usability and user experience, I have to say that, unfortunately, there is no miracle.

To summarize my experience with the iPad in a sentence: the hardware is good, the software is average, the philosophy is disgusting.

The hardware

The hardware is good, not great. First, it’s got a good CPU+GPU combo and enough memory to run some cool games without glitches. I was actually surprised by the quality of some games, and the screen resolution and the quality of the capacitive touch-screen are really something.

But the (stereo) speakers I have in my Nokia N95 are far better than the (mono) speaker in the iPad, even in quality (despite their smaller size). There is no camera, and there’s no easy way to connect it to the world, unless this “world” is made of Apples. You can only print to an AirPrint printer (or whatever that’s called), and you can only connect via Bluetooth to other iPads, maybe iPhones; it didn’t even recognise my Nokia.

Despite its lack of hardware, the case is pretty heavy, almost a kilogram. I normally think heavy is good, but in this case, holding the iPad while you play is quite tiring after a few minutes. I bought Need for Speed (quite a good game) and ended up using cushions to rest my elbows after a while, and a few minutes later I stopped playing because my arms were hurting.

All in all, the responsiveness and screen quality are really amazing; the rest is just not what I’d expect from Apple. However, I hear that since 2005 Apple has been slowly and steadily reducing the quality of the parts so as not to increase the price of the gadgets. It’s a clever move for a while, and it works even better with a fan base (instead of customers), but it’s bound to fail one day.

Finally, a minor thing. There is a side button for the volume, and one to mute. Problem is, it doesn’t work with everything (not even some things made by Apple). It’s muted and you can still hear the sounds. Even the volume control works while muted, but only for those applications that ignore the mute button; for the others, you need to un-mute to hear anything. I expected more from Apple…

The software

The second expectation I had of Apple was that the software would be amazing. I’m not talking about third-party App Store software, but bundled Apple software. How naive.

My experience of developing software for 20 years tells me that every piece of software is crap; people just don’t realise it because software engineers can hide the crap really well. Microsoft hides it behind zillions of useless features, Oracle hides it behind zillions of useless configuration steps, Google hides it in a secret box that only its advertisers can read, open source doesn’t hide it at all, and Apple hides it by giving poisoned apples to their fan base.

Because I’m not a fan boy, I’m unfortunately exposed to the naked truth: it sucks.

First, there is no Flash. I don’t care if HTML5 is better than Flash; the web has zillions of Flash applications, web pages, videos and animations, and that’s not going to change just because Apple doesn’t like it. YouTube has moved to HTML5 (probably because of Apple), but I can’t follow links to any other pages that have Flash. That sucks.

Second, Safari sucks. Try to use eBay in Safari. Try to sell something on eBay using Safari… I dare you. Many other pages break too, as in falling back to the home screen. Yesterday it locked the iPad completely. I was using the Twitter application, which redirected me to a YouTube page; when I opened it in Safari, it locked. When I closed Safari, the home screen was locked. I couldn’t click (tap?) on anything. Nothing worked, and you can’t turn it off (the way to go for non-Unix OSs), just make it sleep. After a few desperate taps on applications, I managed to open the YouTube application (which wasn’t running so far), hit another random video, and when it played, I closed the YouTube app and the rest started working again.

It breaks so often, and in such unpredictable ways, that now I only use it for Gmail and Google Reader, because I know those pages were hand-crafted for the iPad. As a web experience, that sucks big time.

Another big fight I had, until I came to terms with the iPad, was with iTunes. On the PC, iTunes does it all: plays and downloads songs, books and videos, buys apps, browses the university programme (excellent, by the way). When I got some songs, videos and a few apps, I went to the iPad and… where was all my stuff?

Well, I found out that you must use the iPod software to listen to songs, the Video app to watch videos, the iBooks app to read books, the App Store to buy apps, the… wait. Every time I have an argument about Linux vs. Mac, I’m constantly reminded that normal users want fewer applications and less complication, and that with Apple you (supposedly) have the same interface across the platform. Well, I just learnt that, with the iPad, it’s exactly the opposite. I’ve seen systems better integrated than that…

Another big problem is the bloody spell checker. If you don’t write in English, you’re screwed. First, you can’t disable the spell checker: whatever you type WILL be checked, and the version that stays is the spell checker’s version. You can override it on a word-by-word basis, by clicking on the little X button every time you type a word. The problem is, if you’re writing in a burst, that kills your speed. Also, on some screens you can’t cancel the spell checker: it shows up with the little X, but you can’t click it. Does that make sense? To show the balloon with an X that you can’t click? I expected more from Apple.

App Store

For me, it doesn’t make sense to have a computer and not be able to run the programs you want on it. Ever since I wrote my first program, when I was 5 years old, I’ve known that that’s what a computer is. Even Apple computers at that time were like this; I had some, and I could write programs for them and run them. The fact that I have to download everything from an App Store is beyond my comprehension. (I understand the immediate business model, but I still think it kills in the long term; let’s wait and see.)

The same friends again had the excuse that it’s quality control, that Apple can control what goes in and make sure it won’t break the user experience. Well, if you have used the iPhone or the iPad, you know very well that that’s far from the truth. Many applications suck, break, explode, or are just badly coded. And let’s be honest, do you really think Apple spends time reviewing every single application?

In the end, I found some pretty cool apps, but nothing I wouldn’t have found if there were no App Store.

So, in a nutshell, the software side of the iPad is mediocre, at best.

The philosophy

And here’s where we get to the nasty bits. I could go on and on about all the little details, but I’ve said enough already about Apple, DRM and everything else. As I read in another blog reviewing the iPhone vs. Android: “Apple, I’m not your bitch”. I don’t like someone else deciding what applications I can use, what books I can read, what songs I can hear (and where), etc.

For me, this is the crucial point, and having used an iBook before and having an iPad now, I can categorically say: I don’t like Apple products. I’m not their bitch.

Tablets

To be fair to Apple, they do get one thing right: what people want. Before the iPhone, everyone wanted something like an iPhone, but Nokia was too busy fixing Symbian to realise it (and when they finally did, they copied Motorola). I’ve always wanted a tablet, really, ever since I saw one in Star Trek 23 years ago, and I bet everyone else wants one, too. When the first tablets arrived in the 90’s, they were absurdly expensive and only ran a few programs that actually used the tablet; in other words, the touch-screen was merely a substitute for the mouse.

What Apple did was consolidate the interface into a simple and easy-to-use touch-screen, which children and animals alike can use as if it were their third hand. What is really disappointing is that they know so well what people want, yet put so little effort into actually making it complete. They create a very good interface and fail to consolidate the tools; they create a quality-control mechanism and fail to control the quality; they give freedom to people who otherwise wouldn’t be able to use computers, and take it away with so many restrictions; they simplify the use of so many things, and take away the basic assumptions people have about them, like being able to play songs anywhere or to borrow a book from a friend.

It’s amazing that a high-tech company such as Apple hasn’t yet realised that technology changes the way people live, communicate and do business. There’s no point in giving half the freedom technology allows just because you can’t monetise the other half. I’m sure Apple has lots of good people inside who could share some ideas on how to progress without handcuffs, if only they would listen to them…

In the end, tablets are really as great as I thought they would be, and I’m loving it. Pity it’s an Apple tablet… Still, it reassured me that I must buy an Android tablet next year or so, when they become as good as I hope they will be.

Final Verdict

  • Idea: 0, at least 23 years old and has been done before many times.
  • Time-to-market: 10, as usual, first to make it right.
  • Hardware: 7, a camera and good speakers would be nice.
  • Software: 5, no Flash, Safari doesn’t work well, bad App Store quality.
  • Integration: 3, only interconnects with Apple, DRM, iTunes on iPad.
  • Usability: 7, the interface is good and simple and always ready to work.
  • Philosophy: 0, DRM, dev. license only works on Macs.
  • Average: 4.6, don’t buy, wait for the Android tablets to arrive in full.


Fool me once, shame on you… fool me twice, shame on me (DBD)
October 23rd, 2010 under Computers, Corporate, Digital Rights, Hardware, Media, OSS, rengolin, Software, Unix/Linux. [ Comments: 4 ]

Defective by Design has a new story on Apple’s DRM. While I don’t generally re-post from other blogs (LWN already does that), this one is special, though not for the apparent reasons.

I agree that DRM is bad, not just for you but for business, innovation, science and the evolution of mankind. But that’s not the point. What Apple is doing with the App Store is not just locking other applications out of their hardware, but locking their hardware out of the real world.

In the late 80’s and early 90’s, all hardware platforms were like that, and Apple was no exception. Amiga, Commodore, MSX and dozens of others: each was a completely separate machine, with a unique chipset, architecture and software layers. But that never stopped people from writing code for them, putting it on a floppy disk and installing it on any compatible computer they could find. Computer viruses spread that way too, given how easy it was to share software in those days.

Ten years later, there was only a handful of architectures: Intel for PCs, PowerPC for Macs and a few others for servers (Alpha, Sparc, etc). The consolidation of the hardware was happening at the same time as the explosion of the internet, so not only did more people have the same type of computer, but they also shared software more easily, increasing the quantity of software available (and viruses) by orders of magnitude.

Linux had been riding this wave since its beginning, and that was probably the most important factor in why such an underground movement got so much momentum. It was considered subversive, anti-capitalist, to use free software, and those people (including me) were hunted down like communists and ridiculed as idiots with no common sense. Today we know how “ridiculous” it is to use Linux: most companies and governments do, and it would be unthinkable today not to use it for what it’s good at. But it’s not for everyone, nor for everything.

Apple’s niche

Apple always had a niche, and they were really smart not to get out of it. Companies like Intel and ARM are trying to get out of their niches and attack new markets, to maybe savage a section of the economy they don’t have control over. Intel is going small, ARM is going big, and both will get hurt. Who gets hurt more doesn’t matter; what matters is that Apple never attacked other markets directly.

Ever since the beginning, Apple’s ads have been along the lines of “be smart, be cool, use Apple”. They never said their office suite was better than Microsoft’s (as MS does with OpenOffice), or that their hardware support was better (as MS does with Linux). Once you compare your products directly with someone else’s, you’re bound for trouble. When Microsoft started comparing their OS with Linux (late 90’s), the community fought back by showing all the areas in which it was very poor, businesses and governments started doing the same, and that was a big hit on Windows. Apple never did that directly.

By always staying on the sidelines, Apple was the different one. In their own niche, there was no competitor. Windows and Linux never entered that space, not even today. When Apple entered the mobile-phone market, they didn’t take market share from anyone else; they made a new market for themselves. Those who bought iPhones didn’t want to buy anything else; they had only done so because there was no iPhone at the time.

Android mobile phones are widespread, growing faster than anything else, taking Symbian phones out of the market, destroying RIM’s homogeneity, but rarely touching the iPhone market. Apple fan-boys will always buy Apple products, no matter the cost or the lower quality of the software and hardware. Being cool is more important than any of that.

Fool me once again, please

Being an Apple fan-boy is hard work. Whenever a new iPhone is out, the old ones disappear from the market and you’re outdated. Whenever a new MacBook arrives, the older ones look so out-dated that all your (fan-boy) friends will know you’re not keeping up. If creating a niche to capture people’s naivety and profit from it is fooling, then Apple has been fooling those same people for decades, and they won’t stop now. It has made them the second biggest company in the world (losing only to an oil company); nobody can argue with that fact.

iPhones have lesser hardware than most of the new Android phones, less functionality, less compatibility with the rest of the world. The new MacBook Air has an Intel chip several years old, lacks connectivity options and, before long, won’t run Flash, Java or anything else Steve Jobs dislikes when he wakes up from a bad dream. But none of that affects the fan-boys one bit. See, back in the days when Microsoft had fan-boys too, they were completely oblivious to the horrendous problems the platform had (viruses, bugs, reboots, memory hogging, etc) and they would still mock you for not being in their group.

That’s the same with Apple fan-boys, and it always has been. I had an Apple ][, and I liked it a lot. But when I saw an Amiga I was baffled. I immediately recognised the clear superiority of the architecture. The sound was amazing, the graphics were impressive and the games were awesome (all that mattered to me at the time, to be honest). There was no comparison between an Amiga game and an Apple game back then, and everybody knew it. But Apple fan-boys were all the same, and there were fights in BBSs and meetings: Apple fan-boys on one side, Amiga fan-boys on the other, and the pizza would be gone long before the discussion cooled down.

Nice little town, invaded

But today, reality is a bit harder to swallow. There is no PowerPC, or Alpha, or even Sparc now. With Oracle owning Sparc’s roadmap, and given what they are doing to Java and OpenOffice, I wouldn’t be surprised if Larry Ellison woke up one day and decided to burn everything down. Now there are only two major players across the small-to-huge markets: Intel and ARM. With ARM only at the small end and smaller, that leaves Intel with all the rest.

MacOS is no longer an OS per se. Its underlying subsystem is based on (or ripped off from) FreeBSD (a robust open source Unix-like operating system). As it happens, FreeBSD is so similar to Linux that it's not hard to recompile Linux applications to run on it. So why should it be hard to run Linux applications on MacOS? Well, it isn't, actually. With the same platform and a very similar subsystem, recompiling a Linux application for the Mac is a matter of finding the right tools and libraries; everything else follows its natural course.

Now, this is dangerous! Windows has the protection of being completely different, even on the same platform (Intel), but MacOS doesn't, and there's no way to keep the penguin's invasion at bay. For the first time in history, Apple has opened its niche to other players. In Apple terms, this is the same as killing itself.

See, capitalism is all about keeping control of the market. It's not about competition or innovation, and it's clearly not about redistribution of capital, as the French suggested in their revolution. Although Apple never fought Microsoft or Linux directly, they kept their market well under control, and that was the key to their success. With very clever advertising and average-quality hardware, they managed to build an entire universe of their own and attract a huge crowd that, once in, would never look back. But now that bubble has been invaded by the penguin commies, and there's no way for them to protect that market as they've done before.

One solution to rule them all

In a very good analysis of the Linux "dream", this article suggests that it is dead. If you look at Linux as if it were a company (which, given Canonical's success, doesn't surprise me), the author has a point. But Linux is not Canonical, it is not a dream, and it's definitely not dead.

By the same line of argument, you could say that Windows is dead. It hasn't grown for a while, and Vista destroyed confidence and moved more people to Macs and Linux than ever before. In the same way, more than 10 years ago, a common misconception among Microsoft's fan-boys was that the Mac was dead: its niche was too small, the hardware too expensive and incompatible with everything else. Windows is in the same position today, but it's far from dead.

But Linux is not a company and doesn't fit the usual capitalist market analysis. Remember, Linux hackers are commies, right? It's an organic community; it doesn't behave like a company or anything capitalism would like to model. This is why predictions about it have been wrong so many times (Linux is dead, this is the year of Linux, Linux will kill Windows, the Mac is destroying Linux, and so on). All of it is pure bollocks. Linux's growth is organic, not exponential, not bombastic. It won't kill other platforms. It never has and never will. It will, as it has done so far, assimilate and enhance, like the Borg.

If we had had Linux in the French revolution, the people would have had a better chance of getting something out of it, rather than leaving all the glory (and profit) to the newly founded bourgeois class. Not because Linux is magic, but because it embraces change, expands frontiers and exposes the flaws in current systems. That alone is enough to keep existing software in constant check; that is vital to software engineering, and it will never end. Linux is, in a nutshell, what's driving innovation on all other software fronts.

Saying that Linux is dead is like saying that generic medication is dead because it doesn't make a profit or hasn't taken over big pharma's markets. That is simply not the point, and it only shows that people are still in the same mindset that put Microsoft, Yahoo!, Google, IBM and now Apple where they are today: all afraid of the big bad wolf, which is not big, nor bad, and has nothing to do with a wolf.

This wolf is, mind you, not Linux. Linux and the rest of the open source community (and Google, I'll give them that) are just the only players who are not afraid of that wolf, though according to business analysts they should be, if they want to play nice with the rest of the market. The big bad wolf is free content.

Free, open content

Free as in freedom is dangerous. Everybody knows what happens when you post on Facebook about your boss being an ass: you get fired. The same would happen if you said it out loud over lunch at the company, wouldn't it? Running random software on your machine is dangerous too; everybody knows what happens when viruses invade your computer, or when rogue software starts stealing your bank passwords and personal data.

But all systems now are very similar, and the companies of today are still banging their heads against the same wall as 20 years ago: lock down the platform. 20 years ago that was quite simple, and really just a reflection of how any computer was built. Today, it has to be actively done.

It’s very easy to rip a DVD and send it to a friend. Today’s broadband speeds allow you to do that quite fast, indeed. But your friend haven’t paid for that, and the media companies felt threatened. They created DRM. Intel has just acquired McAfee to put security measures inside the chip itself. This is the same as DRM, but on a much lower level. Instead of dealing with the problem, those companies are actually delaying the solution and only making the problem worse.

DRM is easily cracked. It has been shown over and over that no DRM (software or hardware) has so far resisted the will of the people. There are far more ingenious people outside the companies that make DRM than inside them, so it's impossible to come up with a solution that will fool all outsiders, unless they hire them all (which will never happen) or kill them all (which could happen, if things keep going at this pace).

Unless those companies start treating the problem as the new reality, and create solutions that work within that new reality, they won't make any money out of it. DRM is not just bad; it's very costly, and it hampers progress and innovation. It kills what capitalism loves most: profit. Add up all the money spent on DRM schemes that were cracked a day later, all the money the RIAA spent on lawsuits, all the effort put into software solutions to lock users in, and the drop-out rate when a better solution appears (see Google vs. Yahoo), and you get the picture.

Locked down society

Apple’s first popular advertisement was the one mocking Orwell’s 1984 and how Apple would break the rules by bringing something completely different that would free people of the locked down world they lived in. Funny though, how things turned out…

Steve Jobs says that Android is a fragmented market, and that Apple is better because it offers one solution to every problem. They said the same thing about Windows and Linux: that fragmentation is driving their demise, that everybody should listen to Steve Jobs and use his creations (one for each problem), and that the rest is just too noisy, too complicated for really cool people to use.

I don’t know you, but for me that sounds exactly like Big Brother’s speech.

With DRM and control of the App Store, Apple has total freedom to put in, or take out, whatever they want, whenever they want. It has happened and will continue to happen. They never put Flash on the iPhone, not for any technical reason, but simply because Steve Jobs doesn't like it. They're now taking Java out of the Mac "experience", again just for kicks. Microsoft at least put .NET and Silverlight in place; Apple simply takes things out, with no replacement.

Oh, how the Apple fan-boys like it. They applaud, they defend it with their lives, even with no idea why, or whether there is any reason for it at all. They just watch Steve Jobs's speeches and repeat them, word for word. There is no reason, and those people sound dumber by the day, but who am I to say so? I'm the one outside the group, the one who has no voice.

When that happened with Microsoft in the 90's, it was hard to take. The numbers were more like 95% of them to 1% of us, so there was absolutely no argument that would make them see the utter garbage they were talking. Today, Apple's market is still nowhere near that big: the fan-boys may indeed be making Apple the second biggest company in the world, but they still look like idiots to the other 50+% of the world.

Yahoo!’s steps

Yahoo! has shown us that locking users down, stuffing them with ads and ignoring the upgrade of your architecture for years is not a good path. But Apple (as did Yahoo!) thinks they are invulnerable. When Google exploded with their awesome search (I was on Yahoo!'s search team at the time), we had a shock. It was not just better than Yahoo!'s search: it really worked! Yahoo! was afraid of being the copy-cat, so it started down other paths, and in the end none of them ever really worked.

Yahoo!, which started as a search company, now runs Microsoft's lame search engine. That is, for me, the ultimate proof that they failed miserably. The second biggest thing Yahoo! had was email, and Google does it better. Portals? Who needs portals when you have the whole web at your fingertips through Google search? In the end, Google killed every single Yahoo! business, one by one. Apple is following the same path, locking themselves away from the world, just waiting for someone to come along with a better, simpler solution that actually works. And they won't listen, not even when it's too late.

Before Yahoo! there was IBM. After Apple there will be others. Those who don't accept reality as it is, who stick with their old ideas just because they have worked so far, are bound to fail. Of course, Steve Jobs has made all the money he could, and he's not worried. Nor are David Filo or Jerry Yang, Bill Gates or Larry Ellison. And this is the crucial part.

Companies fade because great leaders fade. Communities fade when they're no longer relevant. The Linux community is still very much relevant and won't fade any time soon. And, by its metamorphic nature, it's very likely that the free, open source community will never die.

Companies had better get used to it and find ways to profit from it. Free, open content is here to stay, and there's nothing anyone can do to stop it. Acting like dictators isn't helping the US patent and copyright system, isn't helping Microsoft or Intel, and definitely won't help Apple. If they want to stay relevant, they had better change soon.


What I don’t miss about Java
July 26th, 2010 under Devel, rengolin, Software. [ Comments: 5 ]

Disclaimer: This is not a rant

I spent my last year working with Java, and it was not at all bad. But while Java has its moments and does shine, I always felt a bit out of place using it. In fact, when I moved back to C++, unlike when I moved to Java, I felt that I wasn't actually missing much…

Last year, while writing Java at work, I felt compelled more often than usual to write C++ programs at home. Even simple programs that would have been better served by a scripting language all came out in C++.

Recently, working full time with C++, I noticed I’m doing very little home development and definitely not doing any Java. So, what did I miss about C++ that I don’t miss about Java?

Expressiveness: While functional languages are much more expressive than C++, there are few languages less expressive than Java. Java encourages child-like programming, such as forcing you to call everything through methods rather than operators. By dropping explicit pointers, operator overloading and the other dangerous things from C++, you end up repeating yourself quite a lot, and it's very hard to understand the logic afterwards, when all you have is bloatware.

While Java's designers tried to avoid pointers and operators, they couldn't, really. We still have null references (throwing null pointer exceptions), and the fake operators (toString(), hashCode(), compareTo() and the like) can be overridden to change the expected behaviour pretty much the same way as C++ operators, just in "method" notation.

In the end, you can still do some of the bad things, but not all of them. They took away the dangers by taking away functionality, without properly redesigning what C++ got wrong.
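To make that concrete, here is a minimal sketch (the Money class and its plus() method are hypothetical, purely for illustration) of Java's "method notation" for what C++ spells as operator==, operator< and operator+:

    public class Money implements Comparable<Money> {
        private final long cents;

        public Money(long cents) { this.cents = cents; }

        // Java's stand-ins for C++'s operator==, std::hash and operator<:
        @Override public boolean equals(Object o) {
            return o instanceof Money && ((Money) o).cents == cents;
        }
        @Override public int hashCode() { return (int) (cents ^ (cents >>> 32)); }
        @Override public int compareTo(Money other) {
            return cents < other.cents ? -1 : (cents > other.cents ? 1 : 0);
        }
        @Override public String toString() { return "$" + cents / 100.0; }

        // And what C++ would write simply as a + b:
        public Money plus(Money other) { return new Money(cents + other.cents); }
    }

A sloppy equals() or compareTo() bends behaviour just as silently as a badly written C++ operator; only the notation changed, not the danger.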

Abuse of Object Orientation: While in Ruby everything is an object, in Java almost everything can be. Every class silently derives from Object, but the base types do not. So you have basic objects (Integer et al.) that get automatically converted to and from basic types in subtle ways that are hard to predict and that carry a huge performance cost (see auto-boxing).

It's not just performance: the language design is, again, incomplete.
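A rough sketch of how subtle it gets (the magic numbers come from the Integer cache the language specification mandates for -128 to 127):

    public class AutoboxDemo {
        public static void main(String[] args) {
            // Small Integer values are cached, so == sometimes "works"...
            Integer a = 127, b = 127;
            System.out.println(a == b); // true: same cached object
            // ...and sometimes silently doesn't.
            Integer c = 128, d = 128;
            System.out.println(c == d); // false: two distinct objects

            // Boxing in a hot loop: every += unboxes, adds and re-boxes
            // (usually allocating a new Long) each time around.
            Long sum = 0L;
            for (long i = 0; i < 1000000; i++) {
                sum += i;
            }
            System.out.println(sum);
        }
    }

Declaring sum as a plain long makes the loop dramatically cheaper, yet the code looks practically identical, which is exactly the problem.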

Most OO programmers (mainly Java ones) complain a lot about Perl OO. They say Perl (or Python for that matter) has no proper OO, since everything is a hash and there is no concept of protection.

While Java objects and members are strongly typed, and you have the concept of protection, it’s way too easy to transform Java OO into Perl OO with reflection.

Of course, with C++ you can cast things to void pointers, mess about in memory and so on, but fetching objects by name and stripping away private protection in a "safe", sanctioned way is simply wrong. It's like giving loaded guns to children and telling them where the safety catch is.
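A minimal sketch of that sanctioned lock-picking (Account and its balance field are hypothetical), using nothing but the standard reflection API:

    import java.lang.reflect.Field;

    public class ReflectionDemo {
        static class Account {
            private double balance = 100.0;
        }

        public static void main(String[] args) throws Exception {
            Account acc = new Account();
            // Fetch the private field by name and switch off its protection.
            Field f = Account.class.getDeclaredField("balance");
            f.setAccessible(true);        // here's where the safety catch is
            f.setDouble(acc, 1000000.0);  // private? what private?
            System.out.println(f.getDouble(acc)); // prints 1000000.0
        }
    }

No casts to void*, no memory tricks; just a polite API that turns "private" into a suggestion.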

Abuse of Design Patterns: Java developers are encouraged to use design patterns, to the point of stupidity. The first thing I learnt about design patterns is that their misuse is itself an anti-pattern.

Properties matter when the requirements change often, not when they're static. Factories are for when the objects created may differ or be customized, not for never-changing, one-object construction. Still, most libraries (all?) have Factories, Properties and so on, just for the sake of Design Patterns Compliance™, something like the sketch below.
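A hypothetical, but depressingly familiar, shape (Greeter and GreeterFactory are made-up names, just to illustrate the pattern-for-pattern's-sake habit):

    // A needless factory for a class with exactly one implementation
    // and exactly one way to construct it.
    public class GreeterFactory {
        public static Greeter createGreeter() {
            return new Greeter();
        }

        public static void main(String[] args) {
            GreeterFactory.createGreeter().greet();
            // ...where new Greeter().greet() would have done the same,
            // minus one class and one level of indirection.
        }
    }

    class Greeter {
        void greet() { System.out.println("hello"); }
    }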

In fact, one of the strengths of Java development is that everyone is encouraged to do things the same way. No Larry Wall style; all factory workers, each doing their share of the big picture. While this is good for big, quick projects at companies with high turnover (consultancies, for example), it's horrible for start-ups or more creative development.

Half-implemented features: Well, templates are an issue. There is no real template mechanism in Java. With the so-called Generics (like a cheap version of meds), there is no type safety at run-time at all; thanks to type erasure, it's just syntactic sugar over collections of Objects.

That generates a lot of misunderstanding, and a lot of bad code that compiles cleanly because the syntax looks correct; code that would be rejected if the types were actually being checked all the way down.
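A minimal sketch of erasure in action: the compiler only emits an "unchecked" warning here, and the actual failure shows up at run-time, far from the line that caused it:

    import java.util.ArrayList;
    import java.util.List;

    public class ErasureDemo {
        public static void main(String[] args) {
            List<String> strings = new ArrayList<String>();
            List raw = strings;   // raw type: erasure makes this legal
            raw.add(42);          // an Integer sneaks into a List<String>

            // Compiles cleanly; throws ClassCastException at run-time,
            // on a line that looks completely innocent.
            String s = strings.get(0);
            System.out.println(s);
        }
    }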

Again, an incomplete design, for the sake of backward compatibility with old code and old VMs.

Performance: Running on a JVM is already a bad start for performance, but a good compiler and a well-done JIT environment can recover most of it by removing unused code, re-optimizing the hottest code at run-time and using profiling results to improve branch prediction.

While the JVM does some of this, it also introduces several problems that take the advantage away and put it right back at the bottom of the class. Auto-boxing and generics create a lot of useless casts, and those can be a huge performance hit. Very few Java programmers really care, and the compiler does a poor job of reducing the impact or even warning the programmer.

I often see Java developers scoff at performance issues. The phrase I hear most is "a programmer shouldn't care about memory footprint or performance, only about business logic". That, together with the fact that almost all universities now teach Java in their undergraduate courses, frightens me a bit.

Strong dependency on IDEs: Borland made quite a lot of money out of C++ IDEs in the 90's, but most C++ programmers I know still use Vim or Emacs. On the other hand, every Java programmer I know uses Eclipse, IntelliJ or something of the sort.

This is not just about ease of use (code completion, syntax colouring, hints, navigation); it's about speeding up the development process by automating boiler-plate generation and refactoring.

IDEs are capable of writing complete pieces of code, refactoring and re-writing things (sometimes behind your back). The programmers stop caring, and the code becomes bloated, unintelligible and forgotten. Not to mention the tendency of IDEs, and of people following the IDE style, to apply certain patterns to everything, like using Properties where simple structures would suffice (see Abuse of Design Patterns, above).

False guarantees: The big selling points of Java, besides cheap cross-platform development, are its apparent safety and ease of use. But it isn't safe or easy, on so many levels…

The abuses and problems described above are only part of the story. The garbage collector is another…

Good garbage collection can help in the initial development of a program, and it does take the job of managing memory away from lazy programmers, but Java's garbage collection has become a beast, with incomprehensible command-line options, undefined behaviour and a total lack of control. You're held hostage to its whims.
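To give a flavour, here is the kind of incantation people end up with. These are all real HotSpot options, but the values (and the MyApp class) are illustrative guesses, and tuning one flag routinely changes what another one means:

    java -Xms512m -Xmx2g \
         -XX:NewRatio=3 -XX:SurvivorRatio=8 \
         -XX:+UseConcMarkSweepGC \
         -XX:CMSInitiatingOccupancyFraction=70 \
         -verbose:gc -XX:+PrintGCDetails \
         MyApp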

Not to mention that the memory management doesn't cope with the system's memory dynamically: you fix the heap size up front and hope for the best. I mean, if you want to make memory management easy for programmers (having gone to all that trouble for a garbage collector), you could have gone a bit further, actually figured out the available memory and used it politely.

Combine that with the fact that pointers and operators are still there, in disguise, and you have a language that is not much simpler than C++, at a huge price in performance and weirdness.

Undocumented APIs: Java claims to be platform independent, but it ships quite a few available (though undocumented) APIs for platform-specific functionality (like signals). Still, Sun (now Oracle) reserves the right to change them whenever they wish, and there's little you (or anyone) can do about it.
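The classic example is sun.misc.Signal, a real but unsupported API; this sketch works on Sun's JVM, with no guarantee it will keep working anywhere else:

    import sun.misc.Signal;
    import sun.misc.SignalHandler;

    public class SignalDemo {
        public static void main(String[] args) throws Exception {
            // Install a handler for SIGINT via an undocumented sun.misc API.
            Signal.handle(new Signal("INT"), new SignalHandler() {
                public void handle(Signal sig) {
                    System.out.println("caught SIG" + sig.getName());
                }
            });
            Thread.sleep(60000); // press Ctrl+C to trigger the handler
        }
    }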

And that takes us to the final point:

Standards (or lack thereof): Sun did a nice job on many things (mostly hardware and the OS), but they screwed up badly when it came to supporting software. There is no real standard; IBM and even Microsoft created their own JVMs (which were better than Sun's, btw) without any final definition of the standard API. During the Java 1.1 days, it was possible to be platform-agnostic but VM-specific on the very same platform!

Conclusion: Java was meant to be an easy language, but it turns out to be deceptive enough to be just as bad as any other. And recent changes are making it worse.

Programmers are losing the ability to understand how the machine works, how their languages behave and, more importantly, the implications of their actions.

Why spend time understanding the particular fiddlings some people built into Java, when you can spend the same time understanding how machines actually work, and so be able to use any programming language you want?

Some argue that Java is the new Cobol and will disappear the same way… I tend to agree…

