Fool me once, shame on you… fool me twice, shame on me (DBD)
October 23rd, 2010 under Computers, Corporate, Digital Rights, Hardware, Media, OSS, rengolin, Software, Unix/Linux. [ Comments: 4 ]

Defective by Design ran a new story on Apple’s DRM. While I don’t generally re-post from other blogs (LWN already does that), this one is special, though not for the apparent reasons.

I agree that DRM is bad, not just for you, but for business, innovation, science and the evolution of mankind. But that’s not the point. What Apple is doing with the App Store is not just locking other applications out of its hardware, but locking its hardware out of the real world.

In the late 80’s and early 90’s, all hardware platforms were like that, and Apple was no exception. Amiga, Commodore, MSX and dozens of others: each was a completely separate machine, with a unique chipset, architecture and software layers. But that never stopped people from writing code for them, putting it on a floppy disk and installing it on any compatible computer they could find. Computer viruses spread that way too, given how easy it was to share software in those days.

Ten years later, there were only a handful of architectures: Intel for PCs, PowerPC for Macs and a few others for servers (Alpha, Sparc, etc.). The consolidation of hardware happened at the same time as the explosion of the internet, so not only did more people have the same type of computer, they also shared software more easily, increasing the quantity of software available (and of viruses) by orders of magnitude.

Linux rode this wave from its beginning, and that was probably the most important reason why such an underground movement gained so much momentum. Using free software was considered subversive, anti-capitalist, and those people (including me) were hunted down like communists and ridiculed as idiots with no common sense. Today we know how “ridiculous” it is to use Linux: most companies and governments do, and it would be unthinkable today not to use it for what it’s good at. But it’s not for everyone, nor for everything.

Apple’s niche

Apple has always had a niche, and they were really smart never to leave it. Companies like Intel and ARM are trying to get out of their niches and attack new markets, to capture a section of the economy they don’t have control over. Intel is going small, ARM is going big, and both will get hurt. Who gets hurt more doesn’t matter; what matters is that Apple never attacked other markets directly.

From the very beginning, Apple’s ads have been along the lines of “be smart, be cool, use Apple”. They never said their office suite was better than Microsoft’s (as MS does with OpenOffice), or that their hardware support was better (as MS does with Linux). Once you compare your products directly with someone else’s, you’re bound for trouble. When Microsoft started comparing their OS with Linux (late 90’s), the community fought back by showing all the areas in which Windows was very poor, businesses and governments started doing the same, and that was a big hit on Windows. Apple never did that directly.

By always staying on the sidelines, Apple was the different one. In their own niche, there was no competitor. Windows and Linux never entered that space, not even today. When Apple entered the mobile phone market, they didn’t take market share from anyone else; they made a new market for themselves. People who bought iPhones didn’t want to buy anything else; they had only done so before because there was no iPhone at the time.

Android phones are widespread, growing faster than anything else, pushing Symbian phones out of the market and destroying RIM’s homogeneity, but rarely touching the iPhone market. Apple fan-boys will always buy Apple products, no matter the cost or the lower quality of the software and hardware. Being cool is more important than any of that.

Fool me once again, please

Being an Apple fan-boy is hard work. Whenever a new iPhone is out, the old ones disappear from the market and you’re outdated. Whenever a new MacBook arrives, the older ones look so dated that all your (fan-boy) friends will know you’re not keeping up. If creating a niche to capture people’s naivety and profit from it counts as fooling, then Apple has been fooling those same people for decades, and they won’t stop now. It has made them the second biggest company in the world (losing only to an oil company); nobody can argue with that fact.

iPhones have weaker hardware than most of the new Android phones, less functionality, less compatibility with the rest of the world. The new MacBook Air has an Intel chip several years old, lacks connectivity options and, in a short time, won’t run Flash, Java or anything else Steve Jobs dislikes when he wakes up from a bad dream. But that doesn’t bother the fan-boys one bit. See, back in the days when Microsoft had fan-boys too, they were completely oblivious to the horrendous problems the platform had (viruses, bugs, reboots, memory hogging etc.) and they would still mock you for not being in their group.

It’s the same with Apple fan-boys, and always has been. I had an Apple ][, and I liked it a lot. But when I saw an Amiga I was baffled. I immediately recognised the clear superiority of the architecture. The sound was amazing, the graphics were impressive and the games were awesome (all that mattered to me at the time, to be honest). There was no comparison between an Amiga game and an Apple game in those days, and everybody knew it. But Apple fan-boys were all the same, and there were fights in BBSs and meetings: Apple fan-boys on one side, Amiga fan-boys on the other, and the pizza would be gone long before the discussion cooled down.

Nice little town, invaded

But today, reality is a bit harder to swallow. There is no PowerPC, or Alpha, or even Sparc now. With Oracle owning Sparc’s roadmap, and given what they are doing to Java and OpenOffice, I wouldn’t be surprised if Larry Ellison woke up one day and decided to burn everything down. Now there are only two major players across the small-to-huge spectrum: Intel and ARM. With ARM covering only the small and smaller, that leaves Intel with all the rest.

MacOS is no longer an OS per se. Its underlying subsystem is based on (or ripped off from) FreeBSD (a robust open source Unix-like operating system). As it happens, FreeBSD is so similar to Linux that it’s not hard to re-compile Linux applications to run on it. So why should it be hard to run Linux applications on MacOS? Well, it’s not, actually. With the same platform and a very similar subsystem, re-compiling a Linux application for the Mac is a matter of finding the right tools and libraries; everything else follows its natural course.

Now, this is dangerous! Windows has the protection of being completely different, even on the same platform (Intel), but MacOS doesn’t, and there’s no way to keep the penguin’s invasion at bay. For the first time in history, Apple has opened its niche to other players. In Apple terms, this is the same as killing itself.

See, capitalism is all about keeping control of the market. It’s not about competition or innovation, and it’s clearly not about the redistribution of capital, as the French suggested in their revolution. Although Apple never fought Microsoft or Linux directly, they had their market well under control, and that was the key to their success. With very clever advertising and average-quality hardware, they managed to build an entire universe of their own and attract a huge crowd that, once in, would never look back. But now that bubble has been invaded by the penguin commies, and there’s no way for them to protect that market as they’ve done before.

One solution to rule them all

In a very good analysis of the Linux “dream”, this article suggests that it is dead. If you look at Linux as if it were a company (following the success of Canonical, I’m not surprised people do), the author has a point. But Linux is not Canonical, nor a dream, and it’s definitely not dead.

Along the same line, you could argue that Windows is dead. It hasn’t grown for a while, and Vista destroyed confidence and moved more people to Macs and Linux than ever before. Likewise, more than 10 years ago, a common misconception among Microsoft fan-boys was that the Mac was dead: its niche was too small, the hardware too expensive and incompatible with everything else. Windows is in the same position today, but it’s far from dead.

But Linux is not a company; it doesn’t fit the usual capitalist market analysis. Remember that Linux hackers are commies, right? It’s an organic community; it doesn’t behave like a company or anything capitalism would care to model. This is why so many predictions about it have been wrong (“Linux is dead”, “this is the year of Linux”, “Linux will kill Windows”, “Mac is destroying Linux” and so on). All of this is pure bollocks. Linux’s growth is organic: not exponential, not bombastic. It won’t kill other platforms. It never has, and never will. It will, as it has done so far, assimilate and enhance, like the Borg.

If we had had Linux in the French revolution, the people would have had a better chance of getting something out of it, rather than leaving all the glory (and profit) to the newly founded bourgeois class. Not because Linux is magic, but because it embraces change, expands frontiers and exposes the flaws in the current systems. That alone is enough to keep existing software in constant check, which is vital to software engineering and will never end. Linux is, in a nutshell, what’s driving innovation on all other software fronts.

Saying that Linux is dead is like saying that generic medication is dead because it doesn’t make a profit or hasn’t taken over big pharma’s markets. It simply is not the point, and it only shows that people still have the same mindset that put Microsoft, Yahoo!, Google, IBM and now Apple where they are today: all afraid of the big bad wolf, which is not big, nor bad, and has nothing to do with a wolf.

This wolf is, mind you, not Linux. Linux and the rest of the open source community (and Google, I’ll give them that) are just the only players who are not afraid of that wolf, although, according to business analysts, they should be, to be able to play nice with the rest of the market. The big bad wolf is free content.

Free, open content

Free as in freedom is dangerous. Everybody knows what happens when you post on Facebook about your boss being an ass: you get fired. The same would happen if you said it out loud at a company lunch, wouldn’t it? Running random software on your machine is dangerous; everybody knows what can happen when viruses invade your computer, or rogue software starts stealing your bank passwords and personal data.

But all systems are now very similar, and the companies of today are still banging their heads against the same wall as 20 years ago: lock down the platform. 20 years ago that was quite simple, and in fact just a reflection of how any computer was built. Today, it has to be done actively.

It’s very easy to rip a DVD and send it to a friend. Today’s broadband speeds allow you to do that quite fast, indeed. But your friend hasn’t paid for it, and the media companies felt threatened, so they created DRM. Intel has just acquired McAfee to put security measures inside the chip itself. This is the same as DRM, but at a much lower level. Instead of dealing with the problem, those companies are actually delaying the solution and only making the problem worse.

DRM is easily crackable. It has been shown over and over that no DRM scheme (software or hardware) has so far resisted the will of people. There are far more ingenious people outside the companies that make DRM than inside; therefore, it’s impossible to come up with a solution that fools all outsiders, unless they hire them all (which will never happen) or kill them all (which could happen, if things keep the same pace).

Unless those companies start looking at the problem as the new reality, and create solutions that work in this new reality, they won’t make any money out of it. DRM is not just bad, it’s very costly, and it hampers progress and innovation. It kills what capitalism loves most: profit. Add up all the money spent on DRM schemes that were cracked a day later, all the money the RIAA spent on lawsuits, all the trouble of creating software solutions to lock users in, and the drop-out rate when a better solution appears (see Google vs. Yahoo), and you get the picture.

Locked down society

Apple’s first popular advertisement was the one mocking Orwell’s 1984, showing how Apple would break the rules by bringing something completely different that would free people from the locked-down world they lived in. Funny, though, how things turned out…

Steve Jobs says that Android is a segmented market, and that Apple is better because it has only one solution to every problem. He said the same thing about Windows and Linux: that segmentation is what’s driving their demise, that everybody should listen to Steve Jobs and use his own creations (one for each problem), and that the rest is just too noisy, too complicated for really cool people to use.

I don’t know about you, but to me that sounds exactly like Big Brother’s speech.

With DRM and control of the App Store, Apple has total freedom to put in, or take out, whatever they want, whenever they want. It has happened and will continue to happen. They never put Flash on the iPhone, not for any technical reason, but simply because Steve Jobs doesn’t like it. They’re now taking Java out of the Mac “experience”, again, just for kicks. Microsoft at least put .NET and Silverlight in place; Apple simply takes things out, with no replacement.

Oh, how the Apple fan-boys like it. They applaud, they defend it with their lives, without even knowing why, or whether there is any reason for it at all. They just watch Steve Jobs’s speech and repeat it, word for word. There is no reason, and those people sound dumber every day, but who am I to say so? I’m the one outside the group, I’m the one who has no voice.

When that happened with Microsoft in the 90’s, it was hard to take. The numbers were more like 95% of them to 1% of us, so there was absolutely no argument that would make them understand the utter garbage they were talking. Today, Apple’s market is still not that big: the fan-boys are indeed making Apple the second biggest company in the world, but they still look like idiots to the remaining 50%-plus of the world.

Yahoo!’s steps

Yahoo! has shown us that locking users down, stuffing them with ads and completely ignoring the upgrade of your architecture for years is not a good path. But Apple (as Yahoo! did) thinks they are invulnerable. When Google exploded with their awesome search (I was on Yahoo!’s search team at the time), we had a shock. It was not just better than Yahoo!’s search, it really worked! Yahoo! was afraid of being the copy-cat, so they started walking down other paths, and in the end none of it ever really worked.

Yahoo!, which started as a search company, now runs Microsoft’s lame search engine. That is, for me, the utmost proof that they failed miserably. The second biggest thing Yahoo! had was email, and Google does it better. Portals? Who needs portals when you have the whole web at your fingertips with Google search? In the end, Google killed every single Yahoo! business, one by one. Apple is following the same path, locking themselves out of the world, just waiting for someone to come along with a better and simpler solution that actually works. And they won’t listen, not even when it’s too late.

Before Yahoo! there was IBM. After Apple there will be others. Those that don’t accept reality as it is, that stick with their old ideas just because they’ve worked so far, are bound to fail. Of course, Steve Jobs has made all the money he could, and he’s not worried. Nor are David Filo or Jerry Yang, Bill Gates or Larry Ellison. And this is the crucial part.

Companies fade because great leaders fade. Communities fade when they’re no longer relevant. The Linux community is still very much relevant and won’t fade any time soon. And, by its metamorphic nature, it’s very likely that the free, open source community will never die.

Companies had better get used to it, and find ways to profit from it. Free, open content is here to stay, and there’s nothing anyone can do to stop it. Playing dictator isn’t working for the US patent and copyright system, isn’t working for Microsoft or Intel, and definitely won’t work for Apple. If they want to stay relevant, they had better change soon.


Zero-cost exception handling in C++
October 16th, 2010 under Algorithms, Devel, rengolin. [ Comments: none ]

C++ exception handling is a controversial subject. The intricate concepts are difficult to grasp, even for seasoned programmers, but in complexity nothing compares to its implementation. I can’t remember any real-world (commercial or otherwise) product that makes heavy use of exception handling. Even the STL itself only throws a couple of exceptions, and only upon terminal failures.

Java exceptions, on the other hand, are much more common. Containers throw them at will (e.g. IndexOutOfBoundsException), creating your own exceptions is very easy, and developers are encouraged to use them. But still, some C++ projects’ coding standards discourage (like LLVM’s) or specifically disallow (like Google’s) the use of exceptions. The Android NDK, for example, doesn’t even support exceptions (nor has plans to).

Some of the commonly listed cons are the encouragement of bad style (throwing at will, as in Java), compilation problems with old code, and all the gotchas that exception handling can bring to your code, making it a nightmare to understand the real flow of your program and to figure out the critical DON’Ts of exception handling (like throwing from inside a destructor). For a deeper discussion of that subject, read Sutter’s book; for the implementation details of exception handling from a compiler’s point of view, keep reading.

Three distant worlds

Exception handling is not just a language construct, like if; it’s an intricate interaction between three distant worlds. All the complications that exception handling brings have to be implemented by a three-layer infrastructure: the user code, the run-time library and the compiler. The communication between them has to be absolutely perfect: one bit different and the control flow goes south, you start executing random code and blow up your program for good.

Not only are these three very different places, the code is written by three different kinds of people, at different times. The only formal document that binds them together is the ABI (e.g. the Itanium C++ ABI), which is, to say the least, vague. Luckily, the people who write the run-time libraries are close enough to the people writing the compiler (and hopefully, but not necessarily, the same ones writing the ABI).

In the end, if the compiler implementation is correct, the user only has to care about the rules imposed by the C++ standard. But that’s not too comforting: the C++ standard is complex and frequently driven by previous implementations rather than by fixed and clear rules of thumb. Most users don’t precisely understand why you can’t throw from inside a destructor, or why a particular throw ends up calling terminate on their code; they just avoid it at all costs.

Dirty and expensive

The easy exception handling implementation is known as SJLJ (SetJmp/LongJmp). It literally sets up jumps between your code and the exception handling code (created by the compiler). If an exception is thrown, the flow jumps directly to the exception handling code. This is very easy, but extremely expensive. Usually, if you follow good practices, you’d expect your program to spend most of its time doing real computation and only occasionally handling exceptions. But if for every action you set up a bunch of exception handling data, you’re wasting a lot of resources checking for something that might never happen during the whole run of your program.

When an exception happens, you need to walk back up the call stack and inspect whether any of those functions (including the one that just threw) is catching that exception. This is called unwinding the stack, and in an SJLJ implementation it is done through a chain of SetJmp contexts that every function has to register on entry and remove on exit. The problem is that this bookkeeping is done even if an exception is never thrown at all.
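
To make that cost concrete, here is a minimal sketch of the idea (the names handler and might_fail are mine, purely illustrative), calling the C library’s setjmp/longjmp by hand. A real SJLJ compiler generates and chains such contexts automatically at every function entry and exit, and that is exactly the bookkeeping that runs whether or not anything ever throws:

    #include <csetjmp>
    #include <cstdio>

    // Illustrative only: one global handler context. Real SJLJ
    // implementations keep a chain of these, pushed on function entry
    // and popped on exit, even when nothing throws.
    static std::jmp_buf handler;

    static void might_fail(int value) {
        if (value < 0)
            std::longjmp(handler, 1);   // the "throw": jump to the handler
        std::printf("processed %d\n", value);
    }

    int main() {
        if (setjmp(handler) == 0) {     // the "try": record where to return
            might_fail(42);
            might_fail(-1);             // triggers the jump
            std::printf("never reached\n");
        } else {                        // the "catch": longjmp lands here
            std::printf("recovered from failure\n");
        }
        return 0;
    }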

Zero-cost

So, how is it possible to execute exception handling code only when an exception really happens? Simple: let the compiler/library do it for you. ;)

The whole idea of zero-cost exception handling is to have a set of functions (like __cxa_throw and __cxa_begin_catch) that deal with the exception flow by sending it down a different path, the EH path. Under normal conditions, your code’s flow is identical with or without exceptions. The normal flow can never reach the EH blocks, and it’s only because the compiler knows those blocks are on the EH path that they don’t get discarded as unreachable code, since there is no explicit flow between user code and EH code.

Only the run-time library knows how to get there, and it is helped by a table, normally in a read-only data section specific to exception handling (e.g. the .eh_frame ELF section), created by the compiler from the user’s exception flow. This is the core interaction between user, compiler and library code.

When an exception is thrown via __cxa_allocate_exception + __cxa_throw, the first function allocates space and prepares the object to be thrown, and the second starts reading the exception handling table. This table contains a list of the exceptions that the particular piece of code is catching (often sorted by address, so a binary search is easy) and the code that will catch the exception.
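
On platforms that follow the Itanium C++ ABI (e.g. Linux with GCC or Clang), that contract is even visible from user code. The sketch below is decidedly non-portable and only meant to show the flow: it does by hand roughly what the compiler emits for a plain throw 42;, and an ordinary catch block picks the exception up:

    #include <cstddef>
    #include <cstdio>
    #include <typeinfo>

    // Itanium C++ ABI entry points, normally called from compiler-generated code.
    extern "C" void *__cxa_allocate_exception(std::size_t);
    extern "C" void __cxa_throw(void *, std::type_info *, void (*)(void *));

    int main() {
        try {
            // Allocate the exception object, construct it, then hand it
            // to the run-time library together with its type_info.
            void *mem = __cxa_allocate_exception(sizeof(int));
            *static_cast<int *>(mem) = 42;
            __cxa_throw(mem, const_cast<std::type_info *>(&typeid(int)), nullptr);
        } catch (int e) {
            // The run-time matched this frame's table entry for "int"
            // and jumped to the landing pad implementing this block.
            std::printf("caught %d\n", e);
        }
        return 0;
    }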

The code that catches exceptions is called the personality routine. Making the personality routine a library call enables multiple zero-cost EH mechanisms to be used in the same program. Normally, the table entry contains a pointer to the routine and some data it’ll use while unwinding, but some ABIs (like ARM’s EHABI) can add specially crafted tables for their own purposes (like having the unwind code as 16-bit instructions embedded in the data section).

All in all, the general flow is the same: the personality routine finds the EH block (also called a landing pad, unreachable from normal code) and jumps there, where the catching happens. That code fetches the exception allocated before the throw and makes it available to the user code (inside the catch block) to do what the user requested. If there is no entry in the EH table that catches that exception, the stack is unwound again and the library reads the caller’s EH table, until it reaches a table that does catch it, or terminates if none does.

Clean-ups

So far so good, but what happens to the local variables and the temporaries (stack-based data)? If they have non-trivial destructors, those must be called, which can chain into a lot of cleaning up. Clean-up code lives partly in the unreachable blocks (for cleaning up during EH) and partly in normal code, for automatic destruction at the end of a lexical block. Although they do similar things, their purposes are completely different.

Normal-flow clean-up blocks are always called and always clean up all objects. But EH clean-up code has to clean up only the objects that have been constructed so far, in the current lexical block and in every parent up to the root block of the function. If you can imagine deeply nested blocks, creating objects at will, with a throw in the increment part of a for loop, where the object being incremented has a non-trivial destructor, you get the picture.
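
A small, self-contained example (the Tracer type is mine, just for illustration) makes that rule visible: a and b are constructed, so both are destroyed, in reverse order, while the stack unwinds; c is never reached, so its destructor never runs:

    #include <cstdio>
    #include <stdexcept>

    // Prints its lifetime so the unwinding order becomes visible.
    struct Tracer {
        const char *name;
        explicit Tracer(const char *n) : name(n) { std::printf("ctor %s\n", name); }
        ~Tracer() { std::printf("dtor %s\n", name); }
    };

    static void work() {
        Tracer a("a");         // constructed: cleaned up during unwinding
        {
            Tracer b("b");     // constructed: cleaned up first (reverse order)
            throw std::runtime_error("boom");
            Tracer c("c");     // never constructed: no destructor call
        }
    }

    int main() {
        try {
            work();
        } catch (const std::runtime_error &e) {
            std::printf("caught: %s\n", e.what());
        }
        return 0;
    }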

What happens if you throw an exception while another one is being handled? Here is where complexity met reality and the boundaries were settled. If you are unwinding one exception, with the flow being dealt with by the run-time library, and another exception is thrown, you can’t distinguish which exception you are handling. Since there is room for only one exception to be unwound at a time, whenever a second one is thrown during unwinding, you have to terminate.
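
This rule is easy to trigger: have a destructor throw while another exception is already unwinding the stack. A minimal sketch (the noexcept(false) is needed in C++11 and later, where destructors are non-throwing by default):

    #include <cstdio>
    #include <stdexcept>

    struct Evil {
        // Throwing here while another exception is in flight leaves the
        // run-time with two exceptions to unwind: std::terminate() is called.
        ~Evil() noexcept(false) { throw std::runtime_error("second"); }
    };

    int main() {
        try {
            Evil e;
            throw std::runtime_error("first");  // unwinding starts, ~Evil() throws
        } catch (...) {
            std::printf("never reached\n");     // terminate() fires before this
        }
        return 0;
    }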

Of course, you could argue for creating multiple lanes for exception handling, where every exception is handled separately, so that you know one exception was raised while throwing another. That may be easy to implement in Java (or other interpreted/bytecode languages), because the VM/interpreter has total control of the language, but as stated earlier, exception handling in C++ is just a contract between three parties (compiler, run-time library and the user), and often those three parties are completely different groups in completely different space-time realities.

Conclusion

This was just a very brief, shallow demonstration of how complex exception handling can be. Many other pitfalls appear, especially when dealing with specific user code running on a specific hardware platform with a specific binary configuration of the object file. It’s impossible to test every single combination (including room temperature, or the annual rainfall forecast) to declare an implementation final, and weird bugs creep up every once in a while.

Enhancing the ABI to allow that is possible, but the complexity of doing so, together with the very scarce use of exception handling in C++, makes it a very unlikely candidate for change in the near future. It would have to be done on so many levels (C++ standard, ABIs, compilers, run-time libraries) that it’s best left alone.


How many gears in a bicycle?
October 16th, 2010 under Fun, Life, rengolin. [ Comments: none ]

I’m cycling to work now, and I found that, after being stored unused for so long, my bicycle’s gear system (or whatever that’s called) was a bit damaged. Not a total loss, but I lost one front gear and one rear gear, which makes it a 12-gear bicycle instead of a 21-gear one. It made me wonder about the gears on a bicycle, and how much I have really lost. Since there are obviously similar gear ratios and impossible configurations among those supposed 21 gears, the 12 I still have could account for more than 12/21 of the original range.

I counted the number of teeth on the gears, front and rear, and came up with the following (approximate) table of ratios (front teeth divided by rear teeth):

front\rear     14     16     18     20     22     34
28           2.00   1.75   1.53   1.40   1.27   0.82
38           2.71   2.34   2.11   1.90   1.73   1.12
48           3.43   3.00   2.67   2.40   2.18   1.41

If you list the ratios in order, you’ll see that there is an average distance of about 0.15 between most of them. Some are further apart (1.12 - 0.82 = 0.30), some much closer (1.75 - 1.73 = 0.02). So, if you group those less than 0.07 apart as a single ratio, you end up with 12 usable combinations!
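
For the curious, here is a quick sketch that redoes the counting from the tooth counts. Be warned that the result is sensitive to rounding and to the exact threshold: with unrounded ratios, thresholds between roughly 0.07 and 0.11 give anywhere from 14 down to the 12 groups above:

    #include <algorithm>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    int main() {
        // Tooth counts from the table above; ratio = front teeth / rear teeth.
        const int front[] = {28, 38, 48};
        const int rear[] = {14, 16, 18, 20, 22, 34};

        std::vector<double> ratios;
        for (int f : front)
            for (int r : rear)
                ratios.push_back(static_cast<double>(f) / r);
        std::sort(ratios.begin(), ratios.end());

        // A new "distinct" gear starts whenever the gap to the previous
        // ratio reaches the grouping threshold.
        const double threshold = 0.07;
        int distinct = 1;
        std::printf("%.2f\n", ratios[0]);
        for (std::size_t i = 1; i < ratios.size(); ++i) {
            const double gap = ratios[i] - ratios[i - 1];
            if (gap >= threshold)
                ++distinct;
            std::printf("%.2f (gap %.2f)\n", ratios[i], gap);
        }
        std::printf("distinct ratios: %d\n", distinct);
        return 0;
    }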

Furthermore, I can only change gears one step at a time. With good training, maybe I could change both at the same time, one up and the other down, to minimise the jump. Represented in the table above, that would be a grid with vertical and horizontal changes and the occasional diagonal change, but the diagonal has to be at a 45-degree angle (one change on each side at a time).

Since my 28-tooth front gear went bust, as well as my 14-tooth rear one, I can only use 38/48 at the front and 16-34 at the back. Removing the first row and column, we’re left with the following ratio table:

front\rear     16     18     20     22     34
38           2.34   2.11   1.90   1.73   1.12
48           3.00   2.67   2.40   2.18   1.41

which, when grouping close ratios, gives us 8 usable ratios instead of 12. So the loss in distinct ratios is much smaller than the raw gear count suggests. It has also helped me understand my gears more easily (I’m not that well versed in bicycle matters, as you can see) and plan my changes with more accuracy.

Since two of the ratios in the two remaining rows are nearly the same, and they sit in changes that are impossible anyway (more than two columns apart), I don’t have to jump over a duplicated gear. The ratio change from shifting the rear gear is not constant (but each step is reasonably similar), and the ratio gain from shifting the front gear is reasonably close to 0.50 (approximately 3x the change of a rear shift), so that gives me enough information to know which gear to change, and when.

Of course, everyone learns this just by cycling, but I need to use my head while I cycle, and so I did this… ;)


 

