Open Source and Profit
July 8th, 2013 under Corporate, Devel, Digital Rights, OSS, rengolin, World. [ Comments: 2 ]

I have written extensively about free, open source software as a way of life, and now, reading back my own articles from the past 7 years, I realise I was wrong about some of the ideas, or about the state of open source culture within business and around companies.

I'll make a bold statement to start, trying to get you interested in reading past the introduction, and I hope to give you enough arguments to prove I'm right. Feel free to disagree in the comments section.

In the years to come, business and profit can only thrive if surrounded by free thoughts.

By free thoughts I mean free/open source software, open hardware, open standards, free knowledge (both free as in beer and as in speech), etc.

Past Ideas

I began my quest to understand the open source business model back in 2006, when I wrote that open source was not just software, but also speech. Having open source (free) software is not enough when the reasons why the software is free are not clear, because the synergy that is greater than the sum of the individual parts can only be achieved if people have the rights (and incentives) to reach out on every possible level, not just the source or the hardware. I made that clearer in 2009, when I exposed the problems of writing closed source software: there is no ecosystem on which to rely, so progress is limited and the end result is always less efficient, since the cost of making it that efficient is too great and would drive the price of the software too high to be profitable.

In 2008 I saw both sides of the story, for and against Richard Stallman, on the legitimacy of proprietary control, be it via copyright licences or proprietary software. I may have come a long way, but I was never against his idea of the perfect society, Richard Stallman's utopia, or as some friends put it: the Star Trek universe. The main difference between me and Stallman is that he believes we should fight to the last man to protect ourselves from the software abuses of evil corporations, while I still believe that it's impossible for them to sustain this empire for too long. His utopia will come, whether they like it or not.

Finally, in 2011 I wrote about how copying (and even stealing) is the only business model that makes sense (Microsoft, Apple, Oracle etc are all thieves, in that sense), and the number of patent disputes and copyright infringement cases should serve to prove me right. Last year I think I finally hit the epiphany, when I discussed all these ideas with a friend and came to the conclusion that I don't want to live in a world where it's not possible to copy, share, derive or distribute freely. Without the freedom to share, our hands will be tied to defend against oppression, and it might just be a coincidence, but in the last decade we've seen the biggest growth of both disproportionate property protection and disproportionate governmental oppression that the free world has ever seen.

Can it be different?

Stallman's argument is that we should fiercely protect ourselves against oppression, and I agree, but after being around business and free software for nearly 20 years, I have so far failed to see a business model in which starting everything from scratch, in a secret lab, and releasing the product ready for consumption makes any sense. My view is that society partakes in an evolutionary process that is ubiquitous and compulsory, in which it strives to reduce the cost of the whole process, towards stability (even if local), as much as any other biological, chemical or physical system we know.

So, to prove my argument that an open society is not just desirable, but the only final solution, all I need to do is to show that this is the least-energy state of the social system. Open source software, open hardware and all systems where sharing is at the core should, then, be the least costly business models, so as to force virtually all companies in the world to follow suit, and create Stallman's utopia as a result of natural stability, not a forced state.

This is crucial, because every forced state is non-natural by definition, and every non-natural state has to be maintained using resources that could otherwise be used to enhance the quality of life of the individuals of the system (be they human or not; let's not narrow our point of view this early). To achieve balance in a social system we have to let things go awry for a while, so that the arguments against such a state are perfectly clear to everyone involved, and there remains no argument that the current state is non-optimal. If there isn't discomfort, there isn't the need for change. Without death, there is no life.

Profit

Of all the bad ideas we humans have had on how to build a social system, capitalism is probably one of the worst, but it's also one of the most stable, and that's because it's the closest to the law of the jungle, survival of the fittest and all that. Regulations and governments never really came to protect the people, but to protect capitalism from itself, and to keep increasing the profit of the profitable. Socialism and anarchy rely too much on forced states, in which individuals have to be devoid of selfishness, a state that doesn't exist in the current form of human beings. So, while they're the product of amazing analysis of the social structure, they still need heavy genetic changes in the constituents of the system to work properly, in a stable, least-energy state.

Having fewer angry people on the streets is more profitable for the government (lower security costs, more international trust in the local currency, more investment, etc), so panis et circenses will always be more profitable than any real change. However, with more educated societies, resulting from the increase in profits of the middle class, more real changes will have to be made by governments, even if wrapped in complete populist crap. One step at a time, the population will get more educated, and you'll end up with more substance and less wrapping.

So, in the end, it's all about profit. If not using open source/hardware means things will cost more, the tendency will be to use it. And the more everyone uses it, the less valuable the products that don't use it become, because the ecosystem in which applications and devices are immersed becomes the biggest selling point of any product. Would you buy a Blackberry application, or an Android application? Today, the answer is close to 80% for the latter, and that's only because many don't use the former at all.

It's not just that it's more expensive to build Blackberry applications because the system is less open and the tools less advanced; the profit margins are also smaller, and the return on investment will never justify it. This is why Nokia died with its own app store: Symbian was not free, and there was a better, free and open ecosystem already in place. The battle had already been lost, even before it started.

But none of that was really due to moral standards, or Stallman's bickering. It was only about profit. Microsoft dominated the desktop for a few years, long enough to make a stand and still be dominant after 15 years of irrelevance, but that was only because there was nothing better when they started, not by a long way. However, when they tried to flood the server market, Linux was not only already relevant, it was better, cheaper and freer. The LAMP stack was already good enough, and the ecosystem was so open that it was impossible for anyone with a closed development cycle to even begin to compete on the same level.

Linux became so powerful that, when Apple re-defined the concept of the smartphone with the iPhone (beating Nokia's earlier attempts by light-years of quality), the Android system was created, evolved and came to dominate in less than a decade. The power to share made it possible for Google, a non-device, non-mobile company, to completely outperform a hardware manufacturer in a matter of years. If Google had invented a new OS, not based on anything existing, or if they had closed the source, like Apple did with FreeBSD, they wouldn't have been able to compete, and Apple would still be dominant.

Do we need profit?

So, the question is: is this really necessary? Do we really depend on Google (specifically) to free us from the hands of tyrant companies? Not really. If it wasn't Google, it'd be someone else. Apple, for a long time, was the odd guy in the room, and they have created immense value for society: they gave us something to look for, they educated the world on what we should strive for in mobile devices. But once that's done, the shareable ecosystem learns, evolves and dominates. That's not because Google is less evil than Apple, but because Android is more profitable than iOS.

Profit here is not just the return on investment that you plan on having over a specific number of years, but, on top of that, the potential that the evolving ecosystem gives people long after you've lost control over it. Shareable systems, including open hardware and software, allow people far down the planning, manufacturing and distribution chain to still make a profit, regardless of your original intentions. One such case is Maddog's Project Cauã.

By using inexpensive Raspberry Pis, by fostering local development and production, and by enabling the local community to use all that as a way of living, Maddog's project is using the power of open source work done by completely unrelated people to empower the people of a country that much needs empowering. That new class of people, from this and other projects, is what is educating the population of the world, what is allowing people to fight for their rights, and the reason why so many civil uprisings are happening in Brazil, Turkey and Egypt.

Instability

All that creates instability, social unrest, whistle-blowing gone wrong (Assange, Snowden), and this is a good thing. We need more of it.

It's only when people feel uncomfortable with how governments treat them that they'll get up from their chairs and demand change. It's only when people are educated that they realise that oppression is happening (since there is a force driving us away from the least-energy state, towards enriching the rich), and it's only when these states are reached that real changes happen.

The more educated society is, the quicker people will rise up against oppression, and the closer we'll be to Stallman's utopia. So, whether governments and the billionaire minority like it or not, society will move towards stability, and that stability will settle into local minima. People will rest, oppression will grow in an oscillatory manner until unrest happens again, and that will throw us into yet another minimum state.

Since we don't want to stay in a local minimum, we want to find the best solution, not just a solution. We are unlikely to get it close to perfect on the first attempt, but whether we get close the first time or not, the oscillatory nature of social unrest will not change, and nature will always find a way to get us closer to the global minimum.

Conclusion

Is it possible to stay in this unstable state for too long? I don't think so. But it's not going to be a quick transition, nor is it going to be easy, nor will we get it right on the first attempt.

But more importantly, reaching stability is not a matter of forcing us to move towards a better society, it's a matter of how dynamic systems behave when there are clear energetic state functions. In physical and chemical systems this is just energy; in biological systems it is the ability to propagate; and in social systems it is profit. As sad as it sounds…


Uno score keeper
March 31st, 2013 under Devel, OSS, rengolin, Software. [ Comments: none ]

With spring not coming any time soon, we had to improvise during the Easter break and play Uno every night. It's a lot of fun, but it can take quite a while to find a piece of clean paper and a pen that works around the house, so I wondered if there was an app for that. It turns out, there wasn't!

There were several apps to keep card game scores, but every one was specific to a particular game, and they had ads and wanted access to the Internet, so I decided it was worth writing one myself. Plus, that would finally teach me to write Android apps, something I had been putting off for years.

The App

Screenshots: adding new players, and the card game scores view.

The app is not just a Uno score keeper, it's actually pretty generic. You just keep adding points until someone passes the threshold, when the poor soul will be declared a winner or a loser, depending on how you set up the game. Since we're playing every night, even the 30 seconds I spent re-typing our names was adding up, so I made it save the last game in the Android tuple store, so you can retrieve it via the “Last Game” button.

It's also surprisingly easy to use (I had no idea), but if you go back and forth inside the app, it clears the game and starts a new one with the same players, so you can go on for as many rounds as you want. I might add a button to restart (or leave the app) when there's a winner, though.

I'm also thinking about printing the names in order at the end (from winner to loser), and some other small changes, but the way it is now is good enough to advertise and see what people think.

If you end up using it, please let me know!

Download and Source Code

The app is open source (GPL), so rest assured it has no tricks or money involved. Feel free to download it from here, and get the source code at GitHub.


Distributed Compilation on a Pandaboard Cluster
February 13th, 2013 under Devel, Distributed, OSS, rengolin. [ Comments: 2 ]

This week I was experimenting with distcc and Ninja on a Pandaboard cluster, and it behaves exactly as I expected, which is a good thing, but it might not be what I was looking for, which is not. ;)

Long story short, our LLVM buildbots were running very slowly, taking from 3 to 4.5 hours to compile and test LLVM. If you consider that at peak time (PST hours) there are up to 10 commits in a single hour, the buildbot will end up testing 20-odd patches at the same time. If it breaks in unexpected ways, or if there is more than one patch in a given area, it might be hard to spot the culprit.

We ended up just avoiding the make clean step, which brought us down to around 15 minutes for build+tests, with the odd chance of getting 1 or 2 hours tops, which is a great deal. But one of the alternatives I was investigating was a distributed build. Even more so because of the availability of cluster nodes with dozens of ARM cores inside: we could make use of such a cluster to speed up our native testing, and even do benchmarking in a distributed way. If we do it often enough, the sample might be big enough to account for the differences.

The cluster

So, I got three Pandaboard ES boards (dual Cortex-A9, 1GB RAM each), put stock Ubuntu 12.04 on them, installed the bare minimum (vim, build-essential, python-dev, etc) and upgraded to the latest packages, and they were all set. Then, I needed to find the right tools to get a distributed build going.

It took a bit of searching, but I ended up with the following tool-set:

  • distcc: The distributed build dispatcher, which knows about the other machines in the cluster and how to send them jobs and get the results back
  • CMake: A build-file generator which LLVM can use; it's much better than autoconf, and it can also generate Ninja files!
  • Ninja: The new intelligent builder, which not only resolves dependencies faster, but also makes it very easy to change the rules to use distcc, and has a magical new feature called pools, which allows me to scale job types independently (compilers, linkers, etc).

All three tools had to be compiled from source. Distcc’s binary distribution for ARM is too old, CMake’s version on that Ubuntu couldn’t generate Ninja files and Ninja doesn’t have binary distributions, full stop. However, it was very simple to get them interoperating nicely (follow the instructions).

You don't have to use CMake, there are other tools that generate Ninja files, but since LLVM uses CMake, I didn't have to do anything. What you don't want is to generate the Ninja files yourself, it's just not worth it. Unlike Make, Ninja doesn't try to search for patterns and possibilities (that's why it's fast), so you have to be very specific in the Ninja file about what you want to accomplish. This is very easy for a program to do (like CMake), but very hard and error-prone for a human (like me).

Distcc

Using distcc is simple (a rough sketch of the whole sequence follows this list):

  1. Replace the compiler command with distcc compiler in your Ninja rules;
  2. Set the environment variable DISTCC_HOSTS to the list of IPs that will be the slaves (including localhost);
  3. Start the distcc daemon on all slaves (not on the master): distccd --daemon --allow <MasterIP>;
  4. Run ninja with the number of CPUs of all machines + 1 for each machine. Ex: ninja -j6 for 2 Pandaboards.
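
Putting those steps together, a rough sketch of the sequence on a two-Panda cluster could look like the following (the IP addresses and the sed edit of the CMake-generated rules.ninja are illustrative assumptions, not the exact commands I ran):

    # on the slave Panda: start the daemon and allow the master to connect
    distccd --daemon --allow 192.168.0.10        # 192.168.0.10 = assumed master IP

    # on the master Panda: list localhost plus the slave
    export DISTCC_HOSTS="localhost 192.168.0.11"

    # step 1: prefix the compiler with distcc in the generated Ninja rules
    # (a crude illustration; the exact rule layout depends on the CMake version)
    sed -i 's|command = /usr/bin/c++|command = distcc /usr/bin/c++|' rules.ninja

    # 2 boards x 2 cores, plus 1 per machine = 6 parallel jobs
    ninja -j6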

A local build of just LLVM (no Clang, no check-all) on a single Pandaboard takes about 63 minutes. With distcc and 2 Pandas it took 62 minutes!

That's better, but not by as much as one would hope for, and the reason is a bit obvious, but no less damaging: the linker! It took 20 minutes to compile all of the code, and 40 minutes to link it into executables. That happened because, while we had 3 compilation jobs on each machine, we had 6 linking jobs on a single Panda!

See, distcc can spread the compilation jobs as long as it copies the objects back to the master, but because a linker needs all objects in memory to do the linking, it can't do that over the network. What distcc could do, with Ninja's help, is to know which objects will be linked together and keep copies of them on different machines, so that you can link on separate machines, but that is not a trivial task, and it relies on a level of interoperation between the tools that they weren't designed for.

Ninja Pools

And that's where Ninja proved to be worth its name: Ninja pools! In Ninja, pools are named resources that jobs can be assigned to, each with a specific level of parallelism. You can say that compilation jobs scale freely, but that no more than a handful of link jobs may run at once. You simply need to create a pool called linker_pool (or anything you want), give it a depth of, say, 2, and annotate all linking jobs with that pool. See the manual for more details.
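
For illustration, the Ninja syntax for such a pool is roughly the following (a minimal hand-written sketch; in a real LLVM build the rules live in the files CMake generates, and the rule names here are made up):

    # declare a pool that allows at most 2 concurrent jobs
    pool linker_pool
      depth = 2

    # compile rule: no pool, so it scales up to whatever -j allows
    rule CXX_COMPILE
      command = distcc /usr/bin/c++ -c $in -o $out

    # link rule: annotated with the pool, so at most 2 links run at once
    rule CXX_LINK
      command = /usr/bin/c++ $in -o $out
      pool = linker_pool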

With the pools enabled, a distcc build on 2 Pandaboards took exactly 40 minutes. That's a gain of about a third with double the resources, not bad. But how does that scale if we add more Pandas?

How does it scale?

To get a third data point (and be able to apply a curve fit), I added another Panda and ran it again, with 9 jobs and the linker pool at 2, and it finished in 30 minutes. That's less than half the time with three times the resources. As expected, it's flattening out, but how many more can we add and still profit?

I don't have an infinite number of Pandas (nor do I want to spend all my time on this), so I just cheated and got a curve fitting program (xcrvfit, in case you're wondering), cooked up an exponential that was close enough to the points and used the software's ability to do a best fit. It came out with 86.806*exp(-0.58505*x) + 14.229, which, according to Lybniz, flattens out after 4 boards (at about 20 minutes).
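
If you want to sanity-check the shape of that fit, a one-liner evaluating the fitted constants is enough:

    awk 'BEGIN { for (x = 1; x <= 6; x++) printf "%d board(s): %.1f min\n", x, 86.806*exp(-0.58505*x) + 14.229 }'

It roughly reproduces the measured 63, 40 and 30 minutes for 1, 2 and 3 boards, and levels off around 20 minutes from the fourth board onwards.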

Pump Mode

Distcc has a special mode called pump mode, in which it pushes, along with the C file, all the headers necessary to compile it solely on the remote node. Normally, distcc will preprocess on the master node and send the preprocessed result to the slaves, which compile it to object code. According to the manual, pump mode can improve performance 10-fold! Well, my results were a little less impressive: my 3-Panda cluster finished in just about 34 minutes, 4 minutes more than without pump mode, which is puzzling.
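
For reference, enabling pump mode (as distcc's documentation describes it) is mostly a matter of marking the hosts as capable of remote preprocessing and wrapping the build command; the IPs here are again assumptions:

    # ',cpp,lzo' marks a host as able to preprocess remotely and accept compressed transfers
    export DISTCC_HOSTS="localhost 192.168.0.11,cpp,lzo 192.168.0.12,cpp,lzo"

    # the pump wrapper starts the include server and runs the build under it
    pump ninja -j9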

I could clearly see that the files were being compiled on the slaves (distccmon-text would tell me that, whereas before there were a lot of "preprocessing" jobs on the master), but Ninja doesn't print times on each output line for me to guess what could have slowed it down. I don't think there was any effect on the linker process, which still had the pool enabled in this mode.

Conclusion

Simply put, both distcc and Ninja pools have shown themselves to be worthy tools. On slow hardware, such as the Pandas, distributed builds can be an option, as long as you have a good balance between compilation and linking. Ninja could be improved to help distcc link on remote nodes as well, but that's a wish I would not press on the team.

However, scaling only to 4 boards takes away a lot of the value for me, since I was expecting to use 16/32 cores. The main problem is, again, the linker jobs running solely on the master node, and LLVM having lots and lots of libraries and binaries. Ninja's pools can also work well when compiling LLVM+Clang in debug mode, since the objects are many times bigger, and even on an above-average machine you can start swapping, or even freeze your machine, if you're using other GUI programs (browsers, editors, etc).

In a nutshell, the technology is great and works as advertised, but with LLVM it might not yet be the right fit. It's still more profitable to get faster hardware, like the Chromebooks, which are 3x faster than the Pandas and cost only marginally more.

It would also be good to know why pump mode regressed in performance, but I have no more time to spend on this, so I leave it as an exercise to the reader. ;)


Open Source and Innovation
September 13th, 2012 under Corporate, OSS, rengolin, Technology. [ Comments: 1 ]

A few weeks ago, a friend (Rob) asked me a pertinent question: “How can someone innovate and protect her innovation with open source?”. Initially, I shrugged it off with a simple “well, you know…”, but this turned out to be a really hard question to answer.

The main idea is that, in the end, all software (and possibly hardware) will end up as open source. Not because it's beautiful and fluffy, but because that seems to be the natural course of things nowadays. We seem to be moving from profiting on products to giving them away and profiting on services. If that's true, are we going to stop innovating at all and just focus on services? What about the real scientists who move the world forward, are they also going to be flipping burgers?

Open Source as a business model

The reasons to use open source are clear: the TCO fallacy is gone and we're all used to it (especially the lawyers!). That's all good, but the question is really what (or even when) to open source your own stuff. Some companies do it because they want to sell the added value, or plugins and services. Others do it because it's not their core business, or because they want to form a community which would otherwise use a competitor's open source solution. Whatever the reason, we seem to be open sourcing software and hardware at an increasing speed; sometimes it comes out as open source on its first day in the wild.

Open source is a very good cost-sharing model. Companies can develop a third-party product, not related to their core areas (where they actually make money), and still claim no responsibility or ownership (which would be costly). For example, the GNU/Linux and FreeBSD operating systems tremendously reduce the cost for any application developer, from embedded systems to big distributed platforms. Most platforms today (Apple's, Androids, set-top boxes, sat-navs, HPC clusters, web servers, routers, etc) have them at their core. If each of these products had to develop its own operating system (or even parts of one), it wouldn't be commercially viable.

Another example is the MeshPotato box (in Puerto Rico), which uses open software and hardware initially developed by Village Telco (in South Africa). They can cover wide areas, providing internet and VoIP telephony over the rugged terrain of Puerto Rico for under $30 a month. If they had to develop their own hardware and software (including the OS), it'd cost no less than a few hundred pounds. Examples like that are abundant these days and it's hard to ignore the benefits of open source. Even Microsoft, once the biggest closed-source zealot, which propagated the misinformation that open source was hurting the American Way of Life, is now one of the biggest open source contributors on the planet.

So, what is the question then?

If open source saves money everywhere, and promotes incremental innovation that wouldn't otherwise be possible, how can the original question not have been answered? The key was in the scope.

Rob was referring, in fact, to real, chunky innovations. Those that take years to develop, many people working hard with one goal in mind, spending their last penny to possibly profit in the end. The true sense of entrepreneurship. Things that might benefit from other open source technologies, but are so hard to make that they still take years to produce. Things like new chips, new medicines, real artificial intelligence software and hardware, etc. The open source savings on those projects are marginal. Furthermore, if you spend 10 years developing software (or hardware) and open source it straight away, how are you ever going to get your investment back? Unless you charge $500 a month in services to thousands of customers on day one, you won't see the money back for decades.

The big misunderstanding, I think, is that this model no longer applies, so the initial question was invalid to begin with. Let me explain.

Science and Technology

300 years ago, if you were curious about something you could make a name for yourself very easily. You could barely call what they did science. They even called themselves natural philosophers, because what they did was mostly discovering nature and inquiring about its behaviour. Robert Hooke was a natural philosopher and a polymath; he kept dogs with their internals in the open just to see if they'd survive. He kept looking at things through a microscope and he named most of the small things we can see today.

Newton, Leibniz, Gauss, Euler and a few others created the whole foundation of modern mathematics. They are known for fundamentally changing how we perceive the universe. It'd be preposterous to assume that there isn't a person today as bright as they were, and yet we don't see people changing our perception of the universe that often. The last spree was more than a hundred years ago, with Maxwell, Planck and Einstein, but still, those were corrections (albeit fundamental ones) to the model.

Today, a scientist contents himself with scratching the surface of a minor field of astrophysics, and will probably get a Nobel for it. But how many of you can name more than 5 Nobel laureates? Did they really change your perception of the universe? Did they invent things such as real artificial intelligence, or did they discover a better way of doing politics? Sadly, no. Not because they weren't as smart as Newton or Leibniz, but because the easy things have already been discovered; now we're in for the hard, incremental science and, like it or not, there's no way around it.

Today, if you wrapped tin foil around a toilet paper tube and played music with it, people would, at best, think you're cute. Thomas Edison did that and was called a wizard. Nokia was trying to build a smartphone, but they were trying to make it perfect. Steve Jobs made one that was almost useless, people loved it, and he's now considered a genius. If you try to produce a bad phone today, people will laugh at you, not think you're cute, so things are getting harder for the careless innovators, and that's the crucial point. Careless and accidental innovation is not possible in any field that has been exploited long enough.

Innovation and Business

Innovation is like business: you only profit if there is a market that hasn't been taken. If you try to invent a new PC, you will fail. But if you produce a computer for a niche that has never been exploited (even if it's a known market, as in Nokia's smartphone case), you're in for the money. If you want to build the next AI software, and it even marginally works, you can make a lot of money, whether you open source your software or not. Since people will copy (copyright and patent laws are not the same in every country), your profit will diminish over time, in proportion to the novelty and the difficulty of copying.

Rob's point went further: “This isn't just a matter of what people can or can't do, it's what people should or should not do”. Meaning, shouldn't we aim for a world where people don't copy other people's ideas as a principle, instead of accepting the fact that people copy? My answer is a strong and resounding: NO! For the love of all that's good, NO!

The first reason is simply that that's not the world we live in, and it will not be as long as humanity remains human. There is no point in creating laws that do not apply to the human race, though it seems that people get away with that very easily these days.

The second point is that it breaks our society. An example: try to walk into a bank and ask for investment in a project that will take 10 years to complete (at a cost of $10M) and whose return will come during the 70 years that follow it (at a profit of $100sM a year). The manager will laugh at you and call security. This is, however, the time it takes (today) for copyright in Hollywood to expire (the infamous Mickey Mouse effect), and the kind of money they deal with.

Imagine that a car manufacturer develops a much safer way of building cars, say magical air bags. This company will be able to charge a premium, not just because of the development costs, but also for its unique position in the market. In time, its cars will save more lives than any other, and governments will want that to be standard. But no other company can apply it to their cars, or at least not without paying a huge premium to the original developer. In the end, cars will be much more expensive in general, and we end up paying the price.

Imagine if there were patents for the telephone, or the TV, or cars (I mean, the concept of a car), or “talking to another person over the phone”, or “reminding yourself to call your parents once in a while”. It may look silly, but this is better than most patent descriptions! Most of the cost to the consumer would be patent payments to people who no longer innovate! Did you know that Microsoft makes more money from Android phones than Google does? Their contribution to the platform? Nothing. This was an agreement over dubious and silly patents that most companies accepted rather than being sued for billions of dollars.

Conclusion

In my opinion, we can’t just live in the 16th century with 21st century technology. You can’t expect to be famous or profit by building an in-house piece of junk or by spotting a new planet. Open source has nothing to do with it. The problem is not what you do with your code, but how you approach the market.

I don't want to profit at the expense of others, and I don't want to protect my stupid idea that anyone else could have had (or probably already had, but thought was silly), just because I was smart enough to market it. Difficult technology is difficult (duh), and it's not up to a team of experts to create it and market it just to make money. Science and technology will advance from now on in a steady, baby-steps way, and the tendency is for this pace to get even slower and the steps smaller.

Another important conclusion for me is that I'd rather live in a world where I cannot profit horrendously from a silly idea just because I've patented it, than have monopolies like pharma/banking/tobacco/oil/media controlling our governments or, more directly, our lives. I think that the fact that we copy and destroy property is the most liberating fact of humanity. It's the Robin Hood of modern societies, making sure that, one way or another, the filthy rich won't keep getting richer. Explosive growth, monopolies, cartels, free trade and protection of property are core values that I'd rather see dead as a parrot.

In a nutshell, open source does not hinder innovation, protection of property does.


Science vs. Business
July 30th, 2011 under Computers, Corporate, OSS, Politics, rengolin, Science. [ Comments: none ]

Since the end of the dark ages, and the emergence of modern capitalism, science has been connected to business, in one way or another.

During my academic life and later (when I moved to business), I saw the battle between those who would only do pure science (with government funding) and those who would mainly do business science (with private money). There were only a few in between the two groups, and most of them argued that it was possible to use private money to promote and develop science.

For years I believed that it was possible, and in my book the title of this post wouldn't make sense. But as I dove deeper into the business side, each step taking me closer to business research than before, I realised that there is no such thing as business science. Profit is such a fundamental aspect of capitalism that it makes it so.

Copy cats

Good mathematicians copy, the best mathematicians steal. The three biggest revolutions in computing during the last three decades were the PC, Open Source and Apple.

The PC revolution was started by IBM (with open platforms and standard components), but it was really driven by Bill Gates and Microsoft, and that's what generated most of his fortune. However, it was a great business idea, not a great scientific one, as Bill Gates copied from a company the size of a government, IBM. His business model's return on investment was instantaneous and gigantic.

Apple, on the other hand, never made much money (not as much as IBM or Microsoft) until recently with the iPhone and iPad. That is, I believe, because Steve Jobs copied from a visionary, Douglas Engelbart, rather than a business model. His return on investment took decades and he took one step at a time.

However, even copying from a true scientist, he had to have a business model. It was impossible for him to open the platform (as MS did), because that was where all the value was located: Apple's graphical interface (with the first Macs), the mouse, etc, all blatantly copied from Engelbart. They couldn't control the quality of the software for their platform (they still can't today on the App Store), so they opted to do everything themselves. That was the business model getting in the way of a true revolution.

To this day, Apple tries to build the coolest system on the planet, only to fall short because of the business model. The draconian methods Microsoft used on competitors, Apple uses on its customers. Honestly, I don't know which is worse.

On the other hand, Open Source was born as the real business-free deal. But its success has nothing to do with science, nor with being business-free. Most companies that profit from open source do so by exploiting the benefits and putting little back. There isn't any other way to turn open source into profit, since profit is basically gaining more than you spend.

This is not all bad. Most successful open source systems (such as Apache, MySQL, Hadoop, GCC, LLVM, etc) are successful because big companies (like Intel, Apple, Yahoo) put a lot of effort into them. Managing the private changes is a big pain, especially if more than one company is a major contributor, but it's more profitable than putting everything into the open. Getting that balance right is what boosts, or breaks, those companies.

Physics

The same rules also apply to other sciences, like physics. The United States is governed by big companies (oil, weapons, pharma, media) and not by its own government (which is only a puppet for the big companies). There, science is mostly applied to those fields.

Nuclear physics was only developed at such a fast pace because of the bomb. Lasers, nuclear fusion and carbon nanotubes are mostly developed with military funding, or via the government, for military purposes. Computer science (both hardware and software) is mainly done in big companies and with a business background, so again, not real science.

Only the EU, a less business-oriented government (but still, not that much less), could spend a gigantic amount of money on the LHC at CERN to search for a mere boson. I still don't understand what the commercial applicability of finding the Higgs boson is, and why the EU agreed to spend such money on it. I'm not yet ready to accept that it was all in the name of science…

But while physics has clear military and power-related objectives, computing, or rather social computing, has little to no such impact. Radar technologies, heavy-load simulations and prediction networks receive strong budgets from governments (especially the US and Russia), while other topics, such as how to make the world a better place with technology, have little or no space in either business or government sponsored research.

That is why, in my humble opinion, technology has yet to flourish. Computers today create more problems than they solve. Operating systems make our lives harder than they should, office tools are not intuitive enough for everyone to use, compilers always fall short of doing a great job, and the human interface is still dominated by the mouse, invented by Engelbart himself in the 60s.

Not to mention the rampant race to keep Moore's law going (in both cycles and profit) at the cost of everything else, most notably the environment. Chip companies want to sell more and more, making last year's chip obsolete and sending it to landfill, as there is no efficient recycling technology yet for chips and circuits.

Unsolved questions of the last century

Like Fermat's theorems, loads of the ideas computer scientists had last century, at the dawn of the computing era, are still unsolved. Problems that everybody tries to solve the wrong way, as if solving them were going to make that person famous, or rich. The most important problems, as I see them, are:

  • Computer-human interaction: How to develop an efficient interface between humans and computers that removes all barriers to communication and eases the development of effective systems
  • Artificial Intelligence: As in real intelligence, not mimicking animal behaviour, not solving subsets of problems. Solutions based on emergent behaviour, probabilistic networks and automata.
  • Parallel Computation: Natural brains are parallel in nature, yet computers are serial. Even parallel computers nowadays (multi-core) are only parallel up to a point, beyond which they go back to being serial. The serial barriers must be broken; we need to scrap the theory so far and think again. We need to ask ourselves: “what happens when I'm at the speed of light and I look into the mirror?”
  • Environmentally friendly computing: Most components on chips and boards are not recyclable, and yet they're replaced every year. Does the hardware really need to be more advanced, or is the software getting dumber and dumber, driving hardware complexity up? Can we use the same hardware with smarter software? Is the hardware smart enough to last a decade? Was it really meant to last that long?

All those questions are, in a nutshell, of a scientific nature. If you take the business approach, you'll end up with a simple answer to all of them: it's not worth the trouble. It is impossible, in the short and medium term, to profit from any of those routes. Some of them won't generate profit even in the long term.

That's why there is no advance in those areas. Scientists who study such topics are alone, and most of the time they are trying to make money out of it (thus going the wrong way and not hitting the bull's eye). One of the gurus in AI at the University of Cambridge is a physicist, and his company doesn't do anything new in AI, but exploits a little effort in old-school data mining to generate profit.

They do generate profit, of course, but does it help to develop the field of computer science? Does it help tailor technology to better ourselves? To make the world a better place? I think not.


Fool me once, shame on you… fool me twice, shame on me (DBD)
October 23rd, 2010 under Computers, Corporate, Digital Rights, Hardware, Media, OSS, rengolin, Software, Unix/Linux. [ Comments: 4 ]

Defective by Design came out with a new story on Apple's DRM. While I don't generally re-post from other blogs (LWN already does that), this one is special, though not for the apparent reasons.

I agree that DRM is bad, not just for you but for business, innovation, science and the evolution of mankind. But that's not the point. What Apple is doing with the App Store is not just locking other applications out of their hardware, but locking their hardware out of the real world.

In the late 80s and early 90s, all hardware platforms were like that, and Apple was no exception. Amiga, Commodore, MSX and dozens of others: each was a completely separate machine, with a unique chipset, architecture and software layers. But that never stopped people writing code for them, putting it on a floppy disk and installing it on any compatible computer they could find. Computer viruses spread that way too, given how easy it was to share software in those days.

Ten years later, there were only a handful of architectures: Intel for PCs, PowerPC for Macs and a few others for servers (Alpha, Sparc, etc). The consolidation of the hardware was happening at the same time as the explosion of the internet, so not only did more people have the same type of computer, but they also shared software more easily, increasing the quantity of software available (and viruses) by orders of magnitude.

Linux had been riding this wave since its beginning, and that was probably the most important factor in why such an underground movement got so much momentum. It was considered subversive, anti-capitalist, to use free software, and those people (including me) were hunted down like communists and ridiculed as idiots with no common sense. Today we know how ridiculous that was: most companies and governments use Linux, and it would be unthinkable today not to use it for what it's good at. But it's not for everyone, nor for everything.

Apple’s niche

Apple always had a niche, and they were really smart never to leave it. Companies like Intel and ARM are trying to get out of their niches and attack new markets, to maybe savage a section of the economy they don't have control over. Intel is going small, ARM is going big, and both will get hurt. Who gets hurt more doesn't matter; what matters is that Apple never went to attack other markets directly.

Ever since the beginning, Apple's ads were along the lines of “be smart, be cool, use Apple”. They never said their office suite was better than Microsoft's (as MS does with Open Office), or that their hardware support was better (as MS does with Linux). Once you directly compare your products with someone else's, you're bound for trouble. When Microsoft started comparing their OS with Linux (late 90s), the community fought back showing all the areas in which Windows was very poor, and businesses and governments started doing the same, and that was a big hit on Windows. Apple never did that directly.

By always staying on the sidelines, Apple was the different one. In their own niche, there was no competitor. Windows and Linux never entered that space, not even today. When Apple entered the mobile phone market, they didn't take market share from anyone else, they made a new market for themselves. People who bought iPhones didn't want to buy anything else; they had only done so because there was no iPhone at the time.

Android mobile phones are widespread, growing faster than anything else, taking Symbian phones out of the market and destroying RIM's hegemony, but rarely touching the iPhone market. Apple fan-boys will always buy Apple products, no matter the cost or the lower quality of the software and hardware. Being cool is more important than any of that.

Fool me once again, please

Being an Apple fan-boy is hard work. Whenever a new iPhone is out, the old ones disappear from the market and you're outdated. Whenever a new MacBook arrives, the older ones look so out-dated that all your (fan-boy) friends will know you're not keeping up. If creating a niche to capture people's naivety and profit from it is fooling them, then Apple has been fooling those same people for decades, and they won't stop now. That has made them the second biggest company in the world (losing only to an oil company), and nobody can argue with that fact.

iPhones have lesser hardware than most of the new Android phones, less functionality, less compatibility with the rest of the world. The new MacBook Air has an Intel chip several years old, lacks connectivity options and, in a short time, won't run Flash, Java or anything else Steve Jobs dislikes when he wakes up from a bad dream. But that doesn't affect the fan-boys a bit. See, back in the days when Microsoft had fan-boys too, they were completely oblivious to the horrendous problems the platform had (viruses, bugs, reboots, memory hogging etc) and they would still mock you for not being in their group.

It's the same with Apple fan-boys and always has been. I had an Apple ][, and I liked it a lot. But when I saw an Amiga I was baffled. I immediately recognised the clear superiority of the architecture. The sound was amazing, the graphics were impressive and the games were awesome (all that mattered to me at that time, tbh). There was no comparison between an Amiga game and an Apple game at that time and everybody knew it. But Apple fan-boys were all the same, and there were fights in BBSs and meetings: Apple fan-boys on one side, Amiga fan-boys on the other, and the pizza would be gone long before the discussion cooled down.

Nice little town, invaded

But today, reality is a bit harder to swallow. There is no PowerPC, or Alpha, or even Sparc now. With Oracle owning Sparc's roadmap, and following what they are doing to Java and OpenOffice, I wouldn't be surprised if Larry Ellison one day woke up and decided to burn everything down. Now, there are only two major players from the small to the huge markets: Intel and ARM. With ARM only present at the small end and below, that leaves Intel with all the rest.

MacOS is no longer an OS per se. Its underlying sub-system is based on (or ripped off from) FreeBSD (a robust open source unix-like operating system). As it happens, FreeBSD is so similar to Linux that it's not hard to re-compile Linux applications to run on it. So, why should it be hard to run Linux applications on MacOS? Well, it's not, actually. With the same platform and a very similar sub-system, re-compiling a Linux application for the Mac is a matter of finding the right tools and libraries; everything else follows the natural course.

Now, this is dangerous! Windows has the protection of being completely different, even on the same platform (Intel), but MacOS doesn't, and there's no way to keep the penguin's invasion at bay. For the first time in history, Apple has opened its niche to other players. In Apple terms, this is the same as killing itself.

See, capitalism is all about keeping control of the market. It's not about competition or innovation, and it's clearly not about re-distribution of capital, as the French suggested in their revolution. Although Apple never fought Microsoft or Linux directly, they had their market well under control, and that was the key to their success. With very clever advertising and average-quality hardware, they managed to build an entire universe of their own and attract a huge crowd that, once in, would never look back. But now that bubble has been invaded by the penguin commies, and there's no way for them to protect that market as they've done before.

One solution to rule them all

In a very good analysis of the Linux “dream”, this article suggests that it is dead. If you look at Linux as if it were a company (following the success of Canonical, I'm not surprised), the author has a point. But Linux is not Canonical, nor a dream, and it's definitely not dead.

In the same vein, you could argue that Windows is dead. It hasn't grown for a while, and Vista destroyed confidence and moved more people to Macs and Linux than ever before. In the same way, more than 10 years ago, a common misconception among Microsoft's fan-boys was that the Mac was dead. Its niche was too small, the hardware too expensive and incompatible with everything else. Windows is in the same position today, but it's far from dead.

But Linux is not a company, and it doesn't fit the normal capitalist market analysis. Remember that Linux hackers are commies, right? It's an organic community; it doesn't behave like a company or anything capitalism would like to model. This is why it has been wrongly predicted so many times (Linux is dead, this is the year of Linux, Linux will kill Windows, Mac is destroying Linux and so on). All of this is pure bollocks. Linux growth is organic, not exponential, not bombastic. It won't kill other platforms. It never has and never will. It will, as it has done so far, assimilate and enhance, like the Borg.

If we had had Linux in the French revolution, the people would have had a better chance of getting something out of it, rather than leaving all the glory (and profit) to the newly founded bourgeois class. Not because Linux is magic, but because it embraces change, expands the frontiers and exposes the flaws in the current systems. That alone is enough to keep the existing software in constant check, which is vital to software engineering and will never end. Linux is, in a nutshell, what's driving innovation on all other software fronts.

Saying that Linux is dead is the same as saying that generic medication is dead because it doesn't make a profit or hasn't taken over big pharma's markets. It simply is not the point, and it only shows that people still have the same mindset that put Microsoft, Yahoo!, Google, IBM and now Apple where they are today: all afraid of the big bad wolf, which is not big, nor bad, and has nothing to do with a wolf.

This wolf is, mind you, not Linux. Linux and the rest of the open source community (and Google, I give them that) are just the only players that are not afraid of that wolf, though, according to business analysts, they should be, in order to play nice with the rest of the market. The big bad wolf is free content.

Free, open content

Free as in freedom is dangerous. Everybody knows what happens when you post on Facebook about your boss being an ass: you get fired. The same would happen if you said it out loud at a company lunch, wouldn't it? Running random software on your machine is dangerous; everybody knows what can happen when viruses invade your computer, or rogue software starts stealing your bank passwords and personal data.

But all systems now are very similar, and the companies of today are still banging their heads against the same wall as 20 years ago: lock down the platform. 20 years ago that was quite simple, and actually just a reflection of how any computer was built. Today, it has to be done actively.

It's very easy to rip a DVD and send it to a friend. Today's broadband speeds allow you to do that quite fast, indeed. But your friend hasn't paid for it, and the media companies felt threatened. They created DRM. Intel has just acquired McAfee to put security measures inside the chip itself. This is the same as DRM, but at a much lower level. Instead of dealing with the problem, those companies are actually delaying the solution and only making the problem worse.

DRM is easily crackable. It has been shown over and over that no DRM (software or hardware) has so far resisted the will of the people. There are far more ingenious people outside the companies that do DRM than inside, therefore it's impossible to come up with a solution that will fool all outsiders, unless they hire them all (which will never happen) or kill them all (which could happen, if things keep going at the same pace).

Unless those companies start looking at the problem as the new reality, and create solutions to work in this new reality, they won't make any money out of it. DRM is not just bad, it's very costly and it hampers progress and innovation. It kills what capitalism loves most: profit. Take all the money spent on DRM schemes that were cracked a day later, all the money the RIAA spent on lawsuits, all the trouble of creating software solutions to lock all users in, and the drop-out rate when some better solution appears (see Google vs. Yahoo), and you get the picture.

Locked down society

Apple's first popular advertisement was the one mocking Orwell's 1984 and how Apple would break the rules by bringing something completely different that would free people from the locked-down world they lived in. Funny, though, how things turned out…

Steve Jobs says that Android is a segmented market, and that Apple is better because it has only one solution to every problem. They said the same thing about Windows and Linux: that segmentation is what's driving their demise, that everybody should listen to Steve Jobs and use his own creations (one for each problem), and that the rest was just too noisy, too complicated for really cool people to use.

I don't know about you, but to me that sounds exactly like Big Brother's speech.

With DRM and control of the App Store, Apple has total freedom to put in, or take out, whatever they want, whenever they want. It has happened and will continue to happen. They never put Flash on iPhones, not for any technical reason, but just because Steve Jobs doesn't like it. They're now taking Java out of the Mac “experience”, again, just for kicks. Microsoft at least put .NET and Silverlight in place; Apple simply takes things out, with no replacement.

Oh, how the Apple fan-boys like it. They applaud, they defend it with their lives, even with no knowledge of why, nor even whether there is any reason for it. They just watch Steve Jobs' speeches and repeat them, word for word. There is no reason, and those people sound dumber by the day, but who am I to say so? I'm the one outside the group, I'm the one who has no voice.

When that happened with Microsoft in the 90s, it was hard to take. The numbers were more like 95% of them and 1% of us, so there was absolutely no argument that would make them understand the utter garbage they were talking about. But today, Apple's market is still not big enough: the Apple fan-boys are indeed making Apple the second biggest company in the world, but they still look like idiots to the other 50+% of the world.

Yahoo!’s steps

Yahoo has shown us that locking users down, stuffing them with ads and completely ignoring the upgrade of their architecture for years is not a good path. But Apple (as Yahoo did) thinks they are invulnerable. When Google exploded with their awesome search (I was on Yahoo's search team at the time), we had a shock. It was not just better than Yahoo's search, it really worked! Yahoo was afraid of being the copy-cat, so they started walking down other paths, and in the end it never really worked.

Yahoo, which started as a search company, now runs Microsoft's lame search engine. This is, for me, the ultimate proof that they failed miserably. The second biggest thing Yahoo had was email, and Google does it better. Portals? Who needs portals when you have the whole web at your fingertips with Google search? In the end, Google killed every single Yahoo business, one by one. Apple is following the same path, locking themselves out of the world, just waiting for someone to come along with a better and simpler solution that actually works. And they won't listen, not even when it's too late.

Before Yahoo! there was IBM. After Apple there will be more. Those that don't accept reality as it is, that stick with their old ideas just because they have worked so far, are bound to fail. Of course, Steve Jobs has made all the money he could, and he's not worried. Neither are David Filo or Jerry Yang, Bill Gates or Larry Ellison. And this is the crucial part.

Companies fade because great leaders fade. Communities fade when they're no longer relevant. The Linux community is still very much relevant and won't fade any time soon. And, by its metamorphic nature, it's very likely that the free, open source community will never die.

Companies had better get used to it, and find ways to profit from it. Free, open content is here to stay, and there's nothing anyone can do to stop it. Being dictators is not helping the US patent and copyright system, not helping Microsoft or Intel, and it definitely won't help Apple. If they want to stay relevant, they had better change soon.


The Ubuntu Way
May 16th, 2010 under OSS, rengolin, Software, Unix/Linux. [ Comments: 2 ]

It's been five years now since I switched from Debian to Ubuntu, primarily for the updated software and the radical changes in the user interface, and a few things have been constant all this time. On Debian, I always used the unstable branch; it was the obvious choice for the non-mission-critical desktop environment I needed. But even unstable lacked a bit of risk-taking, which sometimes forced me to compile (or download binaries for) applications myself, working around the package-management system.

With Ubuntu, it's the exact opposite. The ongoing lack of support for nVidia and ATI boards, PulseAudio and the new Plymouth splash screen are good examples of major failures in deploying technology that is still too young to be in a distribution, especially a Long-Term-Support one. The recent rumours about replacing Firefox with Chrome point to an even more critical change, since the whole community around Firefox (add-ons, plug-ins, bookmarklets, etc.) cannot easily be migrated to Chrome or any other major browser. But this is all part of the Ubuntu Way.

Identity

Ubuntu, like many other Linux distributions (especially Debian), has built its identity around the OS that most of its users share. It's organic, and grows with time and feedback from the users, together with the directions the "board" takes on what goes in and what goes out. The original Linux community (back in the mid-90s) was fairly homogeneous in that respect, with most distributions being yet-another-collection-of-packages, be it RPM, DEB, tarballs or anything else. With time, strong feelings pulled some distributions apart and specialised others. Debian, for instance, became overly preoccupied with licensing issues (nothing other than open source was allowed), while RedHat became more enterprise-focused, flooded with third-party libraries, commercial products and a licensing scheme that looked more like Microsoft's than anything else.

Still, within the Debian community, some people (like me) thought that the release schedule was too slow and the licensing policy too narrow to produce a really compelling desktop replacement for commercial systems like MacOS. Indeed, after a few releases, Ubuntu has shown that it can replace them for most uses and most users. I, as a Linux user of so many years, welcomed the ease of use of a MacOS without the lock-downs and lame packaging systems.

But they went further and decided to be very (very) much like Apple. Initially, the Linux way was to offer everything available for every purpose: dozens of instant messengers, browsers, picture viewers, consoles, etc., all installed by default (or picked from a selection of thousands of packages during installation), which was a major pain. More recently, Ubuntu has provided an installation process easier than Windows' or MacOS's, with only one default option for each application type. That is what has become the Ubuntu Identity.

Taking Risks

To keep that identity, and still progress as fast as they (and I) would like, one has to take risks. I have to say that, for the most part, they have been right on the spot. Some failures (as mentioned) are bound to happen, and you are left with the consequences of those risks. For a company with such a tight budget (and such high expectations), there is little they can do differently. With a bigger budget they could spend more time adapting the proprietary graphics drivers and the update system (which never works on fine-tuned machines), but they don't have one. And based on how upgrades work on Windows and MacOS (i.e. they don't), I'm not surprised by Canonical's failures.

I like Firefox, ALSA and Pidgin, but if the overall experience is more stable (and complete) with Empathy, Chrome and PulseAudio, so be it. We're past the time of complaining about personal preferences; a wider viewpoint matters more. I'm too old to rant about how pitiful the new splash screen looks with the ATI proprietary drivers for the time being; I just want to install and run. As long as my VIM is working and there is a browser and an IM to use, I'm happy. I don't care that Gimp is not included by default; I do dislike that GCC is not, but I understand the reasons and always install it first thing on a new system.

That's the Ubuntu identity, and those are the risks Canonical takes to move the desktop experience forward. As we Debian unstable users used to say, that's the risk of living on the edge…

Upgrades never work

So, I stated that upgrades never work on fine-tuned machines, and that has been my experience to this day. In the beginning, I thought it was because Ubuntu was still immature, but today I had to roll back the Lucid installation I did yesterday because of major incompatibility issues, mainly with the ATI proprietary graphics driver (splash screen and resume from sleep).

So far, the only way I can upgrade Ubuntu is to install a completely new copy every time and apply the backed-up configuration changes manually once it's done. It may seem like a lot of work, but every time I try to upgrade, I end up installing from scratch and applying the few manual tweaks afterwards anyway. Now that I know exactly what I have to change and where (after years of doing it), it takes me roughly 15 minutes to customize.

My setup now requires zero maintenance, little backup disk space and an easy installation process. The magic is simple.

Preparation

This is something I recommend for any system, Linux, Windows or MacOS: split the disk into at least two partitions. One, around 50-80 GB, for your system, preferably the first (primary) partition. The other(s), taking up the rest of the hard drive, for your data/home directories. On Linux, of course, reserve some space at the end for swap (4 GB is more than enough, even if you have that much RAM or more). Swap is a safety measure, not something to be used under normal circumstances.
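If you like to script such checks, here is a minimal sketch (Linux-only, purely illustrative, nothing Ubuntu-specific) that verifies whether /home sits on its own partition and whether any swap is configured. It only reads /proc, so it is safe to run anywhere:

```python
#!/usr/bin/env python3
# Sanity-check the layout described above: separate /home partition and some swap.
# Linux-only sketch; it just reads /proc and prints what it finds.

def mount_points():
    with open("/proc/mounts") as f:
        # each line: device mountpoint fstype options dump pass
        return {fields[1]: fields[0] for fields in (line.split() for line in f)}

mounts = mount_points()
root_dev = mounts.get("/")
home_dev = mounts.get("/home")

print("root partition:", root_dev)
print("/home on its own partition:", home_dev is not None and home_dev != root_dev)

with open("/proc/swaps") as f:
    # the first line is a header; any further lines mean swap is configured
    print("swap configured:", len(f.readlines()) > 1)
```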

Daily Usage

Back up your home directory often, including personal configuration files, IM history, panel short-cuts, everything. Apart from your data, the rest might cause some complications when upgrading the user environment (Gnome, KDE), but that's minor and easily overcome. It will help you in case things go awry in your update/replace process. A cron job, or a script you invoke manually, is recommended for that.
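A minimal sketch of such a backup script, in Python for illustration (the destination path is my own example, and rsync must be installed):

```python
#!/usr/bin/env python3
# Minimal home-directory backup sketch. Paths are examples only; adjust to taste.
# Run it from cron (e.g. daily) or by hand before an upgrade.
import os
import subprocess

HOME = os.path.expanduser("~")
DEST = "/media/backup/home"  # assumption: a mounted backup disk

# rsync copies only what changed; --delete mirrors removals too.
# Hidden files (IM history, panel short-cuts, dot-files) are included by default.
subprocess.run(["rsync", "-a", "--delete", HOME + "/", DEST + "/"], check=True)
```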

Also, remember to back up (manually, by copying) every system configuration file you change. Since most configuration on Linux lives in text files, that part is very easy. It should be done manually because, since it's so simple (you shouldn't be changing that many configuration files), you can make a detailed comparison between what's there and what you want to replace or add. This will be important for your post-update process.
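Along the same lines, a small script can keep copies of the handful of system files you have touched and print a diff after a re-install. The file list and backup path below are examples only, not a recommendation:

```python
#!/usr/bin/env python3
# Sketch: keep copies of the few system files you have changed and, after a
# re-install, show what differs between the fresh file and your saved copy.
import difflib
import pathlib
import shutil

CHANGED = ["/etc/fstab", "/etc/hosts"]        # the files you actually edit
BACKUP = pathlib.Path("/media/backup/etc")    # assumed backup location

BACKUP.mkdir(parents=True, exist_ok=True)
for name in CHANGED:
    live = pathlib.Path(name)
    saved = BACKUP / live.name
    if not saved.exists():
        shutil.copy2(live, saved)             # first run: just take a copy
        continue
    # later runs (e.g. after a re-install): print a diff to merge by hand
    diff = difflib.unified_diff(
        saved.read_text().splitlines(),
        live.read_text().splitlines(),
        fromfile=str(saved), tofile=str(live), lineterm="",
    )
    print("\n".join(diff))
```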

Additionally, any non-essential data can be moved to a shared disk (with appropriate backups), accessible over the network. That way, not only do you avoid backing up all your data (photos, videos, documents) on every install (which could take days), but it also remains available from other computers while you upgrade your machine, so you can keep working on it as soon as your machine is ready.

Upgrade

Upgrades never work, especially if you have changed the configuration. Some components evolve and can't read old configurations properly, new components won't read other components' configurations, and migration scripts never work properly on modified files. What's worse, since Ubuntu has its own identity, new components will work better (or only work) with other new components. So the integration between the new components and your old, modified ones will most likely fail silently. PulseAudio is the best example of that conflict.

To upgrade, simply re-install the new version from CD (USB, or whatever) into the OS partition. So far, they have managed to make moving to new components pretty easy, as long as you discard your old ones: Empathy imports Pidgin accounts (and history), and all the basic systems are properly configured on a fresh install. Since wireless network passwords, panels, personal short-cuts and other settings are stored in your home directory, you just have to log in to see your old desktop, just the way it was.

The few things that aren't installed (like GCC, VIM, gstreamer plugins) can easily be added if you keep a list of the packages you always install in a file in your home directory, such as the build-essential and ubuntu-restricted-extras bundles. VPN, printer and share configurations can easily be copied over from your backup as soon as you have installed, and an apt-get upgrade brings in everything released since the CD was made.
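Keeping that list machine-readable makes the step trivial. A hedged sketch (the ~/packages.txt file name is my own choice, not an Ubuntu convention):

```python
#!/usr/bin/env python3
# Sketch: after a fresh install, put back the packages you always use.
# Assumes a one-package-per-line list in ~/packages.txt,
# e.g. build-essential, vim, ubuntu-restricted-extras.
import os
import subprocess

with open(os.path.expanduser("~/packages.txt")) as f:
    packages = [line.strip() for line in f
                if line.strip() and not line.startswith("#")]

subprocess.run(["sudo", "apt-get", "update"], check=True)
subprocess.run(["sudo", "apt-get", "install", "-y"] + packages, check=True)
```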

Roll back

The best part of this strategy is that roll-backs are extremely easy. You can't roll back a dist-upgrade using apt, but you can safely re-install the previous CD if the new release breaks things so badly it becomes unusable. The new Ubuntu, for example, is still bad with the proprietary graphics drivers, and the open source ones are not nearly as good. So I just rolled back and will wait until it stabilises.

Instabilities occur most often in Long-Term-Support releases (like the current one). It might seem odd, but it's pretty simple: they commit to three to five years of support, so they must pick new software that will last that long. The lifetime of open source projects is not great (still longer than many commercial products), but a five-year commitment to software that is already five years old is a big risk. The Ext3 and Ext4 filesystems are a good example of this.

So, instead of shipping the stable components, they radically change the interface and sub-systems and wait for them to stabilise, hoping that being in a production release will push developers to speed up the fixes. While not optimal for users, it's more or less the only way they can go without breaking the promise of support when an application dies. This is also why enterprise Linux is so expensive: companies require stability as well as support, and ultimately the distribution vendors have to maintain some of those dead applications for years, if not decades.

Not only is rolling back easy, but so is changing distribution entirely. Since your data is distribution-agnostic (Linux-centric, not package-system-centric), you can re-install virtually any other Linux distribution, as many times as you want, and keep the same look and feel.

Conclusion

In summary, it might look more complicated to use and maintain, but it's not. Once your setup is done (partitions, backup scripts), the rest is easy and quick. So far, I have stubbornly tried to upgrade every release (since 7.04) to check whether it is still harder than re-installing, and it has been the case every time.

Also, if you have nVidia or ATI graphics boards, never upgrade less than a month after the release is out. I recommend upgrading at least two or three months later (mid-release), by which time most vendors will have updated their drivers to match the new Ubuntu Way.

Lastly, as I normally fine-tune my computers, I have never had a successful operating-system migration to this day. I always try to upgrade, when available, and end up re-installing everything. That has been true of DOS, Linux and Windows since 1990, and I doubt it'll change any time soon. It would take a genuinely intelligent installation process (which our computers cannot yet run) for that to happen.

In the far future, it lies, then.


Barrelfish
March 18th, 2010 under Algorithms, Devel, Distributed, OSS, rengolin, Software. [ Comments: none ]

Minix seems to be inspiring more operating systems nowadays. Microsoft Research is investing in a micro-kernel (they call it a multi-kernel, as there are slight differences) called Barrelfish.

Despite being Microsoft, it's BSD-licensed. The mailing list looks pretty empty, the last snapshot is half a year old and I couldn't find an SVN repository, but that's still more than I would expect from Microsoft anyway.

Multi-kernel

The basic concept is actually very interesting. The idea is to take multi-core, hybrid machines to the extreme and still run a single OS across them, pretty much the same way some cluster solutions (OpenMPI, for instance) do, but on a single machine. The idea is far from revolutionary: it's a natural marriage of the multi-core trend, cluster solutions that have been available for years, and a fancy OS design (the micro-kernel) that everyone learns about in CS degrees.

What's the difference, then? For one thing, the idea is to abstract everything away. CPUs become just another piece of hardware, like network or graphics cards. The OS has the freedom to ask the GPU to do MP floating-point calculations, for instance, if it decides that will improve total execution time. It will also be able to accept different CPUs in the same machine, Intel and ARM for instance (like the Dell Latitude Z600), or different GPUs, nVidia and ATI, and still use all the hardware.

With Windows, Linux and Mac today, you use either the nVidia driver or the ATI one. You also normally don't have hybrid-core machines, and you absolutely can't recover if one of the cores fails. That is not the case with cluster solutions, and Barrelfish's idea is to replicate precisely that behaviour. In theory, you could do power management (enabling and disabling cores), crash recovery when one core fails but the others don't, or plug-and-play graphics and network cards and even different CPUs.

Imagine you have an ARM netbook that is great for browsing, but you want to play a game on it. You get your nVidia card and a core-octa 10 GHz USB4 processor and plug them in. The OS recognises the new hardware, loads the drivers and lets you play your game. Battery life goes down, so once you're done with the game, you just unplug the cards and continue browsing.

Scalability

So, how can Barrelfish be that malleable? The key is communication. Shared memory is great for single-process threaded code and acceptable for multi-process OSs with a small number of concurrent processes accessing the same region of memory. Most modern OSs can handle many concurrent processes, but those processes rarely access the same data at the same time.

Normally, processes are single-threaded or run a very small number of threads (dozens). More than that is so difficult to control that people usually fall back on other means, such as client/server architectures, or simply go out and buy more hardware. In clusters, there is no way to use shared memory: accessing memory on another computer over the network is just plain silly, and even if you use shared memory within each node and client/server between nodes, you're bound to run into trouble. This is why MPI solutions are so popular.

In Barrelfish there's no shared memory at all. Processes communicate with each other via messages and duplicate content (rather than sharing it). There is an obvious associated cost (memory and bus traffic), but the lock-free semantics are worth it. It also gives Barrelfish another freedom: to choose a communication protocol generic enough that each piece of hardware is completely independent of all the others, so plug-and-play becomes seamless.
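To make the contrast concrete, here is a toy illustration in plain Python (nothing to do with Barrelfish's actual API) of the message-passing style: workers exchange copied messages through queues instead of touching shared state, so no locks are needed.

```python
#!/usr/bin/env python3
# Toy message-passing example: the worker never shares memory with the parent;
# every request and reply is serialised and duplicated through a queue.
from multiprocessing import Process, Queue

def worker(inbox: Queue, outbox: Queue) -> None:
    # Each message arrives as a private copy, so no locking is required here.
    for item in iter(inbox.get, None):       # None is the shutdown signal
        outbox.put(item * item)

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    for n in range(5):
        inbox.put(n)                         # data is copied, not shared
    inbox.put(None)
    print([outbox.get() for _ in range(5)])  # [0, 1, 4, 9, 16]
    p.join()
```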

Challenges

It all seems fantastic, but there's a long road ahead. First, message passing scales much better than shared memory, but most machines today don't have enough cores to make it worthwhile. Message passing also introduces other problems that are not easily solved: bus traffic and storage requirements increase considerably, and messages are not that generic in nature.

Some companies are famous for not adhering to standards (Apple comes to mind), and a standard hardware IPC framework would be quite hard to achieve. Also, even with purely software IPC APIs, different companies will still use slightly modified versions to suit their specific needs, and complexity will rise exponentially.

Another problem is where the hypervisor will live. Having a distributed control centre is cool and scales amazingly well, but its complexity also scales. In a hybrid-core machine, you have to run different instruction sets, in different orders, with different optimisations and communication patterns. Choosing one core to deal with the scheduling and administration of the system is much easier, but it leaves a single point of failure.

Finally, going fully multi-hybrid-independent is way too complex, even for a multi-year project with plenty of funding (M$) and clever people working on it. After all, if the micro-kernel were really that useful, Tanenbaum would have won the argument with Linus. But the future holds what the future holds, and reality (as well as hardware and some selfish vendors) can change. A multi-kernel might become feasible, and even easier to implement, in the future.

This seems to be what the Barrelfish team is betting on, and I'm with them on that bet. Even if it fails miserably (as Minix did), some of its concepts could still end up in real-world operating systems (as Minix's did), whatever that will mean in 10 years. Taking parallelism seriously is the only way forward; sticking with 40-year-old concepts is definitely not.

I'm still holding out for non-deterministic computing, though, but that's an even longer shot…


2010 – Year of what?
January 29th, 2010 under Computers, Life, OSS, Physics, rengolin, Unix/Linux, World. [ Comments: 2 ]

Ever since 1995 I have heard the same phrase, and since 2000 I have stopped listening. 1995 was already the year of Linux for me, so why bother?

But this year is different, and Linux is not the only revolution in town… By the end of last year, the first tera-electronvolt collisions had been recorded at the LHC, bringing us closer to seeing (or not) the infamous Higgs boson. Now the NIF reports a massive 700 kilojoules delivered in a laser pulse lasting 10 billionths of a second which, if things stay on schedule, could lead us to cold fusion!!

The human race is about to put the final full stop on the Standard Model and achieve cold fusion by the end of this year; who cares about Linux?!

Well, for one thing, Linux is running all the clusters used to compute and maintain all those facilities. So, if it were up to Microsoft, we'd still be in the stone age…

UPDATE: More news on cold fusion


Linux is whatever you want it to be
November 5th, 2009 under OSS, rengolin, Software, Unix/Linux. [ Comments: 5 ]

Normally, Linux Magazine has great articles: impartial, informative and highly technical. Unfortunately, not always. In a recent article, some perfectionist zealot stated that Ubuntu makes Linux look bad. I couldn't disagree more.

Ubuntu is a fast-paced, fast-adapting Linux. I was one of the early adopters, and I have to say that most of the problems I had with the previous release have been fixed. Some bugs slipped through, of course, but they were reported and quickly fixed. Moreover, Ubuntu has support from hardware manufacturers such as Dell, and that makes a big difference.

Linux is everything

Linux is excellent for embedded systems, great for network appliances, wonderful for desktops, irreplaceable as a development platform, marvellous on servers and the only choice for real clusters. It also sucks when you have to find the configuration manually, it's horrible for newbies, it breaks whenever a new release is out, and it takes longer to get new software (such as Firefox), but it also helps a lot with package dependencies, something that neither Mac nor Windows has managed to do properly over the past decades.

Linux is as great as any piece of software could be, and as horrible as every operating system released since the beginning of time. Some Linux distributions are stable, others less so. Debian takes 10 years to release, and when it does, the software it contains is already 10 years old. Ubuntu tries to be a bit faster, but that obviously breaks a few things. If you're fast enough at fixing them, the early adopters will be pleased that they helped the community.

“Unfortunately what most often comes is a system full of bugs, pain, anguish, wailing and gnashing of teeth – as many “early” adopters of Karmic Koala have discovered.”

Like any piece of software, open or closed, free (as in beer) or paid, free (as in speech) or non-free, it takes time to mature. A real software engineer should know better: a system is only fully tested when it reaches the community, the user base. Google has used its own users (your granny too!) as beta testers for years, and everyone seems to understand that.

Debian zealots hate Red Hat zealots, and both hate Ubuntu zealots, who probably hate other zealots somewhere else. It's funny to see how greatly opinions about what Linux really is vary from one zealot clan to the other. All of them have great knowledge of what Linux is made of, but few seem to understand what Linux really is. Linux, or rather GNU/Linux, is a big bunch of software tied together by so many different points of view that it's impossible to state in less than a thousand words what it really is.

“Linux is meant to be stable, secure, reliable.”

NO, IT'S NOT! Linux is meant to be whatever you make of it; that's the real beauty. If Canonical thought it was ready to launch, it's because they thought that whatever bugs passed the safety net were safe enough for the users to catch and report, which we did! If you're not an expert, wait for the system to cool down. A non-expert won't be an "early adopter" anyway, that's for sure.

Idiosyncrasies

Each Linux has its own idiosyncrasies; that's what makes it powerful, and painful. The way Ubuntu updates/upgrades itself is particular to Ubuntu. Debian, Red Hat, Suse: all of them do it differently, and that's life. Get over it.

“As usual, some things which were broken in the previous release are now fixed, but things which were working are now broken.”

One pleonasm after another. There is no new software without new bugs; there is no software without bugs at all. What was broken before was known; what is new is unknown. How can someone fix something they don't know about? Eventually a user tested it, found it broken and reported it, and they fixed it! Isn't that simple?

“There’s gotta be a better way to do this.”

No, there isn’t. Ubuntu is like any other Linux: Like it? Use it. Don’t like it? Get another one. If you don’t like the way Ubuntu works, get over it, use another Linux and stop ranting.

Red Hat charges money, Debian has uber-stable decade-old releases, Gentoo is for those who have a lot of time on their hands, and so on. Each has its own particularities, and each is good for a different set of people.

Why Ubuntu?

I use Ubuntu because it's easy to install, use and update. The bug rate is lower than in most other distros I've used, and updates come faster and more reliably than in some others. It's a good balance for me. Is it perfect? Of course not! There are lots of things I don't like about Ubuntu, but that won't make me use Windows 7, that's for sure!

I have friends who use Suse, Debian, Fedora and Gentoo, and they're all about as happy as I am: not too much, but not too little. Each distro has its problems and solutions; you just have to choose the ones that are best for you.

