Open Source and Profit |
| July 8th, 2013 under Corporate, Devel, Digital Rights, OSS, rengolin, World. [ Comments: 2 ]
I have written extensively about free and open source software as a way of life, and reading back my own articles from the past 7 years, I realise I was wrong about some of the ideas, or about the state of the open source culture within and around businesses.
I’ll make a bold statement to start, to get you interested in reading past the introduction, and I hope to give you enough arguments to prove I’m right. Feel free to disagree in the comments section.
In the years to come, business and profit can only thrive if surrounded by free thought.
By free thought I mean free/open source software, open hardware, open standards, free knowledge (both free as in beer and free as in speech), and so on.
I began my quest to understand the open source business model back in 2006, when I wrote that open source was not just about software, but also about speech. Having open source (free) software is not enough when the reasons why the software is free are unclear. A synergy greater than the sum of the individual parts can only be achieved if people have the rights (and incentives) to reach out on every possible level, not just the source or the hardware. I made that clear later on, in 2009, when I exposed the problems of writing closed source software: there is no ecosystem on which to rely, so progress is limited and the end result is always less efficient, since the cost of making it as efficient is too great and would drive the price of the software too high to be profitable.
In 2008 I saw both sides of the story, for and against Richard Stallman, on the legitimacy of proprietary control, be it via copyright licenses or proprietary software. I may have come a long way, but I was never against his idea of the perfect society, Richard Stallman’s utopia, or as some friends put it: the Star Trek universe. The main difference between me and Stallman is that he believes we should fight to the last man to protect ourselves from the evil corporations and their software abuse, while I believe it’s impossible for them to sustain this empire for much longer. His utopia will come, whether they like it or not.
Finally, in 2011 I wrote about how copying (and even stealing) is the only business model that makes sense (Microsoft, Apple, Oracle etc. are all thieves, in that sense), and the number of patent disputes and copyright infringement cases should serve to prove me right. Last year I finally hit the epiphany, when I discussed all these ideas with a friend and came to the conclusion that I don’t want to live in a world where it’s not possible to copy, share, derive or distribute freely. Without the freedom to share, our hands are tied against oppression, and it might just be a coincidence, but in the last decade we’ve seen the biggest growth in both disproportionate property protection and disproportionate governmental oppression that the free world has ever seen.
Can it be different?
Stallman’s argument is that we should fiercely protect ourselves against oppression, and I agree, but after being around business and free software for nearly 20 years, I have so far failed to see a business model in which starting everything from scratch, in a secret lab, and releasing the product ready for consumption makes any sense. My view is that society partakes in an evolutionary process, ubiquitous and compulsory, in which it strives to reduce the cost of the whole process towards stability (even if only local), as much as any other biological, chemical or physical system we know.
So, to prove my argument that an open society is not just desirable, but the only final solution, all I need to do is show that it is the least-energy state of the social system. Open source software, open hardware and all systems where sharing is at the core should then be the least costly business models, forcing virtually all companies in the world to follow suit, and creating Stallman’s utopia as a result of natural stability, not a forced state.
This is crucial, because every forced state is non-natural by definition, and every non-natural state has to be maintained using resources that could otherwise enhance the quality of life of the individuals in the system (be they human or not, let’s not narrow our point of view this early). To achieve balance in a social system we have to let things go awry for a while, so that the arguments against such a state are perfectly clear to everyone involved, and no one can still argue that the current state is optimal. If there isn’t discomfort, there isn’t the need for change. Without death, there is no life.
Of all the bad ideas we humans have had on how to build a social system, capitalism is probably one of the worst, but it’s also one of the most stable, and that’s because it’s the closest to the law of the jungle, survival of the fittest and all that. Regulations and governments never came to protect the people, but to protect capitalism from itself, and to keep increasing the profit of the profitable. Socialism and anarchy rely too much on forced states, in which individuals have to be devoid of selfishness, a state that doesn’t exist in the current form of human beings. So, while they’re the product of amazing analyses of the social structure, they still require heavy genetic changes in the constituents of the system to work properly, in a stable, least-energy state.
Having fewer angry people on the streets is more profitable for the government (lower security costs, more international trust in the local currency, more investment, etc.), so panem et circenses will always be more profitable than any real change. However, with more educated societies, a result of the increasing profits of the middle class, more real changes will have to be made by governments, even if wrapped in complete populist crap. One step at a time, the population will get more educated, and you’ll end up with more substance and less wrapping.
So, in the end, it’s all about profit. If not using open source/hardware means things will cost more, the tendency will be to use it. And the more everyone uses it, the less valuable the products that don’t use it become, because the ecosystem in which applications and devices are immersed becomes the biggest selling point of any product. Would you buy a Blackberry application, or an Android application? Today, the answer is close to 80% for the latter, and that’s largely because hardly anyone uses the former at all.
It’s not just that Blackberry applications are more expensive to build, because the system is less open and the tools less advanced; the profit margins are also smaller, and the return on investment will never justify the cost. This is why Nokia died with its own app store: Symbian was not free, and there was a better, free and open ecosystem already in place. The battle had already been lost, even before it started.
But none of that was really due to moral standards, or Stallman’s bickering. It was only about profit. Microsoft dominated the desktop for a few years, long enough to make a stand and still be dominant after 15 years of irrelevance, but only because there was nothing remotely better when they started. However, when they tried to flood the server market, Linux was not only already relevant, it was better, cheaper and freer. The LAMP stack was already good enough, and the ecosystem was so open that it was impossible for anyone with a closed development cycle to even begin to compete on the same level.
Linux became so powerful that, when Apple redefined the concept of smartphones with the iPhone (beating Nokia’s earlier attempts by light-years of quality), the Android system was created, evolved and came to dominate in less than a decade. The power to share made it possible for Google, a non-device, non-mobile company, to completely outperform a hardware manufacturer in a matter of years. If Google had invented a new OS, not based on anything existing, or if they had closed the source, as Apple did with FreeBSD, they wouldn’t have been able to compete, and Apple would still be dominant.
Do we need profit?
So, the question is: is this really necessary? Do we really depend on Google (specifically) to free us from the hands of tyrant companies? Not really. If it wasn’t Google, it’d be someone else. Apple, for a long time, was the odd one out, and they created immense value for society: they gave us something to look for, they educated the world on what we should expect from mobile devices. But once that’s done, the shareable ecosystem learns, evolves and dominates. Not because Google is less evil than Apple, but because Android is more profitable than iOS.
Profit here is not just the return on investment you plan to have over a specific number of years; add to that the potential of what the evolving ecosystem will allow people to do once you’ve long lost control over it. Shareable systems, including open hardware and software, allow people far down the planning, manufacturing and distribution chain to still make a profit, regardless of your original intentions. One such case is Maddog’s Project Cauã.
By using inexpensive Raspberry Pis, fostering local development and production, and enabling the local community to use all that as a way of living, Maddog’s project harnesses the power of open source built by completely unrelated people to empower the people of a country that badly needs empowering. That new class of people, from this and other projects, is what is educating the population of the world, what is allowing people to fight for their rights, and the reason why so many civil uprisings are happening in Brazil, Turkey and Egypt.
All that creates instability, social unrest, whistle-blowing gone wrong (Assange, Snowden), and this is a good thing. We need more of it.
It’s only when people feel uncomfortable with how governments treat them that they’ll get up from their chairs and demand change. It’s only when people are educated that they realise oppression is happening (since there is a force driving us away from the least-energy state, towards enriching the rich), and it’s only when these states are reached that real change happens.
The more educated society is, the quicker people will rise up against oppression, and the closer we’ll be to Stallman’s utopia. So, whether governments and the billionaire minority like it or not, society will move towards stability, and that stability will settle in local minima. People will rest, and oppression will grow in an oscillatory manner until unrest happens again and throws us into yet another minimum.
Since we don’t want to stay in a local minimum (we want the best solution, not just a solution), getting it close to perfect on the first attempt is unlikely. But whether we get close the first time or not, the oscillatory nature of social unrest will not change, and nature will always find a way to get us closer to the global minimum.
Is it possible to stay in this unstable state for too long? I don’t think so. But it’s not going to be a quick transition, nor is it going to be easy, nor will we get it right on the first attempt.
But more importantly, reaching stability is not a matter of forcing ourselves to move towards a better society; it’s a matter of how dynamic systems behave when there are clear energetic state functions. In physical and chemical systems, this is energy; in biological systems, the ability to propagate; and in social systems, profit. As sad as it sounds…
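Purely as an illustration of the dynamics described above, and nothing more (the energy landscape, temperature and cooling schedule below are invented for the example, not drawn from any social model), simulated annealing shows how a system that occasionally accepts “worse” states, the unrest, can escape a shallow local minimum and settle near the global one:

```python
import math
import random

def energy(x):
    # Invented landscape: a shallow local minimum near x = -0.85
    # and a deeper global minimum near x = 2.35.
    return 0.5 * x**4 - x**3 - 2 * x**2 + 3

def anneal(x=-0.85, temp=5.0, cooling=0.999, steps=10000, seed=1):
    rng = random.Random(seed)
    for _ in range(steps):
        candidate = x + rng.gauss(0, 0.5)
        delta = energy(candidate) - energy(x)
        # Accept worse states with probability exp(-delta / temp):
        # the "unrest" that lets the system climb out of a local minimum.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = candidate
        temp *= cooling  # things "cool down" between upheavals
    return x

print(anneal())  # settles near the global minimum, around x = 2.35
```

Started in the shallow basin, the random jumps eventually carry the system over the barrier, and as the temperature drops it locks into the deeper one; without the uphill moves it would stay stuck where it began.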
Open Source and Innovation |
| September 13th, 2012 under Corporate, OSS, rengolin, Technology. [ Comments: 1 ]
A few weeks ago, a friend (Rob) asked me a pertinent question: “How can someone innovate and protect her innovation with open source?”. Initially I brushed it off with a simple “well, you know…”, but this turned out to be a really hard question to answer.
The main idea is that, in the end, all software (and possibly hardware) will end up as open source. Not because it’s beautiful and fluffy, but because that seems to be the natural course of things nowadays. We seem to be moving from profiting on products to giving them away and profiting on services. If that’s true, are we going to stop innovating altogether and just focus on services? What about the real scientists who move the world forward, are they also going to be flipping burgers?
Open Source as a business model
The reason to use open source is clear: the TCO fallacy is gone and we’re all used to it (especially the lawyers!). That’s all good, but the real question is what (or even when) to open source your own stuff. Some companies do it because they want to sell the added value, or plugins and services. Others do it because it’s not their core business, or because they want to form a community which would otherwise use a competitor’s open source solution. Whatever the reason, we seem to be open sourcing software and hardware at an increasing speed; sometimes it’s open source from its first day in the wild.
Open source is a very good cost-sharing model. Companies can develop a third-party product, not related to their core areas (where they actually make money), and still claim no responsibility or ownership (which would be costly). For example, the GNU/Linux and FreeBSD operating systems tremendously reduce the costs of any application developer, from embedded systems to big distributed platforms. Most platforms today (Apple’s, Androids, set-top boxes, sat-navs, HPC clusters, web servers, routers, etc.) have them at their core. If each of these products had to develop its own operating system (or even parts of one), it wouldn’t be commercially viable.
Another example is the MeshPotato box (in Puerto Rico), which uses open software and hardware initially developed by Village Telco (in South Africa). It can cover wide areas, providing internet and VoIP telephony over the rugged terrain of Puerto Rico for under $30 a month. If they had to develop their own hardware and software (including the OS), it’d cost no less than a few hundred pounds. Examples like this are abundant these days, and it’s hard to ignore the benefits of open source. Even Microsoft, once the biggest closed-source zealot, which propagated the misinformation that open source was hurting the American way of life, is now one of the biggest open source contributors on the planet.
So, what is the question then?
If open source saves money everywhere, and promotes incremental innovation that wouldn’t be otherwise possible, how can the original question not have been answered? The key was in the scope.
Rob was referring, in fact, to real, chunky innovations. Those that take years to develop, many people working hard with one goal in mind, spending their last penny to possibly profit in the end. The true sense of entrepreneurship. Things that might benefit from other open source technologies, but are so hard to make that they still take years to produce. Things like new chips, new medicines, real artificial intelligence software and hardware, etc. The open source savings on those projects are marginal. Furthermore, if you spend 10 years developing a piece of software (or hardware) and open source it straight away, how are you ever going to get your investment back? Unless you charge $500 a month in services to thousands of customers on day one, you won’t see the money back for decades.
The big misunderstanding, I think, is that this model no longer applies, so the initial question was invalid to begin with. Let me explain.
Science and Technology
300 years ago, if you were curious about something you could make a name for yourself very easily. You could barely call what they did science; they even called themselves natural philosophers, because what they did was mostly discovering nature and inquiring about its behaviour. Robert Hooke was a natural philosopher and a polymath; he kept dogs with their internals in the open just to see if they’d survive, and he kept looking at things through a microscope, naming most of the small things we can see today.
Newton, Leibniz, Gauss, Euler and a few others created the whole foundation of modern mathematics. They are known for fundamentally changing how we perceive the universe. It’d be preposterous to assume that there isn’t a person today as bright as they were, and yet we don’t see people changing our perception of the universe that often. The last spree was more than a hundred years ago, with Maxwell, Planck and Einstein, and even then they were corrections (albeit fundamental) to the model.
Today, a scientist is content with scratching the surface of a minor field in astrophysics, and will probably get a Nobel for it. But how many of you can name more than 5 Nobel laureates? Did they really change your perception of the universe? Did they invent things such as real artificial intelligence, or discover a better way of doing politics? Sadly, no. Not because they weren’t as smart as Newton or Leibniz, but because the easy things have already been discovered. Now we’re in for hard, incremental science and, like it or not, there’s no way around it.
Today, if you wrapped tin foil around a toilet paper tube and played music with it, people would, at best, think you’re cute. Thomas Edison did that and was called a wizard. Nokia was trying to build a smartphone, but they were trying to make it perfect. Steve Jobs made it almost useless, people loved it, and he’s now considered a genius. If you produce a bad phone today, people will laugh at you, not think you’re cute; things are getting harder for the careless innovators, and that’s the crucial point. Careless and accidental innovation is not possible in any field that has been exploited long enough.
Innovation and Business
Innovation is like business: you only profit if there is a market that hasn’t been taken. If you try to invent a new PC, you will fail. But if you produce a computer for a niche that has never been exploited (even if it’s a known market, as in Nokia’s smartphone case), you’re in for the money. If you want to build the next AI software, and it even marginally works, you can make a lot of money, whether you open source it or not. Since people will copy it (copyright and patent laws are not the same in every country), your profit will diminish with time, in proportion to the novelty and the difficulty of copying.
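That last claim can be put in toy-model form. The exponential shape and the numbers below are my own illustrative assumptions, not anything from the article or from economics: profit erodes over time as copiers enter, more slowly when the product is hard to copy.

```python
import math

def profit(t, initial=100.0, difficulty=5.0):
    """Toy model: yearly profit after t years, assuming copiers
    erode it exponentially, at a rate inversely proportional to
    how difficult the product is to copy."""
    return initial * math.exp(-t / difficulty)

# A hard-to-copy product keeps its margins far longer than an easy one.
hard = [round(profit(t, difficulty=10.0), 1) for t in (0, 5, 10)]
easy = [round(profit(t, difficulty=2.0), 1) for t in (0, 5, 10)]
print(hard)  # [100.0, 60.7, 36.8]
print(easy)  # [100.0, 8.2, 0.7]
```

Open sourcing changes the `difficulty` knob, not the shape of the curve: the decay happens either way, only faster or slower.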
Rob’s point went further: “This isn’t just a matter of what people can or can’t do, it’s what people should or should not do.” Meaning: shouldn’t we aim for a world where people don’t copy other people’s ideas as a matter of principle, instead of accepting the fact that people copy? My answer is a strong and resounding: NO! For the love of all that’s good, NO!
The first reason is simply that that’s not the world we live in, and it won’t be as long as humanity remains human. There is no point in creating laws that do not apply to the human race, though it seems people get away with that very easily these days.
The second point is that it breaks our society. An example: try to walk into a bank and ask for investment in a project that will take 10 years to complete (at a cost of $10M), with the return coming over the 70 years that follow (at a profit of hundreds of millions a year). The manager will laugh at you and call security. This is, however, the time it takes (today) for copyright in Hollywood to expire (the infamous Mickey Mouse effect), and the kind of money they deal with.
Imagine that a car manufacturer develops a much safer way of building cars, say magical air bags. This company will be able to charge a premium, not just for the development costs, but also for its unique position in the market. In time, its cars will save more lives than any other, and governments will want that to be standard. But no other company can apply it to their cars, at least not without paying a huge premium to the original developer. In the end, cars become much more expensive in general, and we end up paying the price.
Imagine if there were patents on the telephone, or the TV, or cars (I mean, the concept of a car), or on “talking to another person over the phone”, or “reminding yourself to call your parents once in a while”. It may look silly, but this is better than most patent descriptions! Most of the cost to the consumer would be patent fees to people who no longer innovate. Did you know that Microsoft makes more money from Android phones than Google does? Their contribution to the platform? Nothing. It was an agreement over dubious and silly patents that most companies accepted rather than being sued for billions of dollars.
In my opinion, we can’t just live in the 16th century with 21st century technology. You can’t expect to be famous or profit by building an in-house piece of junk or by spotting a new planet. Open source has nothing to do with it. The problem is not what you do with your code, but how you approach the market.
I don’t want to profit at the expense of others. I don’t want to protect my stupid idea that anyone else could have had (or probably already had, but thought it was silly), just because I was smart enough to market it. Difficult technology is difficult (duh), and it’s no longer just a matter of a team of experts creating it and marketing it to make money. Science and technology will advance from now on in a steady, baby-steps way, and the tendency is for this pace to get even slower and the steps smaller.
Another important conclusion, for me, is that I’d rather live in a world where I cannot profit horrendously from a silly idea just because I’ve patented it, than have monopolies like pharma/banking/tobacco/oil/media controlling our governments, or, more directly, our lives. I think the fact that we copy and destroy property is the most liberating fact of humanity. It’s the Robin Hood of modern societies, making sure that, one way or another, the filthy rich won’t keep getting richer. Explosive growth, monopolies, cartels, free trade and protection of property are core values that I’d rather see dead as a parrot.
In a nutshell, open source does not hinder innovation, protection of property does.
In the future… |
| February 17th, 2012 under Corporate, Life, Politics, rengolin, World. [ Comments: 1 ]
In the future, people will be able to project three-dimensional films using holograms. These holograms could be placed among us, rather than on a stage, to give us a much better sense of reality and emotion than is possible in a theatre or cinema.
When this technique becomes commonplace, it’ll be possible to use it in the classroom. Actors will re-enact events in history, and children will be able to live the moment, rather than just listen to stories. Teachers, then, will have a much more fundamental role: they will comment on what’s happening, rather than merely serve as narrators.
Holographic teaching has numerous advantages. Seeing the streets of London on fire in 1666, running for your life, is much more vivid than chalk traces on a blackboard. Seeing Jews suffering in German camps, being a Jew in a German camp (minus the physical harm, of course), gives us a much better tool to avoid this ever happening again, to anyone.
In the future, children will be able to live the credit crunch, the Syrian civil war, and how the international community both helped and provoked several conflicts in the Middle East and Africa. How people in the poorest parts of this world live without clean water or food, how their parents die of unimaginable diseases, and how the responsibility of raising a family falls on them by the age of 4.
Children won’t be listeners any more; they’ll live the moment, feel the pain, and learn that this is not acceptable, under any circumstances, for any living being: humans, animals, aliens.
However, you don’t have to wait for that glorious future to fix society. If things continue as they are, it is very likely that this future will never come to pass. If there is one constant in human history, it is the force of self-destruction. The more humans we have (we passed the 7 billion barrier long ago), the stronger this force is.
There are several ways any of us can help save the world. The single most important thing you can do is to teach your children that ruthless selfish behaviour is not accepted, that the ends don’t justify the means, and that people deserve the freedom to live and think on their own terms. Other things involve going to the most affected areas and working to revamp those cultures (not just bringing food and water), helping restructure their governments (on their own terms) and working with your own government to stop invasive manoeuvres and third-party destruction for its own benefit.
A simple start is to help Avaaz. They do most of the bureaucracy, they go into the countries, they empower people, they turn rogue legislation around and, more importantly, they warn you before it’s too late.
Signing up to their mailing list will give you a much better view of the world. You don’t have to donate money to help; just signing the petitions, showing you care, is already a good start. The best part is that they will always ask you what the next step should be: how much effort they should spend on this or that, and how much (and which) technology they should develop to help their – our – cause.
I’ve been following Avaaz for a few years now, probably since its foundation, and I have to say that they have surpassed my expectations not only in what they could do for the world, but also in clarity, openness and use of technology and resources. They’re not a charity, they’re an activist group, and a very good one at that. If you were looking for something to support to help change the world, Avaaz is a great start.
Eventually everyone wants to be AOL |
| January 25th, 2012 under Articles, Corporate, Media, Politics, rengolin, Web. [ Comments: none ]
After a good week battling against SOPA, it’s time to go back to real life, to battling our own close enemies.
As was reported over, and over, and over again (at least on this blog), Google is dragging itself towards becoming the giant dominant player, much like Yahoo! and AOL in previous times.
Lifehacker has a very good post on the same subject (from which the title of this post was deliberately taken), around Google+ and the new Search+ (or whatever they’re calling it), and how the giant is losing its steam and trying to solidify its market, where it’ll comfortably lie until the end of its days.
True, Google has a somewhat strong research department, and is working towards new TCP/IP standards, but much the same was done by Yahoo! in the past, towards FreeBSD, PHP and MySQL. Yahoo! actually hired top-notch BSD kernel hackers (like Paul Saab), MySQL gurus (like Jimi Cole and Zawodny) and the PHP creator, Rasmus Lerdorf. And they put a lot back into the community. But none of that is true revolution, only short reforms to keep themselves in power for a bit longer.
The issue is simple: Google doesn’t need to innovate as much as they did in the past, just as Yahoo! and AOL didn’t. Even Microsoft and Apple need to innovate more than Google, because they have to sell things. Software, hardware and services not only cost money and time, but they age rapidly, and it’s not hard to throw loads of money at a project that is born dead (like Vista). But Google gets its money for free (so to speak); its users are not paying a penny for its services. How hard is it to compete with that model?
Like Google, Yahoo! had the same comfort in their day. They had more users than anyone else, and that was the same as money. They did get money from ads, like Google, only not as efficiently. And that put them in a comfort zone that is hard not to get used to, which was their ultimate doom. This is why, after 25 or so years of failing, Microsoft is still a strong player. This is why Apple, after being in the shadows for more than 20 years, got to be the biggest tech company in the world. They must innovate at every turn.
Yahoo! displaced AOL and bought pretty much everyone else because they outsmarted the competition by doing the same thing, but cheaper and easier. Google repeated the same stunt on Yahoo!, and is beginning to age. How long will that last? When the next big thing appears, making money even more easily, Google will be a giant. An arrogant, slow and blind giant. And natural selection will take care of them as quickly as it took care of AOL and Yahoo!
Post-SOPA-protest, what’s on? |
| January 19th, 2012 under Corporate, Digital Rights, Life, Politics, rengolin, Web, World. [ Comments: 1 ]
So, the day has ended and we’ve seen many protests around the world. Did it help? Well, a bit, but don’t hold your breath right now.
European citizens are still being sued by the American government and extradited to the US because their sites had links to copyrighted material. So, in a way, what SOPA and PIPA stand for is already reality, but it takes the US government a lot of effort and money to do it. With SOPA and PIPA, anyone in the world could end up in Guantanamo Bay as easily as any American.
While I welcome the protest, and feel that Americans did a good job converting 30 more senators to their cause (it was 5, now it’s 35), it’s far from enough. I think people still haven’t realised that this is not an American issue. Just as American copyright laws have bankrupted creativity around the world (think of the Mickey Mouse effect) and the American patent system has stalled technological advancement (patent trolls, et al.), SOPA and PIPA will spread throughout the world and be the icing on their cake.
The people so desperate to preserve their profits by breaking the rest of the world are the people that already have more than anyone. Last year, Viacom’s CEO had a $50M raise in his salary. Not a bonus, mind you, a raise. To protect those people’s profits, we’re letting them destroy the entire world, stop technological advancements (that don’t bring profits to them) and kill all the artists in the process.
If you, like me, are outside the US, please make sure your government stops bending to the US government, as they always do. Europe, and particularly the UK and France, has been America’s puppet for far too long. The US is not the only country in the world and, nowadays, not even the most important one. We need to make the world multi-polar and promote countries like China, Russia, Brazil and India. Not that I like any of them, but we must not put all our coins into one crazy country; we need more crazy countries to re-balance the world.
Now, for some of the protests
Apart from the obvious Wikipedia, Google, WordPress, there were some others I’ve seen that are worth mentioning.
- FightForTheFuture had a very interesting video explaining the whole thing.
- Ars Technica published only SOPA/PIPA articles, and very good ones at that.
- Bruce Schneier, security legend, also joined the protests.
- Avaaz, always alert, started a petition and has already got more than 1.5 million signatures.
- People that live on content (in theory, the ones affected by piracy) also had their say: XKCD, Abstruse Goose, Basic Instruction.
- Last, but not least, The Daily WTF has a very interesting piece about how bad it already is, and supporting PIPA so we can go back to the BBS era that was much more comfortable!
It was not just that; some people actually went out on the streets (NY and SF), and it seems most senators’ phones and websites went down under the traffic. It’s working, but this is not the end, nor is this just about copyright. This is about freedom of thought, freedom to share, freedom to be a human being. Stopping SOPA/PIPA is just the first step; we need to undo most of what the media/war/oil/tobacco industries have done for the past 80 years, unless you like dictatorships, of course.
Science vs. Business |
| July 30th, 2011 under Computers, Corporate, OSS, Politics, rengolin, Science. [ Comments: none ]
Since the end of the dark ages, and the emergence of modern capitalism, science has been connected to business, in one way or another.
During my academic life and later (when I moved to business), I saw the battle between those that would only do pure science (with government funding) and those that would mainly do business science (with private money). There were only a few in between the two groups, and most of them argued that it was possible to use private money to promote and develop science.
For years I believed that it was possible, and in my book the title of this post wouldn’t make sense. But as I dove into the business side, each step taking me closer to business research than before, I realised that there is no such thing as business science. Profit, that most fundamental aspect of capitalism, makes it so.
Good mathematicians copy; great mathematicians steal. The three biggest revolutions in computing during the last three decades were the PC, Open Source and Apple.
The PC revolution was started by IBM (with open platforms and standard components), but it was really driven by Bill Gates and Microsoft, and that’s what generated most of his fortune. However, it was a great business idea, not a great scientific one: Bill Gates copied from a company the size of a government, IBM. His business model’s return on investment was instantaneous and gigantic.
Apple, on the other hand, never made much money (not as much as IBM or Microsoft) until recently, with the iPhone and iPad. That is, I believe, because Steve Jobs copied from a visionary, Douglas Engelbart, rather than from a business model. His return on investment took decades, and he took one step at a time.
However, even copying from a true scientist, he had to have a business model. It was impossible for him to open the platform (as MS did), because that was where all the value was: Apple’s graphical interface (with the first Macs), the mouse etc. (all blatantly copied from Engelbart). They couldn’t control the quality of third-party software for their platform (they still can’t today, on the AppStore), so they opted to do everything themselves. That was the business model getting in the way of a true revolution.
To this day, Apple tries to build the coolest system on the planet, only to fall short because of the business model. The draconian methods Microsoft used on competitors, Apple uses on its customers. Honestly, I don’t know which is worse.
On the other hand, Open Source was born as the real business-free deal. But its success has nothing to do with science, nor with that business-freeness. Most companies that profit from open source do so by exploiting the benefits and putting little back. There isn’t any other way to turn open source into profit, since profit is basically gaining more than you spend.
This is not all bad. Most successful open source systems (such as Apache, MySQL, Hadoop, GCC, LLVM etc.) are successful because big companies (like Intel, Apple, Yahoo) put a lot of effort into them. Managing the private changes is a big pain, especially when more than one company is a major contributor, but it’s more profitable than putting everything into the open. Getting that balance right is what boosts, or breaks, those companies.
The same rules apply to other sciences, like physics. The United States is governed by big companies (oil, weapons, pharma, media) and not by its own government (which is only a puppet for those companies). There, science is mostly applied to those fields.
Nuclear physics was only developed at such a fast pace because of the bomb. Lasers, nuclear fusion and carbon nanotubes are mostly developed with military funding, or via the government, for military purposes. Computer science (both hardware and software) is mainly done at the big companies, with a business background, so again: not real science.
Only the EU, a less business-oriented government (though not that much less), could spend a gigantic amount of money on the LHC at CERN to search for a mere boson. I still don’t understand what the commercial applicability of finding the Higgs boson is, or why the EU agreed to spend such money on it. I’m not yet ready to accept that it was all in the name of science…
But while physics has clear military and power-related objectives, computing, or rather social computing, has little to no such pull. Radar technologies, heavy-load simulations and prediction networks receive strong budgets from governments (especially the US and Russia), while topics such as how to make the world a better place with technology have little or no space in either business- or government-sponsored research.
That is why, in my humble opinion, technology has yet to flourish. Computers today create more problems than they solve. Operating systems make our lives harder than they should, office tools are not intuitive enough for everyone to use, compilers always fall short of doing a great job, and the human interface is still dominated by the mouse, invented by Engelbart himself in the 60s.
Not to mention the rampant race to keep up with Moore’s law (in both cycles and profit) at the cost of everything else, most notably the environment. Chip companies want to sell more and more, obsoleting last year’s chips and sending them to the landfill, as there is no efficient recycling technology yet for chips and circuits.
Unsolved questions of the last century
Like Fermat’s theorems, computer scientists had loads of ideas last century, at the dawn of the computing era, that are still unsolved. Problems that everybody tries to solve the wrong way, as if solving them would make that person famous, or rich. The most important problems, as I see them, are:
- Computer-human interaction: how to develop an efficient interface between humans and computers, removing all barriers to communication and easing the development of effective systems.
- Artificial Intelligence: as in real intelligence, not mimicking animal behaviour, not solving subsets of problems. Solutions based on emergent behaviour, probabilistic networks and automata.
- Parallel Computation: natural brains are parallel in nature, yet computers are serial. Even today’s parallel computers (multi-core) are only parallel up to a point, beyond which they fall back to being serial. The serial barriers must be broken; we need to scrap the theory so far and think again. We need to ask ourselves: “what happens when I’m travelling at the speed of light and I look into the mirror?”
- Environmentally friendly computing: most components on chips and boards are not recyclable, and yet they’re replaced every year. Does the hardware really need to be more advanced, or is the software getting dumber and dumber, driving the hardware complexity up? Can we use the same hardware with smarter software? Is the hardware smart enough to last a decade? Was it ever meant to last that long?
All those questions are, in a nutshell, scientific in nature. If you take the business approach, you’ll end up with a simple answer to all of them: it’s not worth the trouble. It is impossible, in the short and medium term, to profit from any of those routes. Some of them won’t generate profit even in the long term.
That’s why there is no advance in those areas. Scientists that study such topics are alone and, most of the time, trying to make money out of them (thus going the wrong way and missing the bull’s eye). One of the AI gurus at the University of Cambridge is a physicist, and his company doesn’t do anything new in AI; it exploits old-school data mining, with little effort, to generate profit.
They do generate profit, of course, but does it help to develop the field of computer science? Does it help tailor technology to better ourselves? To make the world a better place? I think not.
Google+ and the Yahoo-isation of Google |
| July 1st, 2011 under Corporate, rengolin, Web. [ Comments: 3 ]
Almost a decade ago I joined Yahoo to work on the search team. At that time, Google was giving Yahoo a hard time with their amazing search, while Yahoo was mostly based on directory search and some ad-hoc buyouts (AltaVista et al.). Yahoo built its own search, then bought Inktomi, then re-wrote the search engine, and now they’re using Bing. They were late to the search business. Too late. Not that Inktomi was bad, but it wasn’t better than Google and it certainly wasn’t a novelty.
When Google came out with Gmail it was a shock to all of us Yahoo workers. How can they offer 1GB of free mail when we only offer 10MB? How can a (then) small company provide such massive storage while the behemoth of the Internet could only afford peanuts? It’s all in the administration. Yahoo, for some reason I still don’t understand, had to keep all users’ emails on filers (very expensive storage), and had to reserve each user’s whole quota up front, even though fewer than 10% of the users actually used more than 50% of theirs. There were a lot of very expensive idle disks at Yahoo Mail…
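The arithmetic of that waste is easy to sketch. Here is a back-of-envelope calculation in Python with purely illustrative figures (the user count and the usage split are my assumptions, not Yahoo’s real numbers):

```python
# Back-of-envelope: fully reserving each user's quota vs actual usage.
# All figures below are illustrative assumptions, not Yahoo's real numbers.

USERS = 10_000_000        # hypothetical number of mailboxes
QUOTA_MB = 10             # advertised quota per user

# Reserve the full quota for everyone, up front, filer-style:
reserved_mb = USERS * QUOTA_MB

# Suppose only 10% of users fill their quota and the rest sit at half:
heavy = USERS // 10
actual_mb = heavy * QUOTA_MB + (USERS - heavy) * QUOTA_MB // 2

idle_pct = 100 * (reserved_mb - actual_mb) // reserved_mb
print(f"reserved {reserved_mb} MB, used {actual_mb} MB, idle {idle_pct}%")
```

Under those assumptions, nearly half the (expensive) reserved capacity just sits idle. Allocating storage only as users actually consume it is presumably part of what let Google promise 1GB without buying 1GB of disk per user up front.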
Another case was social networks. Yahoo was never able to build a single decent social network that wouldn’t close a year later. Several internal attempts were made (over years of development) before the first Yahoo-360 came out, only to die a few months later of starvation.
When the internet was young and Yahoo was at the top, they could do anything and people would just love it. Yahoo mail was free and came with a calendar and some other bits. It was horrid, but it was free, and we all used it at some point (especially after Hotmail was acquired by Microsoft). Somehow, the directors of Yahoo decided it was better to have it all, even if the quality was unbelievably low. They were so famous and so ubiquitous that everyone wanted to advertise on their websites. The more they had, the more people wanted.
That cycle made Yahoo create a huge number of useless pages and verticals (content websites like weather, mobile etc.), just because people would pay loads of money to advertise there. I’ve seen many pages going live without proper review or a decent market analysis, and some still had bogus content (non non non) and broken links after years. Yahoo had spread its butter so thin that it was impossible for them to compete with any other company.
Google came, Google destroyed Yahoo, Microsoft came and bought it. It may not be on paper, but Yahoo is the new Microsoft garden, where they put their feet up when they’re tired of working.
With Facebook, the story is not quite the same. Google is not where Yahoo was 8 years ago, and the Internet is not as naive as it used to be. It’s true that Facebook killed every other social network website, but it’s not true that they’ll be able to do to GMail what Google did to Yahoo Mail. Partly because GMail is really good, and partly because the Facebook guys are not that good.
But there’s one trend happening to Google that is similar to the Yahoo of the past: legacy. Google has a decade of code, more bad than good, and the new systems have to integrate with it. But that’s not the worst of it, by far. Like Yahoo, Google started from the ground up, so they always had the start-up mentality; when the company wasn’t a start-up any more, they still tried to do things the same way.
Two things have happened over the last few years that put Google in a bad situation. First, because in the golden days they had truly built remarkable systems (simple, yet efficient), they thought (and carried on thinking) not only that they were the best of the best, but that they could do anything, and that anything they did was automatically better than anyone else’s. Second, the lack of process and reality checks only made things worse: by kludging infrastructure on top of infrastructure, by solving every problem as if it were map-reduce, and by doing everything in-house, it became difficult to use off-the-shelf applications for things that weren’t really that relevant, and to move on when something was, indeed, relevant.
Facebook is still a young company, but I find it hard to believe their fate will be any different. It was also clear from the beginning (8 years ago) that Google would have the same fate. However, I don’t think that Facebook will overrun Google the way the latter did Yahoo, but that’s neither one’s fault. The market is not the same; the Internet is not the same.
In the same style as Yahoo-360, Google is trying to use its own user base to compete with Facebook, and for that it’s very likely that they’ll fail. Not as badly as 360, but they won’t kill Facebook.
Hangouts, Circles and Huddle are just different names for the same functionality found in Facebook, Twitter etc. Even the layout of Google+ is identical to Facebook’s. 360 was exactly the same thing, and that’s where I draw the line. That’s where Google is getting ludicrously similar to the Yahoo of 8 years ago, and that’s why they’ll start failing more and more often from now on.
They stopped being creative. Their original products (Wave, Buzz) are average at best, their copied products (Chrome, Android) are similar to the competition with no clear game changer, and the old stuff (search, GMail) is still the same. Not bad, but not creative any more.
From now on, it’s only downhill. In a decade or so, Apple will offer to buy them, and fail to, but it will be enough to dent the general trust people have in them, and that will be the end. Zombies of the Internet…
Computer Science vs Software Engineering |
| January 13th, 2011 under Corporate, rengolin, Science, Technology. [ Comments: none ]
The difference between science and engineering is pretty obvious. Physics is science, mechanics is engineering. Mathematics is (ahem) science, and building bridges is engineering. Right?
Well, after several years in science and far too much time in software engineering (stories I was hoping to tell my kids when they grow up), it seems that people’s beliefs about the difference, if there is any, are far more entrenched than their own logic would imply.
General beliefs that science is more abstract fall apart really quickly when you compare maths to physics. There are many areas of maths (statistics, for example) that are much more grounded in the real world than many parts of physics (like string theory and a good part of cosmology). Nevertheless, most scientists will turn their noses up at anything that resembles engineering.
From different points of view (biology, chemistry, physics and maths), I could see that there isn’t a consensus on what people really consider the less elaborate task, not even among the same groups of scientists. But when faced with a rejection by one of their colleagues, the rest usually agree with it. I came to the conclusion that the psychology of belonging to a group was more important than personal beliefs or preferences. One would expect that from young schoolgirls, not from professors and graduate students. But regardless of the group behaviour, there still is that feeling that tasks such as engineering (whatever that is) are mundane, mechanical and contribute less to the greater good than science.
On the other side of the table, the real world, there are people doing real work. It generally consists of less thinking, more acting and getting things done. You tend to use tables and calculators rather than whiteboards and dialogue, your decisions are based much more on gut feeling and experience than on over-zealously examining every single corner case, and the result of your work is generally more compact and useful to the everyday person.
From that perspective, (what we’re calling) engineers have a good deal of prejudice towards (what we’re calling) scientists. For instance, the book Real World Haskell is a great pun from people with one foot on each side of this battle (though leaning towards the more abstract end of it). In the commercial world, you don’t have time to analyse every single detail: you have a deadline, so you do what you can within it and buy insurance for the rest.
Engineers also produce better results than scientists. Their programs are better structured, more robust and more efficient. Their bridges, rockets, gadgets and medicines are far more tested, bullet-proofed and safe than any scientist could ever hope to achieve. It is a misconception that software engineers have the same experience as an academic with the same time coding, just as it is a misconception that engineers could as easily develop prototypes that would revolutionise their industry.
But even in engineering, there are tasks and tasks. While loathing scientists, the engineers that perform the more elaborate tasks (such as massive bridges, ultra-resistant synthetic materials, operating systems) consider themselves above the mundane crowd of lesser engineers (building 2-bed flats in the outskirts of Slough). So even here, the more abstract, less fundamental jobs are held in higher regard than the ones more essential and critical to society.
Is it true, then, that the more abstract and less mundane a task is, the better?
Since the first thoughts on general-purpose computing, there has been this separation between the intangible generic abstraction and the mundane mechanical real-world machine. Leibniz developed the binary numeral system, compared the human brain to a machine and even had some ideas on how to build one, someday, but he ended up creating some general-purpose multipliers (following Pascal’s design for the adder).
Leibniz would have thrived in the 21st century. Lots of people in the 20th with the same mindset (such as Alan Turing) did so much more, mainly because of the availability of modern building techniques (perfected over centuries by engineers). Babbage is another example: he worked on his Difference Engine for years and, when he failed (more through arrogance than anything else), his Analytical Engine (far more elegant and abstract) consumed his entire soul for another decade. When he realised he couldn’t build it in that century, he perfected his first design (reducing its size threefold) and made a great specialist machine… for engineers.
Mathematicians and physicists had to do horrible things (such as astrology and alchemy) to keep their pockets full and, in their spare time, do a bit of real science. But in this century that is less important. Nowadays, even if you’re not a climate scientist, you can get a good budget for very little real applicability (check NASA’s funded projects, for example). The number of people working on string theory or trying to prove the Riemann hypothesis is a clear demonstration of that.
But computing is still not there yet. We’re still doing astrology and alchemy for a living and hoping to learn the more profound implications of computing in our spare time. Well, some of us at least. And that brings me to my point…
There is no computer science… yet
The beginning of science was marked by philosophy and dialogue. 2000 years later, mankind was still doing alchemy and trying to prove the Sun was the centre of the solar system (and failing). Only 200 years after that did people really start doing real science, cleansing themselves of private funding and focusing on the real thing. But computer science is far from that…
Most computer science courses I’ve seen teach a few algorithms, an object-oriented language (such as Java) and a few courses on current technologies (such as databases, web development and concurrency). Very few of them really teach about Turing machines, group theory, complex systems, other forms of formal logic and alternatives to the current models. Moreover, the number of people doing real science in computing (judging by what appears on arXiv or news aggregation sites such as Ars Technica or Slashdot) is probably smaller than the number of people working on string theory or wanting a one-way trip to Mars.
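And there is nothing exotic about the thing being skipped: a Turing machine is just a finite table of (state, symbol) → (write, move, next state) rules walking over an infinite tape. A minimal sketch in Python (the unary incrementer below is my own toy example, not taken from any syllabus):

```python
# A minimal Turing machine simulator. The transition table maps
# (state, symbol) -> (symbol_to_write, head_move, next_state).
from collections import defaultdict

def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    cells = defaultdict(lambda: blank, enumerate(tape))  # sparse infinite tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = rules[(state, cells[head])]
        cells[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells[i] for i in range(lo, hi + 1)).strip(blank)

# Toy machine: append one '1' to a unary number (i.e. compute n + 1).
inc = {
    ("start", "1"): ("1", "R", "start"),  # skip over the existing 1s
    ("start", "_"): ("1", "R", "halt"),   # write a 1 at the end, then halt
}
print(run_tm(inc, "111"))  # -> 1111
```

Twenty lines of code, and from there the interesting questions (universality, halting, alternatives to the model) are within reach; that is the part the courses rarely get to.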
So, what do PhDs do in computer science? Well, novel techniques for some old-school algorithms are always a good choice, but the recent favourites have been breaking the security of the banking system or re-writing the same application we all already have, but for the cloud. Even the more interesting dissertations, like memory models in concurrent systems or energy-efficient gate designs, are at most commercial applications.
After all, PhDs can get a lot more money in the industry than by remaining at the universities, and doing your PhD on some commercial application can guarantee you a more senior starting position in such companies than something completely abstract. So, to be honestly blunt: we are all doing alchemy.
Still, that’s not to say that there aren’t interesting jobs in software engineering. I’m lucky to be able to work with compilers (especially because it also involves the amazing LLVM), and there are other jobs in the industry that are as interesting as mine. But all of them are just the higher engineering, the less mundane rocket science (which has nothing of science in it). All in all, software engineering is a very boring job.
You cannot code freely, ignore the temporary bugs, or ask the user to be nice and stick to a controlled input pattern. You need a massive test infrastructure, quality control, standards (which are always tedious) and well-documented interfaces. All that gets in the way of real innovation, making any attempt at innovation in a real company a mere exercise in futility and a mild source of fun.
This is not exclusive to the software industry, of course. In the pharmaceutical industry there is very little innovation. They do develop new drugs, but using the same old methods. They do need to get new, more powerful medicines out of the door quickly, but the massive amount of tests and regulation they have to follow is overwhelming (which is why they avoid doing it right as much as possible, so don’t trust them!). Nevertheless, there are very interesting positions in that industry as well.
Why is that? People are afraid of going outside their area of expertise; they feel exposed and ridiculed, and quickly retreat to their comfort zone. The best thing that can happen to a scientist, in my opinion, is to be proven wrong. For me, there is nothing worse than being wrong and not knowing it. Not many people are like that, and the fear of failure is what keeps the industry (all of them) in the real world, with real concerns (which is good, actually).
So, as long as the industry drives innovation in computing, there will be no computer science. As long as the most gifted software engineers are mere employees in the big corporations, they won’t try, to avoid a failure that could cost them their jobs. I’ve been to a few companies, and heard about many others, that have a real innovation centre, computer laboratory or research department, and not a single one of them is actually bold enough to try to change computing at its core.
That is something IBM, Lucent and Bell Labs did in the past, but probably don’t do any more these days. In a good twist of irony, the company that gets closest to software science today is Microsoft, at its campus in Cambridge. What happened to those great software teams of the 70s? Could those companies really afford real science, or were they just betting their petty cash in case someone got lucky?
I can’t answer those questions, nor whether it’ll ever be possible to have real science in the software industry. But I do plead with all software people to think about this when they teach at university. Please, teach those kids how to think, to defy the current models, to challenge the universality of the Turing machine, to create a new mathematics and prove Gödel wrong. I know you won’t try (out of hubris and self-respect), but they will, and they will fail, and after so many failures something new can come up and make the difference.
There is nothing worse than being wrong and not knowing it…
Fool me once, shame on you… fool me twice, shame on me (DBD) |
| October 23rd, 2010 under Computers, Corporate, Digital Rights, Hardware, Media, OSS, rengolin, Software, Unix/Linux. [ Comments: 4 ]
Defective by Design has come out with a new story on Apple’s DRM. While I don’t generally re-post from other blogs (LWN already does that), this one is special, though not for the apparent reasons.
I agree that DRM is bad, not just for you but for business, innovation, science and the evolution of mankind. But that’s not the point. What Apple is doing with the App Store is not just locking other applications out of running on its hardware, but locking its hardware out of the real world.
In the late 80s and early 90s, all hardware platforms were like that, and Apple was no exception. Amiga, Commodore, MSX and dozens of others: each was a completely separate machine, with a unique chipset, architecture and software layers. But that never stopped people writing code for them, putting it on a floppy disk and installing it on any compatible computer they could find. Computer viruses spread that way too, given how easy it was to share software in those days.
Ten years later, there was only a handful of architectures: Intel for PCs, PowerPC for Macs and a few others for servers (Alpha, Sparc etc.). The consolidation of the hardware happened at the same time as the explosion of the internet, so not only did more people have the same type of computer, they also shared software more easily, increasing the quantity of software available (and of viruses) by orders of magnitude.
Linux had been riding this wave since its beginning, and that was probably the most important factor in such an underground movement gaining so much momentum. It was considered subversive, anti-capitalist to use free software, and those people (including me) were hunted down like communists and ridiculed as idiots with no common sense. Today we know how “ridiculous” it is to use Linux: most companies and governments use it, and it would be unthinkable today not to use it for what it’s good at. But it’s not for everyone, nor for everything.
Apple always had a niche, and they were really smart not to step out of it. Companies like Intel and ARM are trying to get out of their niches and attack new markets, to grab a section of the economy they don’t have control over. Intel is going small, ARM is going big, and both will get hurt. Who gets hurt more doesn’t matter; what matters is that Apple never attacked other markets directly.
Ever since the beginning, Apple’s ads have been along the lines of “be smart, be cool, use Apple”. They never said their office suite was better than Microsoft’s (as MS does against OpenOffice), or that their hardware support was better (as MS does against Linux). Once you compare your products directly with someone else’s, you’re bound for trouble. When Microsoft started comparing their OS with Linux (late 90s), the community fought back, showing all the areas in which Windows was very poor, and businesses and governments started doing the same; that was a big blow to Windows. Apple never did that directly.
By staying on the sidelines, Apple was the different one. In their own niche, there was no competitor. Windows and Linux never entered that space, not even today. When Apple entered the mobile phone market, they didn’t take market share from anyone else; they made a new market for themselves. The people who bought iPhones didn’t want to buy anything else; they had only done so before because there was no iPhone at the time.
Android mobile phones are widespread, growing faster than anything else, taking Symbian phones out of the market and destroying RIM’s hegemony, but rarely touching the iPhone market. Apple fan-boys will always buy Apple products, no matter the cost or the lower quality of the software and hardware. Being cool is more important than any of that.
Fool me once again, please
Being an Apple fan-boy is hard work. Whenever a new iPhone is out, the old ones disappear from the market and you’re outdated. Whenever a new MacBook arrives, the older ones look so out-dated that all your (fan-boy) friends will know you’re not keeping up. If creating a niche to capture people’s naivety and profit from it is fooling them, then Apple has been fooling those same people for decades, and they won’t stop now. It has made them the second biggest company in the world (losing only to an oil company); nobody can argue with that fact.
iPhones have lesser hardware than most of the new Android phones, less functionality and less compatibility with the rest of the world. The new MacBook Air has an Intel chip several years old, lacks connectivity options and, before long, won’t run Flash, Java or anything else Steve Jobs dislikes when he wakes up from a bad dream. But that doesn’t affect the fan-boys one bit. See, back in the days when Microsoft had fan-boys too, they were completely oblivious to the horrendous problems the platform had (viruses, bugs, reboots, memory hogging etc.), and they would still mock you for not being in their group.
It’s the same with Apple fan-boys, and always has been. I had an Apple ][, and I liked it a lot. But when I saw an Amiga I was baffled: I immediately recognised the clear superiority of the architecture. The sound was amazing, the graphics were impressive and the games were awesome (all that mattered to me at the time, to be honest). There was no comparison between an Amiga game and an Apple game back then, and everybody knew it. But Apple fan-boys were all the same, and there were fights in BBSs and meetings: Apple fan-boys on one side, Amiga fan-boys on the other, and the pizza would be gone long before the discussion cooled down.
Nice little town, invaded
But today, reality is a bit harder to swallow. There is no PowerPC, or Alpha, or even Sparc now. With Oracle owning Sparc’s roadmap, and given what they are doing to Java and OpenOffice, I wouldn’t be surprised if Larry Ellison woke up one day and decided to burn everything down. Now there are only two major players across the small-to-huge markets: Intel and ARM. With ARM only at the small end and below, that leaves Intel with all the rest.
MacOS is no longer an OS per se. Its underlying sub-system is based on (or ripped off from) FreeBSD (a robust open source unix-like operating system). As it happens, FreeBSD is so similar to Linux that it’s not hard to re-compile Linux applications to run on it. So why should it be hard to run Linux applications on MacOS? Well, it’s not, actually. With the same platform and a very similar sub-system, re-compiling a Linux application for the Mac is a matter of finding the right tools and libraries; everything else follows its natural course.
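For most open source code, that natural course is the familiar autotools dance. A sketch, assuming the Xcode command-line tools are installed and the source follows the usual GNU conventions (the tarball name and install prefix are illustrative, not a real package):

```shell
# Recompiling a typical open source application on MacOS.
# 'some-app' is a hypothetical package name; substitute a real one.
tar xzf some-app-1.0.tar.gz
cd some-app-1.0
./configure --prefix="$HOME/local"   # detect the Mac's tools and libraries
make                                 # build with the system toolchain
make install                         # install under the chosen prefix
```

Code that leans on Linux-only interfaces (epoll, /proc and the like) needs actual porting work, but plain POSIX applications mostly build as-is.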
Now, this is dangerous! Windows has the protection of being completely different, even on the same platform (Intel), but MacOS doesn’t and there’s no way to keep the penguin’s invasion at bay. For the first time in history, Apple has opened its niche to other players. In Apple terms, this is the same as to kill itself.
See, capitalism is all about keeping control of the market. It’s not about competition or innovation, and it’s certainly not about redistribution of capital, as the French suggested in their revolution. Although Apple never fought Microsoft or Linux directly, they had their market well under control, and that was the key to their success. With very clever advertising and average-quality hardware, they managed to build an entire universe of their own and attract a huge crowd that, once in, would never look back. But now that bubble has been invaded by the penguin commies, and there’s no way for them to protect that market as they’ve done before.
One solution to rule them all
In a very good analysis of the Linux “dream”, this article suggests that it is dead. If you look at Linux as if it were a company (and following the success of Canonical, I’m not surprised people do), he has a point. But Linux is not Canonical, it is not a dream, and it’s definitely not dead.
In the same vein, you could argue that Windows is dead. It hasn’t grown for a while, and Vista destroyed confidence and moved more people to Macs and Linux than ever before. In the same way, more than 10 years ago, a common belief among Microsoft’s fan-boys was that the Mac was dead: its niche was too small, the hardware too expensive and incompatible with everything else. Windows is in the same position today, but it’s far from dead.
But Linux is not a company, and it doesn't fit the normal capitalist market analysis. Remember that Linux hackers are commies, right? It's an organic community; it doesn't behave like a company or anything capitalism would like to model. This is why its fate has been wrongly predicted so many times (Linux is dead, this is the year of Linux, Linux will kill Windows, Mac is destroying Linux, and so on). All of this is pure bollocks. Linux's growth is organic, not exponential, not bombastic. It won't kill other platforms. It never has, and it never will. It will, as it has done so far, assimilate and enhance, like the Borg.
If we had had Linux in the French revolution, the people would have had a better chance of getting something out of it, rather than leaving all the glory (and profit) to the newly founded bourgeois class. Not because Linux is magic, but because it embraces change, expands frontiers and exposes the flaws in the current systems. That alone is enough to keep existing software in constant check, which is vital to software engineering and will never end. Linux is, in a nutshell, what's driving innovation on all other software fronts.
Saying that Linux is dead is the same as saying that generic medication is dead because it doesn't make a profit or hasn't taken over big pharma's markets. That simply isn't the point, and it only shows that people still have the same mindset that put Microsoft, Yahoo!, Google, IBM and now Apple where they are today: all afraid of the big bad wolf, which is not big, nor bad, and has nothing to do with a wolf.
This wolf is, mind you, not Linux. Linux and the rest of the open source community are just the only players (and Google, I give them that) who are not afraid of that wolf, though business analysts say they should be if they want to play nice with the rest of the market. The big bad wolf is free content.
Free, open content
Free as in freedom is dangerous. Everybody knows what happens when you post on Facebook about your boss being an ass: you get fired. The same would happen if you said it out loud at a company lunch, wouldn't it? Running random software on your machine is dangerous: everybody knows what can happen when viruses invade your computer, or rogue software starts stealing your bank passwords and personal data.
But all systems now are very similar, and the companies of today are still banging their heads against the same wall as 20 years ago: lock down the platform. 20 years ago that was quite simple; in fact, it was just a reflection of how computers were built. Today, it has to be done actively.
It’s very easy to rip a DVD and send it to a friend. Today’s broadband speeds allow you to do that quite fast, indeed. But your friend hasn’t paid for it, and the media companies felt threatened. So they created DRM. Intel has just acquired McAfee to put security measures inside the chip itself. This is the same as DRM, but at a much lower level. Instead of dealing with the problem, those companies are actually delaying the solution and only making the problem worse.
DRM is easily cracked. It has been shown over and over that no DRM scheme (software or hardware) has so far resisted the will of the people. There are far more ingenious people outside the companies that make DRM than inside them; therefore, it’s impossible to come up with a solution that will fool all outsiders, unless they hire them all (which will never happen) or kill them all (which could happen, if things keep the same pace).
Unless those companies start treating the problem as the new reality, and create solutions that work in this new reality, they won’t make any money out of it. DRM is not just bad; it’s very costly and it hampers progress and innovation. It kills what capitalism loves most: profit. Add up all the money spent on DRM schemes that were cracked a day later, all the money the RIAA spent on lawsuits, all the trouble of creating software solutions to lock users in, and the drop-out rate when a better solution appears (see Google vs. Yahoo), and you get the picture.
Locked down society
Apple’s first popular advertisement was the one mocking Orwell’s 1984, and how Apple would break the rules by bringing something completely different that would free people from the locked-down world they lived in. Funny, though, how things turned out…
Steve Jobs says that Android is a fragmented market, and that Apple is better because it has only one solution for every problem. They said the same thing about Windows and Linux: that fragmentation is driving their demise, that everybody should listen to Steve Jobs and use his own creations (one for each problem), and that the rest was just too noisy, too complicated for really cool people to use.
I don’t know about you, but to me that sounds exactly like Big Brother’s speech.
With DRM and control of the App Store, Apple has total freedom to put in, or take out, whatever they want, whenever they want. It has happened and will continue to happen. They never put Flash on iPhones, not for any technical reason, but just because Steve Jobs doesn’t like it. They’re now taking Java out of the Mac “experience”, again, just for kicks. Microsoft at least put .NET and Silverlight in place; Apple simply takes things out, with no replacement.
Oh, how the Apple fan-boys like it. They applaud, they defend it with their lives, without even knowing why, or whether there is any reason for it at all. They just watch Steve Jobs’s speeches and repeat them, word for word. There is no reason, and those people sound dumber every day, but who am I to say so? I’m the one outside the group, I’m the one who has no voice.
When that happened with Microsoft in the 90’s, it was hard to take. The numbers were more like 95% of them and 1% of us, so there was absolutely no argument that would make them understand the utter garbage they were talking about. Today, Apple’s market is still not that big: the fan-boys may indeed be making Apple the second biggest company in the world, but they still look like idiots to the remaining 50-plus percent of the world.
Yahoo has shown us that locking users down, stuffing them with ads and completely ignoring the upgrade of your architecture for years is not a good path. But Apple (as did Yahoo) thinks they are invulnerable. When Google exploded with their awesome search (I was on Yahoo’s search team at the time), we had a shock. It was not just better than Yahoo’s search, it really worked! Yahoo was afraid of being the copy-cat, so they started walking down other paths, and in the end it never really worked.
Yahoo, which started as a search company, now runs Microsoft’s lame search engine. This is, for me, the utmost proof that they failed miserably. The second biggest thing Yahoo had was email, and Google does it better. Portals? Who needs portals when you have the whole web at your fingertips with Google search? In the end, Google killed every single Yahoo business, one by one. Apple is following the same path, locking themselves out of the world, just waiting for someone to come along with a better and simpler solution that actually works. And they won’t listen, not even when it’s too late.
Before Yahoo! there was IBM. After Apple there will be more. Those that don’t accept reality as it is, that stick with their old ideas just because they’ve worked so far, are bound to fail. Of course, Steve Jobs made all the money he could, and he’s not worried. Nor are David Filo or Jerry Yang, Bill Gates or Larry Ellison. And this is the crucial part.
Companies fade because great leaders fade. Communities fade when they’re no longer relevant. The Linux community is still very much relevant and won’t fade any time soon. And, by its metamorphic nature, it’s very likely that the free, open source community will never die.
Companies had better get used to it, and find ways to profit from it. Free, open content is here to stay, and there’s nothing anyone can do to stop it. Acting like dictators isn’t helping the US patent and copyright system, isn’t helping Microsoft or Intel, and definitely won’t help Apple. If they want to stay relevant, they’d better change soon.
| February 9th, 2010 under Corporate, Devel, Games, Politics, rengolin. [ Comments: none ]
A while ago I wrote an article about Agile and Scrum, and I wanted to write another one following my recent experience with Agile. Somehow, though, I couldn’t add anything of great value to my original post that would be worth a new one.
And now I know I don’t have to. In this fantastic post, Gwaredd takes a deep look into all the failures and successes of Agile, along with the common misconceptions of believers and decision-makers. In the end, the so-called “Post-Agile” is just plain common sense.