Uno score keeper
March 31st, 2013 under Devel, OSS, rengolin, Software.
With spring nowhere in sight, we had to improvise during the Easter break and play Uno every night. It’s a lot of fun, but it can take quite a while to find a piece of clean paper and a pen that works around the house, so I wondered if there was an app for that. It turns out, there wasn’t!
There were several apps to keep card game scores, but every one was specific to its game, they had ads and wanted access to the Internet, so I decided it was worth writing one myself. Plus, that would finally teach me to write Android apps, something I had been putting off for years.
Card Game Scores
The app is not just a Uno score keeper, it’s actually pretty generic. You just keep adding points until someone passes the threshold, when the poor soul will be declared a winner or a loser, depending on how you set up the game. Since we’re playing every night, even the 30 seconds I spent re-entering our names was adding up, so I made it save the last game in the Android tuple store, so you can retrieve it via the “Last Game” button.
It’s also surprisingly easy to use (I had no idea), but if you go back and forth inside the app, it clears the game and starts a new one with the same players, so you can play as many rounds as you want. I might add a button to restart (or leave the app) when there’s a winner, though.
I’m also thinking about printing the names in order at the end (from winner to loser), and some other small changes, but the way it is, it’s good enough to advertise and see what people think.
If you end up using it, please let me know!
Download and Source Code
The app is open source (GPL), so rest assured it has no tricks or money involved. Feel free to download it from here, and get the source code at GitHub.
Distributed Compilation on a Pandaboard Cluster
February 13th, 2013 under Devel, Distributed, OSS, rengolin.
This week I was experimenting with distcc and Ninja on a Pandaboard cluster, and it behaves exactly as I expected, which is a good thing, but it might not be what I was looking for, which is not.
Long story short, our LLVM buildbots were running very slowly, taking from 3 to 4.5 hours to compile and test LLVM. If you consider that at peak time (PST hours) there are up to 10 commits in a single hour, the buildbot ends up testing 20-odd patches at the same time. If it breaks in unexpected ways, or if there is more than one patch in a given area, it can be hard to spot the culprit.
We ended up just avoiding the make clean step, which put us at around 15 minutes for build+tests, with the odd chance of hitting 1 or 2 hours tops, which is a great deal. But one of the alternatives I was investigating was a distributed build, all the more because cluster nodes with dozens of ARM cores inside are becoming available, and we could use such a cluster to speed up our native testing, and even do benchmarking in a distributed way. If we do it often enough, the sample might be big enough to account for the differences.
So, I got three Pandaboards ES (dual Cortex-A9, 1GB RAM each) and put the stock Ubuntu 12.04 on them and installed the bare minimum (vim, build-essential, python-dev, etc), upgraded to the latest packages and they were all set. Then, I needed to find the right tools to get a distributed build going.
It took a bit of searching, but I ended up with the following tool-set:
- distcc: The distributed build dispatcher, which knows about the other machines in the cluster and how to send them jobs and get the results back
- CMake: A Makefile generator which LLVM can use, and it’s much better than autoconf, but can also generate Ninja files!
- Ninja: The new intelligent builder which not only is faster to resolve dependencies, but also has a very easy way to change the rules to use distcc, and also has a magical new feature called pools, which allow me to scale job types independently (compilers, linkers, etc).
All three tools had to be compiled from source. Distcc’s binary distribution for ARM is too old, CMake’s version on that Ubuntu couldn’t generate Ninja files and Ninja doesn’t have binary distributions, full stop. However, it was very simple to get them interoperating nicely (follow the instructions).
You don’t have to use CMake, there are other tools that generate Ninja files, but since LLVM uses CMake, I didn’t have to do anything. What you don’t want is to write the Ninja files yourself, it’s just not worth it. Unlike Make, Ninja doesn’t try to search for patterns and possibilities (this is why it’s fast), so you have to be very specific in the Ninja file about what you want to accomplish. That is very easy for a program to do (like CMake), but very hard and error prone for a human (like me).
To use distcc is simple:
- Replace the compiler command with distcc compiler in your Ninja rules;
- Set the environment variable DISTCC_HOSTS to the list of IPs that will be the slaves (including localhost);
- Start the distcc daemon on all slaves (not on the master): distccd --daemon --allow <MasterIP>;
- Run ninja with the number of CPUs of all machines + 1 for each machine. Ex: ninja -j6 for 2 Pandaboards.
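Just to make that concrete, here is a rough sketch of what the changes look like (the rule, flags and IP addresses below are made up for illustration; CMake generates the real rules for you):

    # On the master, in the shell:  export DISTCC_HOSTS="localhost 192.168.0.2 192.168.0.3"
    # On each slave:                distccd --daemon --allow 192.168.0.1

    # In build.ninja, the compiler command gets the distcc prefix:
    rule cxx
      command = distcc g++ $cflags -c $in -o $out
      description = CXX $out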
A local build, on a single Pandaboard of just LLVM (no Clang, no check-all) takes about 63 minutes. With distcc and 2 Pandas it took 62 minutes!
That’s better, but not as much as one would hope for, and the reason is a bit obvious, but no less damaging: the linker! It took 20 minutes to compile all of the code, and 40 minutes to link it into executables. That happened because, while we had 3 compilation jobs on each machine, we had 6 linking jobs on a single Panda!
See, distcc can spread the compilation jobs as long as it copies the objects back to the master, but because a linker needs all objects in memory to do the linking, it can’t do that over the network. What distcc could do, with Ninja’s help, is to know which objects will be linked together and keep copies of them on different machines, so that you can link on separate machines, but that is not a trivial task, and it relies on a level of interoperation between the tools that they’re not designed for.
And that’s where Ninja proved to be worth its name: Ninja pools! In Ninja, a pool is a named resource with a specific depth, which limits how many jobs assigned to it can run at the same time. You can say that compilers scale freely, but linkers can’t run more than a handful at once. You simply create a pool called linker_pool (or anything you want), give it a depth of, say, 2, and annotate all linking jobs with that pool. See the manual for more details.
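For illustration, the relevant bits of a Ninja file would look roughly like this (the pool name and the depth are arbitrary):

    pool linker_pool
      depth = 2

    rule link
      command = g++ $in -o $out $ldflags
      pool = linker_pool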
With the pools enabled, a distcc build on 2 Pandaboards took exactly 40 minutes. That’s about a third off with double the resources, not bad. But how does that scale if we add more Pandas?
How does it scale?
To get a third point (and be able to apply a curve fit), I added another Panda and ran the build again, with 9 jobs and the linker pool at 2, and it finished in 30 minutes. That’s less than half the original time with three times the resources. As expected, it’s flattening out, but how many more boards can we add before it stops paying off?
I don’t have an infinite number of Pandas (nor do I want to spend all my time on this), so I just cheated and got a curve fitting program (xcrvfit, in case you’re wondering), cooked up an exponential that was close enough to the points and used the software’s ability to do a best fit. It came out with 86.806*exp(-0.58505*x) + 14.229, which, according to Lybniz, flattens out after 4 boards (about 20 minutes).
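As a sanity check on that fit, a few lines of Python (just evaluating the fitted curve, nothing more) reproduce the flattening, with x being the number of boards:

    import math

    def build_time(boards):
        # Fitted curve from xcrvfit: build time in minutes for a given number of boards.
        return 86.806 * math.exp(-0.58505 * boards) + 14.229

    for boards in range(1, 7):
        print(boards, round(build_time(boards), 1))
    # 1 -> 62.6, 2 -> 41.2, 3 -> 29.2, 4 -> 22.6, 5 -> 18.9, 6 -> 16.8

The asymptote is around 14 minutes, so beyond four or five boards the extra hardware buys very little.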
Distcc has a special mode called pump mode, in which it ships, along with the C file, all the headers necessary to compile it entirely on the remote node. Normally, distcc will pre-process the file on the master node and send the pre-processed result to the slaves, which turn it into object code. According to the manual, this could improve performance 10-fold! Well, my results were a little less impressive. Actually, my 3-Panda cluster finished in just about 34 minutes, 4 minutes more than without pump mode, which is puzzling.
I could clearly see that the files were being compiled on the slaves (distccmon-text would tell me that, whereas before there were a lot of “preprocessing” jobs on the master), but Ninja doesn’t print times on each output line for me to guess what could have slowed it down. I don’t think there was any effect on the linker process, which was still enabled in this mode.
Simply put, both distcc and Ninja pools have proven to be worthy tools. On slow hardware, such as the Pandas, distributed builds can be an option, as long as you have a good balance between compilation and linking. Ninja could be improved to help distcc link on remote nodes as well, but that’s a wish I would not press on the team.
However, scaling only to 4 boards takes away a lot of the value for me, since I was expecting to use 16/32 cores. The main problem is, again, the linker jobs running solely on the master node, and LLVM having lots and lots of libraries and binaries. Ninja’s pools can also work well when compiling LLVM+Clang in debug mode, since the objects are many times bigger, and even on an above-average machine you can start swapping, or even freeze your machine, if you’re running other GUI programs (browsers, editors, etc).
In a nutshell, the technology is great and works as advertised, but with LLVM it might not yet be the right fit. It’s still more profitable to get faster hardware, like the Chromebooks, which are 3x faster than the Pandas and cost only marginally more.
It would also be good to know why pump mode regressed in performance, but I have no more time to spend on this, so I leave it as an exercise for the reader.
LLVM Vectorizer
February 12th, 2013 under Algorithms, Devel, rengolin.
Now that I’m back working full-time with LLVM, it’s time to get some numbers about performance on ARM.
I’ve been digging into the new LLVM loop vectorizer and I have to say, I’m impressed. The code is well structured, extensible and, above all, sensible. There is lots of room for improvement, and the code is simple enough that you can do it without destroying the rest or having to re-design everything.
The main idea is that the loop vectorizer is a Loop Pass, which means that if you register this pass (automatically on -O3, or with the -loop-vectorize option), the Pass Manager will run its runOnLoop(Loop*) function on every loop it finds.
The three main components are:
- The Loop Vectorization Legality: Basically identifies whether it’s legal (not just possible) to vectorize. This includes checking that we’re dealing with an inner loop, that it’s big enough to be worth vectorizing, and that there aren’t any conditions that forbid vectorization, such as overlaps between reads and writes or instructions that don’t have a vector counterpart on the specific architecture. If nothing is found to be wrong, we proceed to the second phase:
- The Loop Vectorization Cost Model: This step evaluates both versions of the code, scalar and vector. Since each architecture has its own vector model, it’s not possible to create a common model for all platforms, and in most cases it’s the special behaviour that makes vectorization profitable (like 256-bit operations in AVX), so we need a bunch of cost tables that we consult given an instruction and the types involved. Also, this model doesn’t know how the compiler will lower the scalar or vectorized instructions, so it’s mostly guesswork. If the vector cost (normalised to the vector size) is less than the scalar cost, we do:
- The Loop Vectorization: The vectorization proper, i.e. walking through the scalar basic blocks, changing the induction range and increment, creating the prologue and epilogue, promoting all types to vector types and changing all instructions to vector instructions, taking care to leave the interaction with the scalar registers intact. This last part is a dangerous one, since we can end up creating a lot of copies from scalar to vector registers, which is quite expensive and is not accounted for in the cost model (remember, the cost model is guesswork).
All that happens on a new loop place-holder, and if all is well at the end, we replace the original basic blocks by the new vectorized ones.
So, the question is, how good is this? Well, depending on the problems we’re dealing with, vectorizers can considerably speed up execution. Especially iterative algorithms, with lots of loops, like matrix manipulation, linear algebra, cryptography, compression, etc. In more practical terms, anything to do with encoding and decoding media (watching or recording videos, pictures, audio), Internet telephones (compression and encryption of audio and video), and all kinds of scientific computing.
One important benchmark for that kind of workload is Linpack. Not only does Linpack have many examples of loops waiting to be vectorized, but it’s also the benchmark that defines the Top500 list, which ranks the fastest computers in the world.
So, both GCC and Clang now have their vectorizers turned on by default with -O3, so comparing them is as simple as compiling the programs and watching them fly. But since I was also interested in the performance gain of the LLVM vectorizer on its own, I also ran Clang at -O3 with the vectorizer disabled.
On x86_64 Intel (Core i7-3632QM), I got these results:
This is some statement! The GCC vectorizer has existed for a lot longer than LLVM’s and has been developed by many vectorization gurus, yet LLVM seems to easily beat GCC in this field. But a word of warning: Linpack is by no means representative of all use cases and user-visible behaviour, and it’s very likely that GCC will beat LLVM in most other cases. Still, a reason to celebrate, I think.
This boost means that, in many cases, not only are the transformations legal and correct (or Linpack would have produced wrong results), but they also manage to generate faster code at no discernible cost. Of course, the theoretical limit is around a 4x boost (if you manage to replace every single scalar instruction with a vector one and the CPU behaves the same with regard to branch prediction, cache, etc.), so one could have expected a somewhat higher number, something on the order of 2x.
It depends on the computational density we’re talking about. Linpack tests specifically the inner loops of matrix manipulation, so I’d expect a much higher ratio of improvement there, something around 3x or even closer to 4x. VoIP calls, watching films and listening to MP3s are also good examples of densely packed computation, but since we usually run those applications on a multi-tasking operating system, you’ll rarely see improvements higher than 2x. And general applications rarely spend that much time in inner loops (they’re mostly waiting for user input and then doing a bunch of unrelated operations, hardly vectorizable).
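A quick back-of-the-envelope way to see why (this is just Amdahl’s law, not anything specific to the vectorizer): if only a fraction of the run time sits in vectorizable loops, even a perfect 4x on that fraction gives a modest overall gain.

    def overall_speedup(vector_fraction, vector_speedup=4.0):
        # Amdahl's law: only the vectorizable fraction of the time gets the speedup.
        return 1.0 / ((1.0 - vector_fraction) + vector_fraction / vector_speedup)

    for fraction in (0.9, 0.5, 0.25):
        print(fraction, round(overall_speedup(fraction), 2))
    # 0.9 -> 3.08x, 0.5 -> 1.6x, 0.25 -> 1.23x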
Another important aspect of vectorization is that it saves a lot of battery juice. It doesn’t really matter whether MP3 decoding finishes in 10 seconds or 5, as long as the music doesn’t stop to buffer. But taking 5 seconds instead of 10 means that for the other 5 seconds the CPU can reduce its voltage and save battery. This is especially important on mobile devices.
What about ARM code?
Now that we know the vectorizer works well, and the cost model is reasonably accurate, how does it compare on ARM CPUs?
It seems that the grass is not so green on this side, at least not at the moment. I have reports that ARM also reached a 40% boost similar to Intel’s, but what I saw was a different picture altogether.
On a Samsung Chromebook (Cortex-A15) I got:
The performance regression can be explained by the amount of scalar code intermixed with vector code inside the inner loops as a result of shuffles (movement of data within the vector registers and between scalar and vector registers) not being lowered correctly. This most likely happens because the LLVM back-end relies a lot on pattern-matching for instruction selection (a good thing), but the vectorizers might not be producing the shuffles in the right pattern, as expected by each back-end.
This can be fixed by tweaking the cost model to penalize shuffles, but it’d be good to see if those shuffles aren’t just mismatched against the patterns that the back-end is expecting. We will investigate and report back.
I also got results for single-precision floating point, which show a greater improvement on both Intel and ARM.
On x86_64 Intel (Core i7-3632QM), I got these results:
On a Samsung Chromebook (Cortex-A15) I got:
Which goes to show that the vectorizer is, indeed, working well for ARM, but the costs of using the VFP/NEON pipeline outweigh the benefits. Remember that NEON vectors are only 128 bits wide and VFP only 64 bits wide, and NEON has no double-precision floating point operations, so they’ll only do one double-precision floating point operation per cycle, and the theoretical maximum depends on the speed of the soft-fp libraries.
So, in the future, what we need to work on is the cost model, to make sure we don’t regress in performance, and on better algorithms when lowering vector code (both by making sure we match the patterns that the back-end is expecting, and by simply finding better ways of vectorizing the same loops).
Without further benchmarks it’s hard to come to a final conclusion, but it’s looking good, that’s for sure. Since Linpack is part of the standard LLVM test-suite benchmarks, fixing this and running it regularly on ARM will at least avoid any further regressions… Now it’s time to get our hands dirty!
Hypocrite Internet Freedom
December 11th, 2012 under Digital Rights, Politics, rengolin, Web, World.
Last year, the Internet showed its power over governments, when we all opposed the SOPA and PIPA legislation in protests across the world, including on this very blog. Later on it was ACTA, and so on, and we all felt very powerful indeed. Now a new threat looms over the Internet: the ITU is trying to take it over.
To quote Ars Technica:
Some of the world’s most authoritarian regimes introduced a new proposal at the World Conference on International Telecommunications on Friday that could dramatically extend the jurisdiction of the International Telecommunication Union over the Internet.
Or New Scientist:
This week, 2000 people have gathered for the World Conference on International Telecommunications (WCIT) in Dubai in the United Arab Emirates to discuss, in part, whether they should be in charge.
And stressing that:
Who runs the internet? For the past 30 years, pretty much no one.
When in reality, the Internet of today is already in precisely the state the US is trying to avoid, except that now they are the ones in control, and the ITU is trying to hand that control to an international organization, where more countries have a say.
Today, the DNS and the main IP blocks are controlled by ICANN; however, Ars Technica helps us remember that ICANN and IANA are:
the quasi-private organizations that currently oversee the allocation of domain names and IP addresses.
But ICANN was once a US-government-operated body, still with strong ties to Washington, located solely on US soil and operating under US jurisdiction. They have also failed on many accounts to democratise their operations, leaving little or no room for international input. Furthermore, all top-level domains that are not bound to a country (like .com, .org, .net) are also within American jurisdiction, even if they’re hosted and registered in another country.
But controlling the DNS is only half the story. The control that the US has over the Internet is much more powerful. First, they hold (for historical and economic reasons) most of the backbone of the Internet (root DNS servers, core routers, etc). That means traffic between Europe and Japan will probably pass through them. In theory this shouldn’t matter, and it’s actually an optimisation of the self-structuring routing tables, but in fact the US government has openly reported that it does monitor all traffic that passes within its borders, and that it reserves the right to cut it if it thinks it presents a national security risk.
Given the amount of publicity the TSA has had since 2001 for what it considers a security threat, including Twitter comments from British citizens, I wouldn’t trust them, or their automated detection systems, to care for my security. Also, given the intrusion they have made into other governments, like the Dotcom case in January, where national security operations in New Zealand were shared inappropriately with the American government, I have never felt safe crossing American soil, physically or through the Internet.
Besides, Hollywood has shown in Scandinavia and in the UK that it holds a strong leash on European governments when it comes to (US) copyright law, forcing governments that were once liberal to abide by American rules and arrest their own citizens when content is distributed over the Internet. It’s also interesting to remember that SOPA, PIPA and ACTA, mainly driven by Hollywood, were all created behind closed doors.
So, would ITU control be better?
No. Nothing could be further from the truth. Although, in theory, it’s more democratic (more countries with decision power), this decision power has been sought for one main purpose: to enforce stricter laws. I generally agree that the ITU would not be a good controlling body, but believing that nobody controls the Internet is naive at best, and normally a pretentious lie.
Legal control by many countries over something as free as the Internet would pose the same dangers as having it free of legal control, since the latter leaves us with indirect control by the strongest player, which so far has been the US. The other countries are only so strongly minded about the ITU because the US won’t let them have their voices heard, and the ITU is a way to create a UN for the Internet.
In that sense, the ITU would be a lot like the UN. Worthless. A puppet in the hands of the strong players. Each country would have more control over its borders, and that would change almost nothing for the US, but the general rules would stop being valid, and the US (and other countries) would have to do a lot more work than they do today. One example is the silly rule in the UK where sites, including international ones, have to warn users that they are using cookies.
Don’t be fooled, the US government is not really worried about your safety and security, nor your freedom. They’re trying to avoid a lot of work, and a big loss of market in the Middle East and South Asia. With countries (which they like to call authoritarian regimes) imposing stricter rules on traffic, including fees, taxes and other things they already have on material goods, commerce with those countries will become a lot more expensive.
Ever since the Second World War, the US economy has been based mainly on military activity. First, helping Europe got them out of the Great Depression; then they fomented rebellions throughout Latin America to keep the coins clinking, and currently it’s the Middle East. With climate change endangering their last non-war resource (oil), they were betting on the Internet to spread the American Way of Life to the less fortunate, with the off chance of selling a few iPads in the process, but now that profit margin is getting dangerously thin.
Not to mention the military threat, since a lot of intelligence is now gathered through the Internet, and recent attacks on Iranian nuclear power plants via the Stuxnet worm would all become a lot harder. The fact that China is now bigger and more powerful than they are, in every possible aspect (I dare say even military, but we can’t know for sure), is also not helping.
What is then, the solution? Is it possible to really have nobody running the Internet? And, if at all possible, is it desirable?
Mad Max Internet
I don’t think so.
It’s true that IPv6 should completely remove the need for IP allocation, but DNS is a serious problem. Leaving DNS registration to an organic, self-organised process would lead to widespread distribution of malicious content, and building security measures around it would be even harder than it already is. The same is true for SSL certificates. You’d expect that, in a land with no rules, trusted bodies would charge a fortune and extort clients for a safe SSL certificate (if they actually produced a good one, that is), but this is exactly what happens today, under ICANN’s rule.
Routing would also be affected, since current algorithms rely on total trust between parties. There was a time when China had all US traffic (including governmental and military) going through its routers, achieved solely via standard BGP rules. In a world where every country has its own core routers, digitally attacking another country would be as easy as changing one line on a router.
We all love to think that the Internet is a free world already, but more often than ever, people are being arrested for their electronic behaviour. Unfortunately, because there isn’t a set of rules, or a governing body, the rules that get people arrested are the rules of the strongest player, which in our current case is Hollywood. So, how is it possible to reconcile security, anonymity and stability without resorting to governing bodies?
The simple answer is: it’s not. The Internet is a land with no physical barriers, where contacting someone thousands of miles away is the same as talking to the person beside you, but we don’t live in a world without borders. It’s not possible to reconcile the laws of all countries, with all their different cultures, into one single book. As long as the world keeps its multiculturalism, we have to cope with different rules for different countries, and I’m not in favour of losing our identity just to make the Internet a place comfortable for the US government.
It is my opinion that we do, indeed, need a regulating body. ICANN, ITU, it doesn’t matter, as long as the decisions are good for most.
I don’t expect that any such governing body would come up with a set of rules that are good for everybody, nor that they’ll find the best rules in the first N iterations (for large N), but if the process is fair, we should reach consensus (as N tends to infinity). The problem with both ICANN and the ITU is that neither is fair, and there are other interests at play that are weighted much more heavily than the interests of the people.
Since no regulating body, governmental or not, will ever fully account for the interests of the people (today or ever), people tend to hope that no rule is the best rule, but I hope I have shown that this is not true. I believe that, instead, a governing multi-body is the real solution. It’s hypocritical to believe that Russia will let the US create regulations within its borders, so we can’t assume that will ever happen from the start, if we want it to work in the long run. So this multi-body, composed of independent organizations in Europe, Asia, Oceania, Africa and the Americas, would have strong powers in their own regions, but would have to agree on very general terms.
The general terms would be something like:
- There should be no cost associated with the traffic to/from/across any country to any other country
- There should be no filtering of any content across countries, but filtering should be possible to/from a specific country or region based on religious or legal grounds
- It should be possible for countries to deny certain types of traffic (as opposed to filtering above), so that routing around would be preferred
- Misuse of Internet protocols (such as BGP or DNS spoofing) on root routers/DNS servers should be considered an international crime, with the country responsible for the server in charge of the punishment; otherwise, sanctions against that country could be enforced by the UN
- Legal rights and responsibilities on the Internet should be similar (but not identical) to those in the physical world, but each country has the right and duty to enforce its own rules
Rule 1 is fundamental and would cut short most of the ITU’s recent proposals. It’s as much utter nonsense to cross-charge traffic on the Internet as it is to do it between telecoms around the world, and that is probably the biggest problem with the new proposal.
Rules 2 and 3 would leave control over regional Internet matters with little impact on the rest. They’d also foment the creation of new routes around problematic countries, which is always beneficial to the reliability of the Internet as a whole. It’s hypocritical to assume that the US government has the right to impose Internet rules on countries like Iran or China; it’s up to the people of China and Iran to fight their leaders on their own terms.
It’s extremely hypocritical, and very common, in the US to believe that their system (the American Way of Life) is best for every citizen of the world, or that the people of other countries have no way of choosing their own history. It’s also extremely hypocritical to blame authoritarian governments for Internet regulations while at the same time providing weapons and support to local authoritarian groups. Let’s not forget the role of the US in Afghanistan and Iraq prior to the Gulf War, as opposition to Russia and Iran (respectively), and their pivotal role in all the major authoritarian revolutions in Latin America.
Most countries, including Russia and those in the Middle East, would probably be fine with rules 2 and 3, with little impact on the rest of the world. Which leaves us with rule 4, to account for the trustworthiness of the whole system. Today, a gang of a few pals controls the main routers, and giving less trustworthy pals more control over DNS and BGP routes would indeed be a problem.
However, in practice this rule is already in force today, seeing as China routed US traffic for only 18 minutes. It was more a show of power than a real attack, but had China kept doing it for too long, the US would have thought otherwise, and with very strong reasons. The loose control is good, but the loose responsibility is not. Countries should have the freedom to structure their Internet backbones, but they should also do it responsibly, or face punishment otherwise.
Finally, there’s rule 5. What happens when a citizen of one country behaves on another country’s website in a way that is legal in his culture, but not in the other? Strong religious and ethical issues will arise from that, but nothing that isn’t already on the Internet. Most of the time, this problem is identical to what already happens in the real world, with people from one country committing crimes in another country. The hard bit is to know what the differences between the physical and logical worlds are, and how to reconcile the differences in interpretation among the multiple groups that will take part in such a governing multi-body.
The ITU’s proposal is not good, but neither is ICANN’s. The third alternative, a complete lack of control, is only going to make things worse, so we need a solution that is both viable and general enough that most countries agree to it. It also needs to relinquish control of internal matters to each government in a way that does not affect the rest of the Internet.
I argue that one single body, be it the ITU or ICANN, is not a good model, since it’s not general enough, nor does it account for specific regions’ concerns (ICANN won’t listen to the Middle East and the ITU won’t regard the US). So the only solution I can see as possible is one that unites them all into a governing multi-body, with very little in global agreement, but with general rules powerful enough to guarantee that the Internet will be free forever.
The American constitution is a beautiful piece of writing, but in reality, over the years, their governments have destroyed most of its beauty. So long-term self-checking must also be a core part of this multi-body, with regular reviews and democratic decisions (sorry, authoritarian regimes, it’s the only way).
In a nutshell, while it is possible to write the Internet Constitution and make it work in the long term, humanity is very likely not ready to do that yet, and we’ll probably see the destruction of the Internet in the next 10 years.
Open Source and Innovation
September 13th, 2012 under Corporate, OSS, rengolin, Technology.
A few weeks ago, a friend (Rob) asked me a pertinent question: “How can someone innovate and protect her innovation with open source?”. Initially, I shrugged it off with a simple “well, you know…”, but this turned out to be a really hard question to answer.
The main idea is that, in the end, all software (and possibly hardware) will end up as open source. Not because it’s beautiful and fluffy, but because that seems to be the natural course of things nowadays. We seem to be moving from profiting on products to giving them away and profiting on services. If that’s true, are we going to stop innovating at all and just focus on services? What about the real scientists who move the world forward, are they also going to be flipping burgers?
Open Source as a business model
The reason to use open source is clear: the TCO fallacy is gone and we’re all used to it (especially the lawyers!). That’s all good, but the real question is what (or even when) to open source your own stuff. Some companies do it because they want to sell the added value, or plugins and services. Others do it because it’s not their core business, or because they want to form a community which would otherwise use a competitor’s open source solution. Whatever the reason, we seem to be open sourcing software and hardware at an increasing speed; sometimes it comes out as open source on its very first day in the wild.
Open source is a very good cost sharing model. Companies can develop a third-party product, not related to their core areas (where they actually make money), and still claim no responsibility or ownership (which would be costly). For example, the GNU/Linux and FreeBSD operating systems tremendously reduce the cost of any application developer, from embedded systems to big distributed platforms. Most platforms today (Apple’s, Androids, set-top boxes, sat-navs, HPC clusters, web-servers, routers, etc) have them at their core. If each of these products had to develop their own operating system (or even parts of it), it wouldn’t be commercially viable.
Another example is the MeshPotato box (in Puerto Rico), which uses open software and hardware initially developed by Village Telco (in South Africa). They can cover wide areas, providing internet and VoIP telephony over the rugged terrain of Puerto Rico for under $30 a month. If they had to develop their own hardware and software (including the OS), it’d cost no less than a few hundred pounds. Examples like that are abundant these days and it’s hard to ignore the benefits of open source. Even Microsoft, once the biggest closed-source zealot, which propagated the misinformation that open source was hurting the American Way of Life, is now one of the biggest open source contributors on the planet.
So, what is the question then?
If open source saves money everywhere, and promotes incremental innovation that wouldn’t otherwise be possible, how can the original question not have been answered? The key is in the scope.
Rob was referring, in fact, to the really chunky innovations. Those that take years to develop, many people working hard with one goal in mind, spending their last penny to possibly profit in the end. The true sense of entrepreneurship. Things that might profit from other open source technologies, but are so hard to make that they still take years to produce. Things like new chips, new medicines, real artificial intelligence software and hardware, etc. The open source savings on those projects are marginal. Furthermore, if you spend 10 years developing software (or hardware) and open source it straight away, how are you ever going to get your investment back? Unless you charge $500 a month in services to thousands of customers on day one, you won’t see the money back for decades.
The big misunderstanding, I think, is that this model no longer applies, so the initial question was invalid to begin with. Let me explain.
Science and Technology
300 years ago, if you were curious about something, you could make a name for yourself very easily. You could barely call what they did science. They even called themselves natural philosophers, because what they did was mostly discovering nature and inquiring about its behaviour. Robert Hooke was a natural philosopher and a polymath; he kept dogs with their internals in the open just to see if they’d survive. He kept looking at things through a microscope, and he named most of the small things we can see today.
Newton, Leibniz, Gauss, Euler and a few others created the whole foundation of modern mathematics. They are known for fundamentally changing how we perceive the universe. It’d be preposterous to assume that there isn’t a person today as bright as they were, and yet we don’t see people changing our perception of the universe that often. The last spree was more than a hundred years ago, with Maxwell, Planck and Einstein, and still, those were corrections (albeit fundamental) to the model.
Today, a scientist contents himself with scratching the surface of a minor field of astrophysics, and he’ll probably get a Nobel for that. But how many of you can name more than 5 Nobel laureates? Did they really change your perception of the universe? Did they invent things such as real artificial intelligence, or did they discover a better way of doing politics? Sadly, no. Not because they weren’t as smart as Newton or Leibniz, but because the easy things have already been discovered. Now we’re in for the hard and incremental science and, like it or not, there’s no way around it.
Today, if you wrapped tin foil around a toilet paper tube and played music with it, people would, at best, think you’re cute. Thomas Edison did that and was called a wizard. Nokia was trying to build a smartphone, but they were trying to make it perfect. Steve Jobs made one that was almost useless, people loved it, and he’s now considered a genius. If you try to produce a bad phone today, people will laugh at you, not think you’re cute, so things are getting harder for the careless innovators, and that’s the crucial point. Careless and accidental innovation is not possible in any field that has been exploited long enough.
Innovation and Business
Innovation is like business, you only profit if there is a market that hasn’t been taken. If you try to invent a new PC, you will fail. But if you produce a computer that has a niche that has never been exploited (even if it’s a known market, like in the Nokia’s smartphone case), you’re in for the money. If you want to build the next AI software, and it marginally works, you can make a lot of money, whether you open source your software or not. Since people will copy (copyright and patent laws are not the same in every country), your profit will diminish with time, proportional to the novelty and the difficulty in copying.
Rob’s point went further: “This isn’t just a matter of what people can or can’t do, it’s what people should or should not do”. Meaning, shouldn’t we aim for a world where people don’t copy other people’s ideas as a matter of principle, instead of accepting the fact that people copy? My answer is a strong and resounding NO! For the love of all that’s good, NO!
The first reason is simply that that’s not the world we live in, and it will not be as long as humanity remains human. There is no point in creating laws that do not apply to the human race, though it seems that people get away with that very easily these days.
The second point is that it breaks our society. An example: try walking into a bank and asking for investment in a project that will take 10 years to complete (at a cost of $10M), with the return coming over the 70 years that follow (at a profit of hundreds of millions of dollars a year). The manager will laugh at you and call security. This is, however, the time it takes (today) for copyright in Hollywood to expire (the infamous Mickey Mouse effect), and the kind of money they deal with.
Imagine that a car manufacturer develops a much safer way of building cars, say magical air bags. This company will be able to charge a premium, not just because of the development costs, but also for its unique position in the market. With time, its cars will save more lives than any other, and governments will want that to be standard. But no other company can apply it to their cars, or at least not without paying a huge premium to the original developer. In the end, cars will be much more expensive in general, and we end up paying the price.
Imagine if there were patents on the telephone, or the TV, or cars (I mean, the concept of a car), or on “talking to another person over the phone”, or “reminding you to call your parents once in a while”. It may look silly, but this is better than most patent descriptions! Most of the cost to the consumer would be patent payments to people who no longer innovate! Did you know that Microsoft makes more money from Android phones than Google does? Their contribution to the platform? Nothing. This was an agreement over dubious and silly patents that most companies accepted rather than being sued for billions of dollars.
In my opinion, we can’t just live in the 16th century with 21st century technology. You can’t expect to be famous or profit by building an in-house piece of junk or by spotting a new planet. Open source has nothing to do with it. The problem is not what you do with your code, but how you approach the market.
I don’t want to profit at the expense of others. I don’t want to protect my stupid idea that anyone else could have had (or probably already had, but thought it was silly), just because I was smart enough to market it. Difficult technology is difficult (duh), and it’s not just up to a team of experts to create it and market it to make money. Science and technology will advance from now on in a steady, baby-steps way, and the tendency is for this pace to get even slower and the steps smaller.
Another important conclusion for me is that I’d rather live in a world where I cannot profit horrendously from a silly idea just because I’ve patented it, than have monopolies like pharma/banking/tobacco/oil/media controlling our governments, or, more directly, our lives. I think the fact that we copy and destroy property is the most liberating fact of humanity. It’s the Robin Hood of modern societies, making sure that, one way or another, the filthy rich won’t keep getting richer. Explosive growth, monopolies, cartels, free trade and protection of property are core values that I’d rather see dead as a parrot.
In a nutshell, open source does not hinder innovation, protection of property does.
Anarchy and Science
July 16th, 2012 under Life, Politics, rengolin, Science, World.
If the world needed more proof that rational thinking is off the menu where humans are concerned, we now have a so-called anarchist group attacking science. Bombs, shootings and sabotage, with one single goal: to stop science from destroying our lives once and for all.
If you didn’t get it, you’re not alone. I’m still trying to understand the whole issue, but the more I read, the more I’m sure it’s just humanity reaching record levels of stupidity. Again.
First of all, the actions don’t make sense within the realm of anarchy. For ages, anarchism has been a non-violent banner. The anarchist is not tame, but is a pacifist. Anarchists fight for freedom from everything, mainly from violence and oppression. Since every state, no matter who controls it, is oppressive, anarchists fight the very existence of any central form of coercion.
Bakunin once wrote:
“But the people will feel no better if the stick with which they are being beaten is labeled ‘the people’s stick’.” (Statism and Anarchy )
This clearly refers to governments that base their rule on the people, such as democracies. For an anarchist, a democracy is as bad as a dictatorship, since even in its purest form it imposes the will of the average citizen onto the majority of the population. (If you thought it was the other way around, you clearly don’t understand democracy!)
In essence, anarchy is all about a long and non-violent migration to the total lack of central government, leaving the people (organised in local communities) to decide what’s best for themselves. If that works or not on a global level, I don’t know. But two key words pop out: non-violent and lack of central power.
In Peter Kropotkin’s own words:
Anarchism is a world-concept based upon a mechanical explanation of all phenomena, embracing the whole of Nature–that is, including in it the life of human societies and their economic, political, and moral problems. Its method of investigation is that of the exact natural sciences, by which every scientific conclusion must be verified. Its aim is to construct a synthetic philosophy comprehending in one generalization all the phenomena of Nature–and therefore also the life of societies (…) [source]
Thus anarchy, like science, is the art of finding the best answer by an iterative and non-violent method, without centralised powers dictating what the answer should be, but finding the answers by experimentation and verification, where everyone should come to the same conclusions.
Science has no central power and doesn’t provide support to any government or controlling body. There isn’t any scientist or organization in the world, nor has there ever been, that can dictate what scientists believe or can prove. The scientific method is the most democratic method of all, where anyone can repeat the same experiments and reach the same results; otherwise the hypothesis is plain wrong, and there is nothing anyone can do to force it to be true.
Science has been used by governments to impose lifestyles, borders and general ignorance, yes. Science has been used to develop unfathomably powerful bombs, yes. And it has been used over and over again to control and dominate countries and continents, yes. But that was never a merit of science, it was a merit of governments. Every major thing science gets blamed for is, actually, down to people. Describing how science has made our lives better would be boring and redundant.
If some scientists are idiots, it doesn’t mean the whole of science is. If governments abuse power, and science provides that power, it doesn’t mean science is to blame, but governments. If some bishops should burn in hell, it doesn’t mean religion is to blame, but what people make of it. The climate change fiasco, the criticisms of the US national health programme and the whole “God Particle” boom among religious people have shown that people are still completely ignorant and prejudiced when evaluating external information.
Pen and paper have been much more harmful to the world than science, and over a much longer period. Pride and honour have wiped out entire civilizations over millennia, well before science was so embedded in our culture. Barons, kings and presidents don’t need science to destroy our lives, it just happens to be available.
So, science and anarchy have two major points in common: non-violence and the lack of centralised government. Why on Earth would an anarchist group gratuitously attack scientists? Because they are not anarchists, they are just idiots. I truly hope this is an isolated incident. If anarchists of the world lose their minds like these ones, the only hope for humanity (in the long term) will be lost, and there will be no return.
Anarchist science policy
Declaration of Internet Freedom
July 3rd, 2012 under Digital Rights, Life, Media, Politics, rengolin, rvincoletto, World.
We stand for a free and open Internet.
We support transparent and participatory processes for making Internet policy and the establishment of five basic principles:
- Expression: Don’t censor the Internet.
- Access: Promote universal access to fast and affordable networks.
- Openness: Keep the Internet an open network where everyone is free to connect, communicate, write, read, watch, speak, listen, learn, create and innovate.
- Innovation: Protect the freedom to innovate and create without permission. Don’t block new technologies, and don’t punish innovators for their users’ actions.
- Privacy: Protect privacy and defend everyone’s ability to control how their data and devices are used.
Don’t get it? You should be more informed on the power of the internet and what governments around the world have been doing to it.
Good starting places are: Avaaz, Ars Technica, Electronic Frontier Foundation, End Software Patents, Piratpartiet and the excellent Case for Copyright Reform.
K-means clustering
June 20th, 2012 under Algorithms, Devel, rengolin.
Clustering algorithms can be used with many types of data, as long as you have a means of placing the data in a space where there is a concept of distance. Vectors are obvious choices, but not everything can be represented as N-dimensional points. Another way to plot data, one that is much closer to real data, is to allow for a large number of binary axes, like tags. So you can cluster by the number of tags the entries share, with the distance (relative to the others) being the proportion of shared tags against non-shared tags.
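As a toy illustration of that kind of distance (this is just the Jaccard distance over tag sets, not code from any particular clustering library):

    def tag_distance(a, b):
        # 0.0 when the tag sets are identical, 1.0 when they share nothing.
        a, b = set(a), set(b)
        if not a and not b:
            return 0.0
        return 1.0 - len(a & b) / len(a | b)

    print(tag_distance({"politics", "economy", "uk"}, {"politics", "economy", "eu"}))  # 0.5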
An example of tag clustering can be viewed on Google News; an example of clustering in Euclidean space can be seen in the image above (full code here). The clustering code is very small, and the result is very impressive for such simple code. But the devil is in the details…
Each group of red dots is generated randomly from a given central point (draw N randomly distributed points inside a circle of radius R centred at C). Each centre is randomly placed, and sometimes the groups collide (as you can see in the image), but that’s part of the challenge. To find the groups, and their centres, I throw random points (with no knowledge of the groups’ centres) and iterate until I find all the groups.
The iteration is very simple, and consists of two steps:
- Assignment Step: For each point, assign it to the nearest mean. This is why you need the concept of distance, and that’s the tricky part. With Cartesian coordinates, it’s simple.
- Update Step: Calculate the real mean of all points assigned to each mean point, and move the mean point there. This is basically moving the supposed (randomly guessed) mean to its rightful place.
On the second iteration, the means, that were randomly selected at first, are now closer to a set of points. Not necessarily points in the same cluster, but the cluster that has more points assigned to any given mean will slowly steal it from the others, since it’ll have more weight when updating it on step 2.
If all goes right, the means will slowly move towards the centre of each group and you can stop when the means don’t move too much after the update step.
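To make the two steps concrete, here is a minimal sketch in Python (not the code linked above, just an illustration; it assumes 2D points as (x, y) tuples and runs a fixed number of iterations instead of testing how much the means moved):

    import math
    import random

    def kmeans(points, k, iterations=100):
        # Initial guess: k random points from the data set play the role of the means.
        means = random.sample(points, k)
        clusters = [[] for _ in range(k)]
        for _ in range(iterations):
            # Assignment step: attach each point to its nearest mean.
            clusters = [[] for _ in range(k)]
            for p in points:
                nearest = min(range(k), key=lambda i: math.dist(p, means[i]))
                clusters[nearest].append(p)
            # Update step: move each mean to the centre of the points assigned to it.
            for i, cluster in enumerate(clusters):
                if cluster:  # a mean with no points keeps its old position
                    means[i] = (sum(x for x, _ in cluster) / len(cluster),
                                sum(y for _, y in cluster) / len(cluster))
        return means, clusters

A proper version would stop when the means stop moving rather than after a fixed number of iterations (and math.dist needs Python 3.8 or later), but the shape of the algorithm is all there.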
Many problems will arise in this simplified version, for sure. For instance, if a mean sits exactly between two groups, both pull it towards their centres with equally strong forces, so the mean never moves, and the algorithm thinks it has already found its group when, in fact, it has found two. Or a group can be so large that it ends up with two or more means assigned to it, splitting it into many groups.
To overcome these deficiencies, some advanced forms of k-means take into account the shape of the group during the update step, sometimes called soft k-means. Other heuristics can be added as further steps to make sure there aren’t two means too close to each other (relative to their groups’ sizes), or that there aren’t big gaps between points of the same group, but that kind of heuristic tends to be expensive to execute, since it examines every point of a group in relation to the other points of the same group.
All in all, still an impressive performance for such a simple algorithm. Next in line, I’ll try clustering data distributed among many binary axes and see how k-means behaves.
Tough decision
May 10th, 2012 under rengolin, Stories.
Peter wasn’t the most eclectic person, especially when the subject was musical styles. So it was a surprise for him when the alien that had landed in his living room (of all places on Earth) started telling him that they were going to erase, from the minds of all people, any memory of the best songs of every band that had ever performed on Earth.
This was an odd domination plan, to be honest, it looked more like some intergalactic prank, but hey, they’re aliens, right? You can never predict what aliens will do to your planet until they finally arrive and do, well, whatever they do when they arrive on new planets. And this was no exception.
According to the little alien, this was the first time anyone from his species had landed on Earth, and it was his duty to initiate Earthlings into the galactic customs. Peter tried to argue that Earth was in this very galaxy and that this was not part of our customs, but the little alien would not reconsider. After all, it’s not like Earth is a central planet or anything.
The more Peter tried to argue, the more he was convinced that the alien was not fooling around. He was actually quite serious, stating that this was the norm for the initiation of any planet into the galactic fellowship, something that all the other planets had gone through, too. There was no escape. The little guy got into his spaceship (or whatever that was, it didn’t look like it could fly in space, but Peter was no rocket scientist) and disappeared in mid-air, just as quickly and mysteriously as he had shown up.
There was one last thought that Peter should consider until the next morning (GMT): a single human could stop the initiation ceremony by killing himself. It was like an escape clause in the galactic contract. Either one being sacrifices himself (not killed by others) in the name of the fellowship, or all humans would have the best songs of all bands erased from memory. Forever.
Peter put the kettle on and sat on the dirty sofa of his small London flat. Was that a dream? Nope, he was wide awake, as proved by watching Rupert Murdoch on the telly. He was not drunk or intoxicated, so that shouldn’t be it, either. The kettle popped. He got up to get the tea bag and saw a business card lying on the kitchen sink, which read: “You have until midnight of today, Peter. To kill yourself in the name of the Fellowship, tear this card in half.” OK, now that was the confirmation he was waiting for. It was definitely not a dream.
But what was the problem with it? They weren’t erasing all songs, just a few. The best ones, yes, but according to which criteria? For him, Bohemian Rhapsody, Lazy and War Pigs were the greatest songs ever, but there were people who liked Abba, and the Beatles, and even those who did like Queen might prefer Under Pressure instead. How is it even possible to choose? Peter put the tea bag in the cup and poured water over it. The vapour lifted the bitter smell of green tea, which would have to brew for a few more minutes until perfect.
OK, so they could take the average of all favourite songs, or maybe a top-500 list, and remove duplicate songs per band. But that still doesn’t cover all songs of all bands. They must have a way to traverse all songs in history, including those that were never recorded by humans. But how can they judge the quality of songs no one knows exist? So they must have a different way to measure quality, an algorithm to judge by rhythm and choice of instruments and scales. Something that can be applied to virtually any audio signal to analyse its quality against a given set of standards, human standards. They must also understand perfectly the auditory system in humans, and human emotions, to know precisely what is good and what is just ok.
In that case, it didn’t matter what he liked; these were the songs that were practically and theoretically good, no, the best! Wow, that changed things to a whole new level. The songs he liked were just a handful, but all good songs, ever? That’s a different story. Erasing all good songs is much worse than erasing a single band from history, no matter how good that band is. It’s erasing everything that is good and keeping a mediocre culture; it’s reducing the cultural richness of humanity to what shows up on television or YouTube. It’s making a sad world even sadder!
That was something he could not allow to happen! In his own mind, he was now beginning to believe of himself the same thing he thought about the greatest band in the world. It’s better to lose the best band than the best song of every band, and as for him, well, it was better to lose him, even for himself, than to plunge humanity into even lower standards than today’s!
Peter looked at the tea cup; it was ready. The last green tea he’d ever have. He threw the bag in the sink and took a good sip. It burnt his tongue a bit, but no worries, that tongue wouldn’t care in a few minutes anyway. He got the card and sat on the sofa, with the tea cup in one hand and the card in the other. One more sip. This one was perfect, no burning. He put the cup away, held the card with both hands and started ripping it apart, very slowly. Hearing the sound it made was making his heart stop, or at least beat slower. Much slower.
When suddenly it hit him. No, not death, Lady Gaga.
With the quality of TV these days, Murdoch and Lady Gaga are pretty much all you see without cable, and she was in all her glory (or whatever that is) on the screen. Peter had a revelation. Since the only way to precisely define what is good music is through a set of experiments outside the human mind, based on auditory and emotional systems, as well as the components that music is built from, it was, therefore, impossible to find a good song from Lady Gaga. QED.
Not just Lady Gaga, mind you, but a lot of what has been produced lately, pushed by the media companies, including television. There was so much rubbish in the arts that it’d be impossible to find good music in more than half of what was produced in the last three decades! And, not to ignore alternative science, if they considered opinions, there would be a lot of songs that people wouldn’t even know existed.
The card was half-ripped, his tea was still warm. He put the card back where he got it from, sat on the sofa and finished his tea with the knowledge that, whatever that was, dream or bad trip, it was over. When he finished his tea it was Paris Hilton on the telly, doing something stupid, as usual. Peter felt somehow good watching that, knowing that those girls had saved humanity’s art history!
Copy cat |
| April 30th, 2012 under Physics, rengolin, Stories. [ Comments: 1 ]
Shaun was yet another physicist, working for yet another western country on yet another doomsday machine. Even that far from the last world war, governments still had excuses to spend exorbitant amounts of money on secret projects that would never be used, just for the sake of the argument. It never matters what you do in war, but what’s the size of your gun compared to the rest, and in that, his country was second to none. Not that anybody cared any more, or that anybody knew of it, since his country had never gone into a proper war in its history, but well, with these things, you can never be too sure, can you?
But I digress. Shaun, yes, the physicist. He had been working on his own project for nearly a decade now and had re-used the old pieces of the LHC in a much more miniaturized version, of course, but in essence, it was capable of creating elementary particles and at the same time entangling them. After the initial explosion, instead of losing the created particles into oblivion (what would be the point in entangling them in the first place, uh?), he actually converged the entangled particles back into atomic form. The idea was to create a clone army, or sub-atomic bombs, or whatever could be done to put fear in other countries. You know how scientists are attached to science fiction, and Shaun was no exception.
In the beginning he wasn’t very successful, and it took him nearly 5 years to produce a pair of atoms with their quarks and gluons entangled on the other side. While you could easily make atoms entangle in normal lab conditions using lasers, the moment you turned your machines off, they would go back into their natural state. But in this case, the effects were much more lasting. In recent years, he managed to create whole molecules that were virtually the same, stable for months, even years. Copy cats.
But what he didn’t expect (who would?) was that his experiments were also touching the adjacent m-branes of parallel universes. It was hypothesised in the past that some forces, like gravity, could leak to adjacent universes, and though that wasn’t widely accepted, it was very hard to prove it wrong. The problem was that, until today, nobody had reached energy densities intense enough to have a noticeable effect on the parallel universes. Shaun did.
If the parallel universe was, like ours, sparsely populated, with only a handful of pseudo-sapient species, he’d probably have hit empty space. But the universe he found was nothing ordinary. In fact, Shaun’s own experiments over the years had created a special condition, in which the aforementioned universe became aware of our own. Let me explain. His experiments, the entanglement of particles, did not always work, as I said earlier, and the less they worked (i.e. the less matter in this universe), the more they leaked into the adjacent universe.
A door to your own room
On a lovely spring evening, such as today, with daffodils and tulips blossoming and the warm spells finally arriving, Shaun would normally be working. Thirty storeys below ground. He would see none of that, or care, for that matter. His new molecules (DNAs this time) were working at an alarming rate. He had managed to duplicate an entire gene last week, and his team was now running loads of tests on the results. It required a lot of energy to create enough molecules to run all the tests, but his lab had an unlimited supply of everything.
With all his team elsewhere, Shaun was busy trying to expand his technique to achieve the whole sequence of a virus. That made the machine run at wild energy levels (quite a few PeV), and the whole thing destabilized for a moment, and stopped. Fearing he had made the surrounding city go dark, he checked all energy inputs, and they were all fine. Trying to measure a few currents here and there, Shaun looked for his multimeter and, oddly, it was on the workbench, not where he’d left it. Not surprising: somebody must have used it and not stored it properly, it happens. With his multimeter in hand, he started checking all currents and they all looked fine, apart from the 17th onwards, where the polarity was reversed.
That was odd. Seriously odd. As if his machine was actually providing energy back to the power plant, only that was impossible (it was no fusion chamber!). Without a clue, Shaun went back to his desk, left the multimeter by the lamp and reclined in his chair, looking into the infinite. The infinite, in this case, was his shelf rack. Everything was blurred, but a remarkably familiar yellow blur caught his attention, and his eyes focused for a moment, and clear as day (though it was never day in his lab), that was his multimeter. Exactly where he’d left it, with the dangling red wire over the black one.
He looked back at the table, and sure enough, his multimeter was there, too. Obviously, that one was someone else’s, but just to be sure, he got his own and started comparing them, finding the same imperfections, the same burnt mark, the same cuts. His head was not working any more; he went back to where he had found the other multimeter and started looking around for clues. It could very easily be a prank, but his head was not thinking. It was in discovery mode.
Obsessive as he was, he started noticing differences in that part of the room, compared to what it usually looked like. Almost like the room was displaced in time, with that part a few hours, maybe days, back. And he started putting things back in their proper places, tidying up as a mechanical task to help him think. When he was satisfied with the place, he turned around and jumped so high backwards that he hit his head on a red pipe that was hanging from the ceiling. It was Shaun, looking back at himself, smiling.
“Hello”, said the other Shaun. “…”. “Yes, I see, you’re in a bit of a shock. That’s understandable, I um, let me help you with the concept.” Shaun said nothing.
“See, you are a very interesting specimen. We’ve been monitoring your experiment ever since we detected the leakage from your universe to ours. Generally, we wouldn’t ourselves believe in multiple universes, but as things were clearly leaking from your universe, we had no other alternative.” Shaun was still speechless. “As you probably have guessed by now, this part of the room is in our universe. Actually, the working part of your experiment has been inside our universe for quite some time. More specifically, ever since it started working…”.
“Hey!” Shaun opened his mouth for the first time. “You can’t possibly say that you guys did all the work!” – without even knowing who they were, but that was too big an insult to let pass. “Oh, no, you got me wrong, Shaun. No, you’re absolutely right, you did everything. We just provided our universe to you.” Shaun was speechless again.
“Understand, we’re at a somewhat different level of technology from yours. In some cases, much more advanced, in others, much less.”, the other Shaun continued after a pause, probing for any offence that he could have made. “In practical matters, we’re much more advanced. Our universe has been extremely kind to us. We have a very dense population throughout our known universe; it’s actually hard to get to know all the cultures yourself, we just don’t live long enough. The fact that your universe has been leaking energy has boosted our physics so much that we managed to halve the energy consumption of all our technology and, at the same time, more than double our energy production levels!” Shaun would not let that one pass… “Lucky you, we have nothing of that…”
“I know! Very well indeed! And it’s in that respect that you guys are so much more advanced than us. Your theoretical physics is so advanced, your mathematics so robust, that they make our feeble attempts at modelling our universe a pre-school matter.” – “Ha!” said Shaun, “our mathematics is broken, Goedel has proven it and Turing re-proved it. Our theoretical physics is still fighting over string theory and the alternatives, and we’re getting nowhere fast!”
“On the contrary, Shaun. Your universe is limited, so your mathematics can only reach thus far. Your theoretical physics is considering things that we never imagined possible. Our universe is lame next to yours; the challenges that you face are the most delicious delicatessen for our theoretical physicists. There is an entire community, the fastest growing of all time, dedicated just to consuming the material you guys generated three centuries (of your time) ago!”
The other Shaun was breathless, smiling from ear to ear with a face like a dog waiting for you to throw the stick. There was a deep silence for a few moments. Shaun was afraid that someone would enter through the door and he would have to explain everything, and he was not sure he could, actually. He was still holding the last tool he was going to put somewhere safe. He looked at it and considered that that tool was not actually in his own universe, but somewhere else. Yet, it was there, in the same room.
“So,” – a pause – “how come you are… me?”, “Well, I’m not you, obviously, I’m just represented as you in this piece of our universe. I wouldn’t fit this room otherwise.”, “Oh, I get it.” lied Shaun. The other Shaun continued: “You see, your studies have allowed us to extrapolate your idea and re-create your own universe inside our own. This room is just the connection point; if you go through that door” – and pointed to an old door that led to the emergency exit – “you will continue inside our version of your universe.”, “Wait a minute, how much of our world have you replicated?”, “World, no, not just Earth, everything.” – a long pause, with wide open eyes. After a blink: “you mean, galaxies?”, “Yes, yes, all of them. Your universe is quite compact for all it has to offer, and we were at first intrigued by that, but then we understood that the constraints you have were necessary, and, well, an important feature to generate such high-quality theoretical physics.” “And we decided to lend an unused part of our universe so you could not only teach us by broadcasting your knowledge, but also run tests on our own universe.” “Most of your experiments are now part of our day-to-day life, from vehicles to communication devices to life-saving machines.” “You, Shaun, have made our lives so much better that it was the least we could do.”
“Is there anyone living in this version of our universe? I mean, human … hum … clones?”, “No, no. We thought that would be improper. We do try to live in it, just for the curiosity, actually. There are even holiday packages to travel to the wonderful places your universe has to offer. It’s nothing we don’t see in our own, but you know, travel agencies will always find an excuse to take your money, right?” and finished that sentence with a grin and almost a wink. His human traits were very good, almost as if he had been observing for far too long, making Shaun feel a little bit uneasy…
“Actually…” – the other Shaun continued – “maybe you could help us fix a few things on this side of the universe. Make things a bit more suited to the people from our side, what do you think?”. With the rest of the team deep in tests, it’d be weeks before they would even consider going back to the main lab, and nobody else would dare enter, after the several claims (in the private circle that knew of him) that his lab would produce a black hole that would consume Earth and everything else.
Shaun decided to go in, at least to explore the very convincing copy of his own world. Going up the emergency exit, he found the lift going all the way to the top, as expected. Outside, as expected, the early rays of the spring sun were casting long shadows on the trees and buildings. The nearby cattle farm was empty, though. When the other Shaun noticed Shaun’s curiosity, he added, “Ah, yes, you see, we decided not to include mammals, as they could eventually evolve into sapient beings and we’d be altering the history of our own universe. We didn’t want to do that!” Shaun thought it was sensible.
For several days, Shaun listened to all the complaints about his own universe and how it would fit their physiology. Animals were turned green to photosynthesise, trees would reproduce in multiple ways at the same time, genetic combinations of more than a pair of chromosomes were allowed, as was normal in this new universe, and many of the landscapes were altered to fit the gigantic stature of most of its inhabitants. Some parts were left untouched, or the travel agencies would lose a huge market, and some were shortened and simplified, for the less elaborate, but still pseudo-sentient, species.
Shaun was feeling very well, like a demi-god, changing landscapes and evolution at his own whim, much like Slartibartfast. How fortunate he was, the only human – correction – the only being in his universe (as far as he knew) to get to play with a toy universe himself.
After meeting with leaders of the populations of the alter-universe, receiving gifts and commendations (and a few kisses from the lasses), it was time to return to his own universe. Shaun felt a bit tired, but after drinking a bit of their energetic beverage, he blasted back to alter-Earth in his new hyper-vehicle, to his own alter-lab. There, only alter-Shaun was waiting to say goodbye. A handshake and a wink were enough to mean “I’ll be back, and thanks for all the fish”, which Shaun took as a warm gesture, rather than a creepy resemblance.
But as soon as Shaun stepped back into his own universe, he noticed some things were out of place. After being in an alter-universe for so long, it was only natural to misplace normal concepts, but some things were not normal at all, like a 10-metre-high corridor leading from his side of the room. Normally, it’d be no more than 2 metres, and there was a very good reason for that: humans are not that tall!
He ran through it to find a huge door to a huge lift. In the lift were a few people still discussing what had happened. “It was definitely not that big! We must have shrunk!” said one, “No, that’s not possible, that’s Hollywoodian at best!” said the sceptic. Shaun took the lift up to the ground level, and ran to the farm nearby, fearing the worst.
And the worst had happened. The cows were green, and the houses huge. Being a bad theoretical physicist himself, and not being able to count on the alter-physicists for theoretical matters, Shaun hadn’t taken into account that his machine was a duplication machine, of entangled particles. That means, for the layman to understand, that whatever happens to one invariably happens to the other, no matter in what part of the universe, or in this case, the multiverse, they are.
That, thought Shaun, would take a bit more than a few days to fix… but he knew how, and he was looking forward to fixing it himself!