White European Males, GamerGate and DongleGate
May 8th, 2016 under Digital Rights, Games, Life, OSS, Politics, rengolin, World. [ Comments: 1 ]
First things first, a disclaimer:
- I don’t condone, nor do I accept, any form of harassment: physical, verbal or electronic.
- I don’t mix technical qualities with life situations. Your choices, opinions, abilities and disabilities may affect the quality of your work, but this is not about those; it is about the result: your work.
- I don’t promote abusive behaviour as a way of getting your point across, even when no abuse was intended.
- I do promote inclusion in STEM to balance towards the real proportion in society.
- Both GamerGate and DongleGate were disasters on their own, for very different reasons. I want neither to happen.
- I have Asperger’s Syndrome and see things more black and white than most people. I cannot accept qualitative features being used for quantitative purposes. None of this is meant as an offence, or to explain or validate harassment, abuse or any other unethical behaviour. It’s just an analysis.
When Charles Babbage began creating his analytical engine, he was worried about the hardware and its implications for mathematics and the world. But we all know that hardware is only as good as its software, and so Ada Lovelace’s work was of equal importance to that critical milestone. Both of them were mathematicians of an elite that wasn’t thoroughly recognised until much later. Both were extremely methodical, eccentric and disconnected from reality. All well-known characteristics that Hans Asperger described in the 1940s as what we now know as autism.
In the 40’s to 60’s, only really brilliant mathematicians could understand computing, mostly because they were still developing it, but thousands of men and women took part in building and using computers. At that time, the proportion of people “using computers” was closer to the social distribution than it is today. However, the number of people working “with computers” was independent of their understanding of the underlying technology, so the distribution naturally followed that of the source group. But after the first real case for general computing (WWII), the world was left with a tool that could do so much more, and people realised that they needed to take it to the next level.
Still, too many people were clueless as to how computers worked, and a huge effort was made to get people “into computing”. But the importance and prevalence of computing in those days was non-existent, so the appeal to the general public, men and women alike, was close to zero. The kind of people who felt attracted to it then, and during the 70’s and 80’s, were the same groups as Babbage, Lovelace and Turing: people on the autistic spectrum. This is not to say that non-autistic people didn’t do it, or worse, that they couldn’t do it. On the contrary, the proof that this is not an autistic-only field is today’s proliferation of computer scientists around the world, regardless of their mental status, gender, race or culture.
During the 70’s, computers had specific purposes, and only universities and very big companies had them. The 80’s saw the first boom in “personal” computing, but it was still dominated by self-built kits, and those like me who remember that time fondly know what weirdos we were in the eyes of the general population. While more people were taking up computing careers, those experimenting at home still had a clear autistic predisposition.
It was only in the 90’s, when Bill Gates became a millionaire, that people started giving “some” credit to the field, and personal computing first challenged and then completely replaced mainframes. During the 80’s, operating systems were developed for common tasks like word processing, spreadsheets and simple databases, but it wasn’t until the 90’s that most people had one in their homes and small shops. It became ubiquitous only then.
But even in the 90’s, all the attempts to simplify programming (Logo? Basic?) couldn’t really help you do much with computers. They were (and still are) basically toys. So people who learnt Basic realised early on that they couldn’t write anything meaningful and would either have to delve deep into C, or give up completely. That still encouraged those of a more autistic disposition to stay, and the rest to find something more interesting to do.
But as with every spectrum, thresholds are biased.
If you understand a bit about autism, you know that all we want is to be left alone to our own devices. Don’t come to my house telling me what to do and how. Such intrusion is most upsetting for autistic people, and you will be faced with some unintentionally harsh responses, for genetic reasons that autistic people cannot control or fix.
Autistic people have *always* been excluded from social life, for thousands of years (maybe more?), and they have always tried to group into segregated societies, often characterised by bigotry and rudeness, not uncommonly harassment. The Royal Society was such a place, and not unlike the current computer science scene, it was dominated by “White European Males”.
It seems obvious to me that the “White European” part is easily explained by the degree of development that Europe had at the time (the 1600’s) compared to everywhere else in the world. The parallel with modern computer scientists is clear: North America and Western Europe have a much higher rate of Caucasians well educated and well positioned in society than the other groups, for reasons beyond the scope of this text.
When a field is new and takes real effort to enter, most of the people who get in will be of a similar disposition, in the same way that most voluntary army cadets will have a similar mentality. I would never be an army volunteer, but I have been a computer enthusiast since I was 5 years old.
Recent studies have shown that the proportion of males to females among high-functioning autistic people (the ones who like to solve complex problems for fun) is 4:1. But boys and girls behave very differently, with boys playing far more physically and verbally violent games, and girls being more sensitive. With a starting ratio of 4:1, it’s not hard to see how that biased self-selection can get to 10:1 or more.
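To see how a modest starting bias can compound, here is a minimal sketch. The retention rates and number of stages are purely illustrative assumptions, not figures from any study; the point is only that a small per-stage bias multiplies.

```python
def amplified_ratio(start_ratio=4.0, stay_a=0.9, stay_b=0.6, stages=3):
    """Compound a starting ratio through several biased self-selection
    stages: group A keeps stay_a of its members per stage, group B keeps
    stay_b. All numbers here are illustrative assumptions."""
    a, b = start_ratio, 1.0
    for _ in range(stages):
        a *= stay_a
        b *= stay_b
    return a / b

# A 4:1 start plus three stages of mildly biased retention gives 13.5:1
print(round(amplified_ratio(), 2))
```

With these made-up numbers, three rounds of self-selection already push a 4:1 population past the 10:1 mark.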
What it has become
But after the initial self-balancing, true bigots and abusers (trolls) saw the chance to belong to a society that was professing, for completely different reasons, that different people be kept out. I hope it’s clear enough that high-functioning autistic people have a valid and important reason to keep people out of their lives and groups: otherwise, they cannot function properly.
Moreover, autistic people tend to respond badly to social pressure, and that includes behaviour that is often misinterpreted as harassment, bullying and violence. It is not uncommon to see very drastic ends to really sad stories.
Autistic people also tend to trust others more readily than usual, and are therefore much more easily abused by trolls, who will become part of a community and adopt its modus operandi, but not necessarily its intentions.
People from less advantaged backgrounds (wealth, disabilities, minorities, life choices) had even fewer chances of getting into a club that was trying to keep people out. And with trolls inside, entry becomes impossible; that’s how situations like GamerGate happen.
It is important to separate the original cause of aggregation and the demand for separation, sometimes aggressive, as a classic high-functioning autistic process, from the subsequent harassment and directed, intentional aggression that trolls carried out after they took over well-meaning but fearful and trusting, mostly autistic, people.
That fact, however, does not excuse any aggression, including from autistic people. But what people have to understand is that, if the aggression comes from an autistic person, even a high-functioning one, they very likely cannot control it and need help. Being offended is ok, but reserving the right to then discharge your own contained aggression, even if you are a minority, is not the way to solve this.
We all have problems, but turning off your care-meter because you are a minority and have just been offended is not ok. And that includes autistic people, too.
Why is this important?
Because computer science moved out of the nerd zone at least 20 years ago, and much more so in the last 10.
The barrier to entry into technology is so low now that anyone can get in, and once they’re in, they don’t need to be autistic to enjoy it. Furthermore, neurotypical people can be as good as (or better than) autistic people even at the hardest of problems. After all, being high-functioning autistic doesn’t mean you’re smarter; it just means you want to do something that keeps you away from people, and talking to machines is the best thing I can think of.
So nowadays we have all kinds of people, and with that, we’re back to the real distribution that societies have. All minorities are now represented in proportion to what they are in society. But trolls are haters, and they know some very cunning ways to keep unwanted people out, mostly using subversive tactics like physical, verbal and social abuse, doxing, DDoSing, etc.
We need to remove the trolls from our communities, together. This is not a minorities-versus-majorities fight; this is a fight for the right to be safe. The new minorities have as much right to be safe as the original minority who created the space. And both minorities have the right to be represented, but so do the majorities. The only thing we want to get rid of is the trolls.
What we should move towards
So, autistic people want a space of their own, trolls take over and destroy the Internet. Minorities try to participate, trolls shoot them down and behave like assholes. What else is new? As with how it all started in the 40’s, we need a distribution compatible with the rest of society. The very definition of a minority is that there are fewer of them, so it makes no sense to expect an equal distribution of minority and majority on each specific scale.
For instance, on average worldwide, we have half men, half women, so I would expect the same distribution in STEM subjects. We may be far from it in computer science and physics, but not in biology or chemistry. No single subject will be exactly 50/50, but we can expect STEM as a whole to be in that ballpark.
Of the world population, at a glance, about 18% is Han Chinese, about two thirds of that is “European”, and a third of it each is Arabic, Hindu and African, spread all over the world. The exact distribution doesn’t matter much, but I’d expect STEM to follow a similar distribution in the same way.
Now, getting there will involve two distinct activities:
- Deep grassroots movements to increase the development and literacy of impoverished communities, and to educate better-off communities about equality and inclusion.
- Improving STEM’s inclusiveness and attractiveness for all members of society, as well as removing the exclusionary elements (trolls) from the existing community.
People who are keen on seeing global equality (1) have to fight that battle outside of STEM subjects. The fights you should have inside are those against the discrimination of minorities that can already be represented in STEM subjects (2).
For example, all the feminists advocating for inclusion in open source communities already have the will and ability to participate on equal ground with men. The fact that someone is gay or transgender makes absolutely no difference in a STEM community and should bear no weight in inclusion or acceptance. The fact that they are not included is a horrible mistake and has to be fixed inside STEM communities.
We should move towards STEM communities that have a relevant distribution as far as STEM can have on its own. We’re not looking for equal numbers of all minorities, we’re looking for equal distribution of minorities, and those are two very different things.
What we cannot have
What seems to be happening, and it will not fix anything, is that we’re swinging to the other extreme.
We have to discourage any kind of troll, regardless of whether they agree with you. It may be satisfying to see someone on your side trolling someone you’re against, but that’s as bad as the other side’s trolling. Encouraging hate, even in the form of biased consensus and imposed cultural traits, is just as bad as any other form of harassment.
More importantly, it’s that form of harassment that gets to the core of autistic people, including high-functioning ones. It’s the very reason why we hide from people and talk to machines. Cases like DongleGate are as extremist as GamerGate, and as offensive to me.
The fact that one misinterpreting person, with one picture and one tweet, can get someone fired is disconcerting beyond words. As disconcerting as people ganging up on girls just because they want representation in their games. Both behaviours are inexcusable.
What we cannot have is the sides flipping: the minorities who have suffered so far gaining the upper hand and claiming the right to harass the majority, or worse still, the forgotten minority that started it all and took no intentional part in any of the bullying.
We need to protect the minorities from abuse, and that includes the odd folks who show no visible impairment but behave oddly and sometimes aggressively. Those people are too often interpreted as bullies when all they want is to be left alone, and all they need is help adapting to an alien society.
Oh, you want support?
August 25th, 2015 under Computers, Corporate, OSS, rengolin, Unix/Linux. [ Comments: none ]
I don’t know how many open source communities have the same problem, but on the LLVM list we receive more than a few emails a year from people really upset that no one has fixed their bugs quickly enough, or that no one replied to their emails. I find this behaviour quite interesting from a sociological point of view, but if you behave that way, let me set you straight: it’s rude. Really.
The open source business model relies on the sharing of ideas, accumulation of technology and niche development. Small, incremental pieces are incorporated into stabilising products that provide value to groups of people.
For example, MacOS and Linux provide different values to the same user base (desktop users). The more commercial software, like MacOS, provides a stable, recognisable interface with powerful integration with other products of the same line, while the open counterparts provide a more experimental interface, but greater control and spread of knowledge.
Apple’s business model is quite different from that of most Linux distributions, but both heavily use and derive from open source infrastructure (kernel, compilers, libraries). So, if you purchase MacOS, you’re getting not only the eye candy, but also some components that are open source, like LLVM. What companies get from investing in LLVM is a matter for a different kind of post, but rest assured, the license is really clear: “THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED”.
When most Linux/BSD users have a problem with their programs, they first search the web for the error message. In the uncommon case where they don’t find an answer, they then post on forums or mailing lists, often politely, dumping their logs and error messages, and patiently wait for an answer, which may take a day, a week, or sometimes be forgotten altogether. They then try a different forum, or “ping” their messages, work a bit harder, find more causes, etc.
After all, no one is as interested in your problems as much as yourself. Let me make that one clear:
No one is as interested in your problems as much as yourself.
Most people that deal with open source understand that. Most people that buy software don’t. But there is an intermediate crowd, that has recently grown tremendously: the freemium folks.
Most people now enjoy an impressive number of free products amidst all the software that they did purchase, and for most of them, they receive the same quality of support as for their paid products. That seems counter-intuitive, even paradoxical, but the answer is quite simple: they’re not free.
If you haven’t figured it out yet, let’s get that one clear, too: you pay for it with your personal information. Accurate location logs, purchase history, personal identification, credit status, number of friends (and all their personal information too), who you like and who you don’t, etc. All that information is dutifully stored and used for their profit. A profit that is orders of magnitude higher than it would be if they did none of that and you paid $10 for it. Even $100. Hell, even if you paid $1000 per year, it would have been cheaper; or better said, they would make less money from you.
So, it only makes sense that they treat you like a fully paid member of their exclusive club, a king even, so that you don’t jump ship and go share your cat pictures on some other social website. Some people quickly understand what’s at stake, but most keep using the service as a matter of convenience. They know the price of their privacy, and they exchange it for convenience.
As predicted by many in the 90s, and repeated by most in the last decade, open source (free/libre/etc.) has taken over the roots of computing and is now the base for all technology. From stock markets to the ISS. From high-performance computing centres to schools. From operating systems to games. Open source is everywhere, and many people who never thought they would have any contact with open source are now getting exposed to it first hand. The pervasiveness of open source technologies is so complete that I’d risk saying there isn’t any profitable company today that doesn’t use or ship open source with its products. There isn’t a gadget you own that didn’t use it during design or production, or rely on it for its operation.
And, as with any other technology, open source occasionally fails. And when it fails, helpful messages pop up where users were expecting a nice “support contract” to fix it straight away. You may contact whoever you paid, and they may help you, or they may give the standard response that it’s not their problem. After all, your privacy is worth a lot of money, but not that much.
Because open source is everywhere, more and more people who are not used to how it works are now falling prey to the support contract fallacy.
You may get expedited help from the makers of “free” Android apps, or from social media websites, and they may provide their services for free and still be very friendly and helpful, but you cannot compare that freedom with libre/open source freedom. In free software / open source, we do not store your personal data, nor do we want to. We do not track your whereabouts, nor do we contact your friends on your behalf. We don’t do any of that, mostly because it’s not our business model, but also because most of us believe it’s wrong.
Because you’re not paying us, directly or indirectly, you cannot ever expect that anyone will help you, still less in any reasonable time. The overwhelming majority of people working on open source projects are directly or indirectly paid by companies, and that’s their day job: fixing the problems that their companies think will best improve their products. Only a small minority of lucky bastards can work on free software without any compensation or direction from a company, and even those people have their own agenda. And that’s very rarely aligned with yours.
Expecting support, or complaining about the lack of help or interest in your problems, is like carrying a large bag through the underground and being mad at people for not helping you. Granted, many people will help you, but as a selfless act, not as a support contract. Only those who are going in the same direction, or have a free hand, or share some history (say, they have been in the same situation before) will likely help you, and different people align differently with your problem. Whether it’s a large suitcase, a baby pram, or an awkward, fragile painting, different people will help at different times.
In libre/open source, the situation is exactly the same. We’re all working on our own projects and priorities, and unless your problem is directly related to my paid job, I will rarely even look at it. It’s not out of spite, but if I stop doing the work I’m paid for and start helping all those in need, I’ll lose my job and won’t be able to help anyone any more. Not to mention feed my family.
The social contract
When you send an email that no one pays attention to, try phrasing it differently. Or better yet, do some more investigation, provide more information, show that you care about what you’re asking. There’s nothing worse in a forum than people asking others to solve their homework. The general rule of free help is that you must show equal or more interest and sweat in what you’re asking than the people who are helping you; it’s exactly the opposite of a support contract. Moreover, your behaviour will tell people whether to help you or not. The more aggressive and demanding you become, the fewer people will help you. The more humble and hard-working you are, the more the opposite will happen.
To understand that social contract, think of it as an exchange. If you bring a lot of information with your request, I will learn a thing or two from it. I enjoy learning, so even if it’s not my area, I may feel compelled to help you just because you might teach me something. If there is any payment in community help, this is it: the knowledge you pass on to the people helping you, and the joy they feel in learning a new thing and helping a nice chap.
In the end, most people who are new to such environments end up learning this really fast and become enthusiastic contributors. This is, for me, the beauty of the lack of payments. Each one values the newly acquired knowledge in different ways, so it’d be impossible to treat it as a standard currency. But since I don’t tell you how much I value your contribution, and vice versa, we cannot know who made the profit. More importantly, in this case, profit is not the difference between my gains and your gains, but the difference between my expected gains and my actual gains, which is completely independent of your exchange ratio.
This is precisely what Buckminster Fuller meant by Synergetics. The total system behaviour is not always predictable from the behaviour of its parts, and in some systems the aggregate value can be more than the sum of the individual gains. This is why the open source business model is so infectious and addictive. Once you’re in, there’s no way out. But you have to put in some effort.
Collection of data is not the only problem
November 13th, 2014 under Digital Rights, InfoSec, Life, Politics, rengolin. [ Comments: none ]
What the NSA has taught us is that mass surveillance is not as hard as people used to think. Other governments, and most commercial companies, do that, too. With the advent of smartphones we’ve learned to ignore most of that for the sake of convenience, and most of the time, it’s ok.
It’s true that bulk surveillance by governments can spark enough false positives to make people worried, and that Google and Facebook are using your personal details to make a bucket-load of money, while some others are selling those details, sometimes without even realising it.
When you think of all that a government can do with your data, or all the money that big corporations are making with your personal information, it’s not surprising to think: “where’s my share in this?”. Some people have even tried to evaluate how much you would get for selling different types of personal information to corporations. But is that the real question we should be asking?
Should we be concerned with what data we leak and try to minimise it, or should we really be thinking about what they can do with that information? Of course, any answer will be a mix of both (since not all investigating parties are well-intentioned or law-abiding), but limiting governments’ and corporations’ powers can go a long way towards making the data useful but not harmful.
I have said this before, and I maintain my position: no one has ever had privacy. Parents have eavesdropped on their kids’ behaviour since the dawn of humanity as a way to raise them into responsible adults. The concept of “being responsible” has changed over the millennia, but parents have not.
Law-making and law-enforcing bodies rely on eavesdropping as their primary way of acquiring information. Since people normally only do bad things when no one is looking, expecting the police to use only highly visible methods of enquiry (such as asking in person or patrolling an area) becomes impossibly expensive very quickly. It is true that random checkpoints, fake speed cameras and signs do help awareness, but that’s also not optimal from a monetary point of view.
Privacy also goes against common sense in the outside world. If you take a bus, everyone on that bus knows you’re there, even if they don’t know who you are. If there is a picture of you on the bus saying “wanted, dead or alive”, they will see you and report you. There’s little you can do besides hiding and never showing your face again. Famous people (actors, etc.) have the same problem, and their solution is pretty much to hide.
The data you “leak” is also the data that defines you. Where you have been, what you like, where you work and live, what food you eat and what you do on Saturdays. Collecting that data and providing a service on top of it is actually extremely beneficial to you. The problem is who has access to that information.
Tesco knows what I need to buy better than I do. They send me discount vouchers for fresh mozzarella cheese, fresh basil and fresh tomatoes on the vine. They know I love Caprese salad, and I actually like Tesco knowing that, because I get a slightly cheaper Caprese salad once in a while.
Google Maps knows where I live and work, so that when I’m going home I can just say “Ok Google, go home”, and it does the rest. If I didn’t share that kind of information with Google, it would never be able to do what I want it to. Examples like that are everywhere, and each company must have access to a wide range of your data (location, shopping habits, browsing habits) to be able to do so. It’s an unavoidable fact of information theory that you need enough entropy to find patterns.
The real problem here is what companies end up doing with your data, and how well they protect it from malicious outsiders. Even if the company is benign, once it gets hacked, your bundle of personal data, enough to infer pretty accurate patterns about your personal life, is out there. Who knows what the attackers will do with that?
Another problem is blanket approvals to bypass the legal system and arrest, judge and execute individuals solely based on bulk surveillance patterns that are known to generate an immense number of false positives, not only because the algorithms are inexact, but because the people filtering the data and creating the rules don’t possess enough knowledge to know what they’re looking for in the first place.
So, a pragmatic view on surveillance should attack the problem of the legality of actions taken on data, not just the legality of acquiring data in the first place. The legal system can already cope with that: for instance, when evidence is found via illegal means (an unapproved wiretap or microphone), it cannot be used against the accused. The “Patriot Act” changed all that in the US, and in other countries, and that’s the first thing that has to be changed back to a sane standard. Governments should never have the ability to bypass the judicial and executive systems based on *any* collected data, especially if it was collected in bulk, by matching irrelevant patterns.
Finally, there should be a guarantee in the license that the company is required to store such data in a protected way, following a set of standard cryptographic techniques and solutions, and there should be a clause on how they would destroy the data at the smallest sign of intrusion. To compensate for the total loss of service that this would cause for all users, they should store such data in different locations, using different techniques and keys.
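As a toy sketch of the “different keys, different locations” idea (this is a simple XOR split for illustration only; a real deployment would use an established secret-sharing or encryption library): the record is split into shards that individually look like random noise, so a breach of any single location reveals nothing, and destroying any one shard destroys the record.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(data: bytes, shards: int = 3) -> list:
    """Split data into `shards` pieces. Every piece is needed to rebuild,
    and each piece on its own is indistinguishable from random bytes."""
    pads = [secrets.token_bytes(len(data)) for _ in range(shards - 1)]
    last = data
    for p in pads:
        last = xor_bytes(last, p)
    return pads + [last]

def combine(pieces: list) -> bytes:
    """XOR all pieces back together to recover the original data."""
    out = pieces[0]
    for p in pieces[1:]:
        out = xor_bytes(out, p)
    return out

record = b"purchase history"
pieces = split(record)            # store each piece at a different location
assert combine(pieces) == record  # all pieces together recover the record
```

Deleting one shard at one location is then enough to “destroy” the data everywhere, which is exactly the property the clause above asks for.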
It may seem daunting for small companies providing small services, but so did cheap, scalable storage and service provision until Amazon created AWS and all the others followed suit. If there is a demand, someone will create the solution. That has been the human response to everything since we came down from the trees to conquer the planet, and we won’t stop here.
It’s not the data, it’s what governments and corporations can do with the data, and how to protect it from malicious parties.
Trashing Chromebooks
June 5th, 2014 under Computers, Hardware, rengolin, Unix/Linux. [ Comments: 8 ]
At Linaro, we do lots of toolchain tests: GCC, LLVM, binutils, libraries and so on. Normally, you’d find a fast machine where you could build toolchains and run all the tests, integrated with some dispatch mechanism (like Jenkins). Normally, you’d have a vast choice of hardware to choose from for each form factor (workstation, server, rack mount), and you’d pick the fastest CPUs and a fast SSD with enough space for the huge temporary files that toolchain testing produces.
The only problem is, there aren’t any ARM rack servers or workstations. In the ARM world, you either have many cheap development boards, or one very expensive (100x more) professional development board. Servers, workstations and desktops are still non-existent. Some have tried (Calxeda, for example) and failed. Others are trying with ARMv8 (the new 32/64-bit architecture), but all of those are under heavy development, not even of alpha quality.
Meanwhile, we need to test the toolchain, and we have been doing so for years, so waiting for a stable ARM server was not an option, and still isn’t. A year ago I took on the task of finding the most stable development board that is fast enough for toolchain testing and filling a rack with them. Easier said than done.
Amongst the choices I had, the Panda, Beagle, Arndale and Odroid boards were the obvious candidates. After initial testing, it was clear that the Beagles, with only 500MB of RAM, were not able to compile anything natively without some major refactoring of the build systems involved. So, while they’re fine for running remote tests (SSH execution), they have very little use for anything else related to toolchain testing.
The Pandas, on the other hand, have 1GB of RAM and can compile any toolchain product, but the timing is a bit on the wrong side. Taking 5+ hours to compile a full LLVM+Clang build, a full bootstrap with testing would take a whole day. For background testing of the architecture that’s fine, but for regression tracking and investigative work, they’re useless.
With the Arndales, we had no such luck. They’re either unstable or deprecated months after release, which makes it really hard to acquire them in any meaningful volume for contingency and scalability plans. We were left, then, with the Odroids.
HardKernel makes very decent boards, with fast quad-A9 and octa-A15 chips, 2GB of RAM and a big heat sink. Compilation times were in the right ballpark (40~80 min), so they’re good both for catching regressions and for bootstrapping toolchains. But they had the same problem as every other board we tried: instability under heavy load.
Development boards are built for hobby projects and prototyping. They can normally run at quite high frequencies (1~2 GHz), but are designed for low-power, mostly stand-by usage. Toolchain testing, however, involves building the whole compiler and running the full test-suite on every commit, and that means 100% CPU usage, 24/7. Since build times are around an hour or more, by the time one build finishes, other commits have gone in and need to be tested, making it a non-stop job.
CPUs are designed to scale down their frequency when they get too hot, so throughout normal testing they stay stable at their operating temperature (~60C). Adding a heat sink only lets the CPU run at a higher frequency for the same temperature, so it won’t solve the temperature problem.
The issue is that, after running for a while (a few hours, days, weeks), the compilation jobs start to fail randomly (the infamous “internal compiler error”), in different places of different files every time. This is clearly not a software problem, but if it were the CPU’s fault, it would have happened a lot earlier, since the CPU reaches its operating temperature seconds after the test starts, yet the failures only appear hours or days into a full-time run. The same argument rules out any trouble with the power supply, since it should have failed at the beginning, not days later.
The problem that the heat sink doesn’t solve, however, is the board’s overall temperature, which gets quite hot (40C~50C), and has negative effects on other components, like the SD reader and the card itself, or the USB port and the stick itself. Those boards can’t boot from USB, so we must use SD cards for the system, and even using a USB external hard drive with a powered USB hub, we still see the failures, which hints that the SD card is failing under high load and high temperatures.
According to SanDisk, their SD cards should be fine in that temperature range, but other parties might be at play, like the kernel drivers (which aren’t built for that kind of load). What pointed me to the SD card in the first place was that, when running solely on the SD card (for both system and build directories), the failures appear sooner and more often than when running the builds on a USB stick or drive.
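One crude way to probe that hypothesis is to hammer the suspect medium with write/read-back cycles and compare checksums. This is only a sketch of the idea, not what was actually run at the time; the path, file size and round count are arbitrary:

```shell
# Crude media soak test: repeatedly write a file, then read it back
# twice and compare checksums. Silent corruption under load shows up
# as a mismatch between the two reads.
soak() {
    target=${1:-./soak.bin}
    rounds=${2:-10}
    i=0
    while [ $i -lt $rounds ]; do
        dd if=/dev/urandom of="$target" bs=1M count=8 2>/dev/null
        sync
        sum1=`md5sum "$target" | cut -d' ' -f1`
        sum2=`md5sum "$target" | cut -d' ' -f1`
        [ "$sum1" = "$sum2" ] || { echo "corruption at round $i"; return 1; }
        i=$((i+1))
    done
    echo "OK after $rounds rounds"
}
```

Pointing `target` at the SD card while a build saturates the CPU should make failures appear much sooner than on a USB drive, if the hypothesis holds.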
Finally, with the best failure rate at one per week, none of those boards is fit to be a build slave.
That’s when I found the Samsung Chromebook. I had one for personal testing and it was really stable, so amidst all that trouble with the development boards, I decided to give it a go as a buildbot slave, and after weeks running smoothly, I had found what I was looking for.
The main difference between development boards and the Chromebook is that the latter is a product. It was tested not just for its CPU, or memory, but as a whole. Its design evolved with the results of the tests, and it became more stable as it progressed. Also, Linux drivers and the kernel were made to match, fine tuned and crash tested, so that it could be used by the worst kind of users. As a result, after one and a half years running Chromebooks as buildbots, I haven’t been able to make them fail yet.
But that doesn’t mean I have stopped looking for an alternative. Chromebooks are laptops, and as such they’re built with a completely different mindset to a rack machine, and the list of modifications needed to make them fit the environment wasn’t short. Rack machines need to boot when powered up, give 100% of their power to the job and dissipate heat efficiently under 100% load for very long periods of time. Precisely the opposite of a laptop design.
Even though they don’t fail the jobs, they did give me a lot of trouble, like having to be booted manually, overheating their batteries and lacking a Linux image that could be deployed via network boot. The steps to fix those issues are listed below.
WARNING: Anything below will void your warranty. You have been warned.
To get your Chromebook to boot anything other than ChromeOS, you need to enter developer mode. With that, you’ll be able not only to boot from SD or USB, but also to change your partitions and have sudo access on ChromeOS.
Once there, go to the console (CTRL+ALT+->), login as the user chronos (no password) and set the boot process as described on the link above. You’ll also need to run sudo crossystem dev_boot_signed_only=0 to be able to boot anything you want.
The last step is to make your Linux image boot by default, so that when you power up your machine it boots Linux, not ChromeOS. Otherwise, you’d have to press CTRL+U on every boot, and remote booting via PDUs would be pointless. You do that via cgpt.
You need to find the partition that boots on your ChromeOS by listing all of them and seeing which one booted successfully:
$ sudo cgpt show /dev/mmcblk0
The right partition will have the information below appended to the output:
Attr: priority=0 tries=5 successful=1
If it has tries left and booted successfully, this is probably your main partition. Move it back down the priority order (6th place) by running:
$ sudo cgpt add -i [part] -P 6 -S 1 /dev/mmcblk0
And you can also set the SD card’s partition to priority 0 by running the same command over its device.
With this, installing Linux on an SD card should get you booting Linux by default on the next boot.
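Wrapped up, promoting the SD card’s kernel partition could look something like the sketch below. The device name /dev/mmcblk1 and partition index 2 are assumptions (they vary by model and image), so run `sudo cgpt show` on the SD device first to confirm:

```shell
# Hypothetical helper: raise the SD card's kernel partition to the top
# of the boot order. Device and partition index are assumptions; verify
# them with `sudo cgpt show` before running.
promote_sd_boot() {
    dev=${1:-/dev/mmcblk1}
    part=${2:-2}
    # -P: boot priority, -S: mark as successful, -T: boot tries allowed
    sudo cgpt add -i "$part" -P 10 -S 1 -T 5 "$dev"
}
```

The priority 10 is arbitrary, just comfortably higher than the 6 used for the ChromeOS partition above (cgpt priorities go up to 15).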
You can choose among a few distributions to run on the Chromebooks; I have tested both Ubuntu and Arch Linux, which work just fine.
Follow those steps, insert the SD card in the slot and boot. You should get the Developer Mode screen and, after waiting long enough, it should beep and boot directly into Linux. If it doesn’t, it means your cgpt meddling was unsuccessful (been there, done that) and will need a bit more fiddling. You can press CTRL+U for now to boot from the SD card.
After that, you should have complete control of the Chromebook, and I recommend adding your daemons and settings to the boot process (init.d, systemd, etc.). Turn on the network, start the SSH daemon and any other services you require (like buildbots). It’s also a good idea to change the governor to performance, but only if you’re going to use it full time under heavy load, and especially if you’re going to run benchmarks. For the latter, though, you can do it on demand, and don’t need to leave it on from boot time.
To change the governor:
$ echo [scale] | sudo tee /sys/bus/cpu/devices/cpu[N]/cpufreq/scaling_governor
scale above can be one of performance, conservative, ondemand (the default), or any other governor that your kernel supports. If you’re running benchmarks, switch to performance before and back to ondemand after. cpu[N] is the CPU number (starting at 0); do it for all CPUs, not just one.
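Looping that echo|tee line over every CPU saves typing. A small sketch (the SYSFS and SUDO variables are only parameterised so it can be dry-run as a regular user against a fake tree):

```shell
# Apply a governor to all CPUs in one go. SYSFS and SUDO are
# overridable so the function can be exercised without root or a
# real cpufreq tree.
SYSFS=${SYSFS:-/sys/devices/system/cpu}
SUDO=${SUDO:-sudo}
set_governor() {
    for gov in $SYSFS/cpu[0-9]*/cpufreq/scaling_governor; do
        [ -e "$gov" ] || continue
        echo "$1" | $SUDO tee "$gov" > /dev/null
    done
}
```

Usage is then just `set_governor performance` before a benchmark run and `set_governor ondemand` after.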
Other interesting scripts get the temperatures and frequencies of the CPUs:
$ cat thermal
ROOT=/sys/class/thermal
for dir in $ROOT/*/temp; do
    device=`basename $(dirname $dir)`
    temp=`echo $(cat $dir)/1000 | bc -l | sed 's/0\+$/0/'`
    echo "$device: $temp C"
done
$ cat freq
ROOT=/sys/devices/system/cpu
for dir in $ROOT/cpu[0-9]*; do
    if [ -e $dir/cpufreq/cpuinfo_cur_freq ]; then
        freq=`sudo cat $dir/cpufreq/cpuinfo_cur_freq`
        freq=`echo $freq/1000000 | bc -l | sed 's/0\+$/0/'`
        echo "`basename $dir`: $freq GHz"
    fi
done
As expected, the hardware was also not ready to behave like a rack server, so some modifications were needed.
The most important thing is to remove the battery. First, because you won’t be able to boot it remotely with a PDU if you don’t; but more importantly, because the heat from constant usage will destroy the battery. Not just make it stop working, which we wouldn’t care about, but slowly release gases and bloat, which can be a fire hazard.
To remove the battery, follow the iFixit instructions here.
Another important change is to remove the lid magnet that tells the Chromebook not to boot when powered with the lid closed. The iFixit post above doesn’t mention it, but it’s as simple as prying the monitor bezel open with a sharp knife (no screws), locating the small magnet on the left side and removing it.
With all these changes, the Chromebook should be stable for years. It’ll be possible to power cycle it remotely (if you have such a unit), boot directly into Linux and start all your services with no human intervention.
The only thing you won’t have is serial access to re-flash it remotely if all else fails, as you can with most (all?) rack servers.
Contrary to common sense, the Chromebooks are a lot better as build slaves than any development board I have ever tested, and in my view that’s mainly due to the amount of testing they have gone through as a consumer product. Now I need to test the new Samsung Chromebook 2, since it’s got the new Exynos Octa.
While I’d love to have more options, different CPUs and architectures to test, it seems that the Chromebooks will be the go-to machine for the time being. And with all the glory going to ARMv8 servers, we may never see an ARMv7 board run stably on a rack.
Tale of The Water |
| October 20th, 2013 under Digital Rights, Media, Politics, rengolin, Stories. [ Comments: 1 ]
In a village, far from any big city, there lived a family which had access to clean water from a nearby river. With the rain from many spring and autumn months being abundant, the family never had any trouble to wash clothes, cook and drink, or even have a good long bath. But the village, as any good village in the world, grew along that river, and each family had access to clean and fresh water.
As time passed, the legend of good water spread across the land, and more and more people joined the thriving community of the water village. But with growth comes lack of space, and not everyone had direct access to the river; newcomers had to cross the original settlers’ gardens to get to the water. Some fights and some profits later, the community, which now extended across several rows of houses on both sides of the river, as far as the eye could see, had a meeting to decide what would be done about the “water problem”.
The eldest, and self-elected leader of the community, had many friends among the first settlers. He wasn’t himself living by the river, having arrived not long ago, but with a few favours (especially helping increase the profits of the original settlers for sharing their water with the newcomers), he got himself into a pretty good spot, and had enough contacts on both sides of the river to reign almost unimpeded.
To no surprise, he was the first to speak: “Friends of the Water Village, we gather today to decide what to do with the water.” Half-way through the sentence, everybody had stopped talking, so he proceeded: “We all know that the water in this village is of the best quality in all the land”, and a chorus in the background said “yeah!”. “We all know that the first settlers have the rights to accessing and distributing the water, of which you all know I am not part, nor do I profit from their enterprise; I only help to see that their profits and rights are guaranteed.” There was silence, for most knew that it was a lie, but they either didn’t want to oppose (at least not publicly), or didn’t care.
“But recent events called for a special gathering. Many of you have heard that there are people accessing the river via the bridge, which blocks the crossing and puts the bridge, which is not of the best quality, in danger!”. “Not to mention that this is a disrespect to the original settlers, who fought so hard to build our thriving community, gave us the blessing of such good water, and have helped us reach the water in such beautiful and useful buckets of their own creation.” “We owe them the right to share with us their water, the right to charge for the tireless efforts to provide our homes with the best water, carefully selected and cared for.” There was a faint ovation from the bench where the original settlers sat, with many of them only shrugging, or not even that.
“Some of you reported the efforts of our friend who decided to pass a pipe through his land to make it easier for other villagers to have access to water, and that was already dealt with. We destroyed his pipe, and let that be a warning to anyone trying to pervert the art of the original settlers, as we owe them our delicious water!”. “Now, as with any democracy, I open the floor for comments on how we are going to solve these problems.”
With this, some of the original settlers mentioned how the town should restrict access to the bridge, and charge a fee to cross, so that people who use the bridge intend to cross it, not to collect water. Others mentioned that it still wouldn’t stop collectors, but, as some said, the validity of the tickets could be restricted to a short period of time, after which a new charge would be collected.
About the pipe “problem”, many suggested that it should be made illegal to have pipes in any house, not just in the original settlers’, because connecting pipes between houses was not technically difficult, and it would be hard to solve the problem if many houses ended up connected to each other, as was already happening in the north area.
When all the citizens had been heard, and all the votes taken, most of the ideas were unanimously approved. When the final hammer struck down, closing the meeting, one citizen, who was not one of the original settlers, rose up: “This is outrageous! It doesn’t make sense: the water comes from the rain, and the original settlers have no innate right to charge anything for it!”. As he was saying this, one of the men standing behind the bench left in silence.
To that, not much was done from the central bench, where the eldest was sitting in the middle. He slowly raised his head, adjusted his glasses and smiled. “Friend, we’d be happy to hear your pledge, but as you all know, you don’t have the right to address the council. Only original settlers, and those appointed by them, can speak at the council. If you want to voice your concerns, I suggest you talk to your representative.” To which the man responded: “But my representative is an original settler, and I can’t vote for anyone who is not one, so they don’t represent me; they never have!”. “I’m sorry, friend, but this is how democracy works; we can’t change the world just because of you.”.
The villager’s face was red, his eyes twitching slightly. The despair in his mind was clear, but he didn’t have much time to fall into it, for the silent man had returned to the settlers’ bench and whispered something in the eldest’s ear. The eldest turned his head again to the nonconformist villager. “Dear sir, we hear stories that you have been consistently using the bridge these past days, is that true?”. “Well, yes, my sister lives on the other side, and I go visit her every day.”. “The reports also say that you take a bucket with you, and that you fill it with water, do you agree?”. “Well, yes, of course, I take the water to my sick sister, she needs it to aid her recovery.”. “And you haven’t paid a single settler for more than a month; how much water do you have stored at your house, dear sir?”.
It didn’t take long for the strong men behind the bench to take the poor villager into a closed room, and he was never heard of again. Even though water is a resource from nature, and despite the fact that water is essential to every living creature, the innate right of ownership over basic needs is commonplace in many parts of the world.
Creativity is a gift we received from evolution, as a way to save ourselves from more powerful foes. Creativity has a large component of imitation, since other living beings have different ideas, equally effective, against our common foes, and those that copy and share ideas survive for longer. And yet our society believes, through some serious distortion of natural reality, that the right to own something is more important than the right to survive.
If you read this story again, replacing “water” with “music” and making the appropriate changes, you’ll see that it makes as much sense as the original tale. And yet a huge empire is built on the presumption that creativity can be owned by anyone. Who was the first to play a certain tune? How many completely separate cultures have the same beat in their millenarian songs? There are infinite ways of combining words, but only a few actually make sense, and far fewer than that end up beautiful.
Songs, poems, tales, videos, films and theatre are all forms of expressing the same feelings in different ways, but some people have the luxury of owning the rights to a particular way of expression, more because the law is written to favour them than because they have actually created something truly new. No one has.
We all copy ideas. That’s called survival. That’s genetic. That’s what defines us.
Why are we so ashamed of our own past? Why do we accept that the rich get richer on our account? Why do we agree that paying millions of dollars to already filthy rich actors, directors and producers makes sense, for them to give us the benefit of watching “Hangover III”, an absolute copy of itself for the second time, when the original was already a pot-pourri of many other films and stories? Why do we accept a law that makes us criminals for sharing creativity, a basic instinct of the human race?
What has come of the human race to accept this as “normal”?
Open Source and Profit |
| July 8th, 2013 under Corporate, Devel, Digital Rights, OSS, rengolin, World. [ Comments: 2 ]
I have written extensively about free, open source software as a way of life, and now, reading back my own articles from the past 7 years, I realize that I was wrong about some of the ideas, or about the state of the open source culture within business and around companies.
I’ll make a bold statement to start, trying to get you interested in reading past the introduction, and I hope to give you enough arguments to prove I’m right. Feel free to disagree on the comments section.
The future of business and profit, in years to come, can only come if surrounded by free thoughts.
By free thoughts I mean free/open source software, open hardware, open standards, free knowledge (both free as in beer and as in speech), etc.
I began my quest to understand the open source business model back in 2006, when I wrote that open source was not just software, but also speech. Having open source (free) software is not enough when the reasons why the software is free are not clear. This is because the synergy, which is greater than the sum of the individual parts, can only be achieved if people have the rights (and incentives) to reach out on every possible level, not just the source or the hardware. I made that clear later, in 2009, when I exposed the problems of writing closed source software: there is no ecosystem on which to rely, so progress is limited and the end result is always less efficient, since the cost of making it as efficient is too great and would drive the price of the software too high to be profitable.
In 2008 I saw both sides of the story, pro and against Richard Stallman, on the legitimacy of proprietary control, be it via copyright licenses or proprietary software. I may have come a long way, but I was never against his idea of the perfect society, Richard Stallman’s utopia, or as some friends put it: The Star Trek Universe. The main difference between me and Stallman is that he believes we should fight to the last man to protect ourselves from the evil corporations’ software abuse, while I still believe that it’s impossible for them to sustain this empire for too long. His utopia will come, whether they like it or not.
Finally, in 2011 I wrote about how copying (and even stealing) is the only business model that makes sense (Microsoft, Apple, Oracle etc. are all thieves, in that sense), and the number of patent disputes and copyright infringements should prove me right. Last year I think I finally hit the epiphany, when I discussed all these ideas with a friend and came to the conclusion that I don’t want to live in a world where it’s not possible to copy, share, derive or distribute freely. Without the freedom to share, our hands will be tied to defend against oppression, and it might just be a coincidence, but in the last decade we’ve seen the biggest growth in both disproportionate property protection and disproportionate governmental oppression that the free world has ever seen.
Can it be different?
Stallman’s argument is that we should fiercely protect ourselves against oppression, and I agree, but after being around business and free software for nearly 20 years, I have so far failed to see a business model in which starting everything from scratch, in a secret lab, and releasing the product ready for consumption makes any sense. My view is that society partakes in an evolutionary process that is ubiquitous and compulsory, in which it strives to reduce the cost of the whole process, towards stability (even if local), as much as any other biological, chemical or physical system we know.
So, to prove my argument that an open society is not just desirable, but the only final solution, all I need to do is to show that this is the least-energy state of the social system. Open source software, open hardware and all systems where sharing is at the core should then be the least costly business models, forcing virtually all companies in the world to follow suit, and creating Stallman’s utopia as a result of natural stability, not a forced state.
This is crucial, because every forced state is non-natural by definition, and every non-natural state has to be maintained using resources that could otherwise be used to enhance the quality of the lives of the individuals of the system (be they human or not; let’s not block our point of view this early). To achieve balance in a social system, we have to let things go awry for a while, so that the arguments against such a state are perfectly clear to everyone involved, and there remains no argument that the current state is non-optimal. If there isn’t discomfort, there isn’t the need for change. Without death, there is no life.
Of all the bad ideas we humans have had on how to build a social system, capitalism is probably one of the worst, but it’s also one of the most stable, and that’s because it’s the closest to the law of the jungle, survival of the fittest and all that. Regulations and governments never came to actually protect the people, but to protect capitalism from itself, and to keep increasing the profit of the profitable. Socialism and anarchy rely too much on forced states, in which individuals have to be devoid of selfishness, a state that doesn’t exist in the current form of human beings. So, while they’re the product of an amazing analysis of the social structure, they still need heavy genetic changes in the constituents of the system to work properly, in a stable, least-energy state.
Having fewer angry people on the streets is more profitable for the government (fewer security costs, more international trust in the local currency, more investments, etc.), so panis et circenses will always be more profitable than any real change. However, with more educated societies, a result of the increased profits of the middle class, more real changes will have to be made by governments, even if wrapped in complete populist crap. One step at a time, the population will get more educated, and you’ll end up with more substance and less wrapping.
So, in the end, it’s all about profit. If not using open source/hardware means things will cost more, the tendency will be to use them. And the more everyone uses them, the less valuable the products that don’t use them become, because the ecosystem in which applications and devices are immersed becomes the biggest selling point of any product. Would you buy a Blackberry application, or an Android application? Today, the answer is close to 80% for the latter, and that’s only because they don’t use the former at all.
It’s not just that Blackberry applications are more expensive to build, because the system is less open and the tools less advanced; the profit margins are also smaller, and the return on investment will never justify it. This is why Nokia died along with its own app store: Symbian was not free, and there was a better, free and open ecosystem already in place. The battle had already been lost, even before it started.
But none of that was really due to moral standards, or Stallman’s bickering. It was only about profit. Microsoft dominated the desktop for a few years, long enough to make a stand and still be dominant after 15 years of irrelevance, but that was only because there was nothing better when they started, not by a long distance. However, when they tried to flood the server market, Linux was not only already relevant, but it was better, cheaper and freer. The LAMP stack was already good enough, and the ecosystem was so open, that it was impossible for anyone with a closed development cycle to even begin to compete on the same level.
Linux became so powerful that, when Apple re-defined the concept of smartphones with the iPhone (beating Nokia’s earlier attempts by light-years of quality), the Android system was created, evolved and dominated in less than a decade. The power to share made it possible for Google, a non-device, non-mobile company, to completely outperform a hardware manufacturer in a matter of years. If Google had invented a new OS, not based on anything existent, or if they had closed the source, as Apple did with FreeBSD, they wouldn’t have been able to compete, and Apple would still be dominant.
Do we need profit?
So, the question is: is this really necessary? Do we really depend on Google (specifically) to free us from the hands of tyrant companies? Not really. If it wasn’t Google, it’d be someone else. Apple, for a long time, was the odd one out, and they have created immense value for society: they gave us something to look for; they educated the world on what we should strive for in mobile devices. But once that’s done, the shareable ecosystem learns, evolves and dominates. That’s not because Google is less evil than Apple, but because Android is more profitable than iOS.
Profit here is not just the return on investment that you plan on having over a specific number of years, but, adding to that, what the evolving ecosystem will allow people to do when you’ve long lost control over it. Shareable systems, including open hardware and software, allow people far down the planning, manufacturing and distribution process to still profit, regardless of your original intentions. One such case is Maddog’s Project Cauã.
By using inexpensive Raspberry Pis, by fostering local development and production and by enabling the local community to use all that as a way of living, Maddog’s project is using the power of the open source initiative, built by completely unrelated people, to empower the people of a country that much needs empowering. That new class of people, from this and other projects, is what is educating the population of the world, what is allowing people to fight for their rights, and the reason why so many civil uprisings are happening in Brazil, Turkey and Egypt.
All that creates instability, social unrest, whistle-blowing gone wrong (Assange, Snowden), and this is a good thing. We need more of it.
It’s only when people feel uncomfortable with how governments treat them that they’ll get up from their chairs and demand change. It’s only when people are educated that they realise that oppression is happening (since there is a force driving us away from the least-energy state, towards enriching the rich), and it’s only when those states are reached that real changes happen.
The more educated society is, the quicker people will rise up against oppression, and the closer we’ll be to Stallman’s utopia. So, whether governments and the billionaire minority like it or not, society will go towards stability, and that stability will migrate to local minima. People will rest, and oppression will grow in an oscillatory manner until unrest happens again, and will throw us into yet another minimum state.
Since we don’t want to stay in a local minimum, we want to find the best solution, not just a solution. Getting it close to perfect on the first attempt is not essential: whether we get close the first time or not, the oscillatory nature of social unrest will not change, and nature will always find a way to get us closer to the global minimum.
Is it possible to stay in this unstable state for too long? I don’t think so. But it’s not going to be a quick transition, nor is it going to be easy, nor will we get it right on the first attempt.
But more importantly, reaching stability is not a matter of forcing us to move towards a better society, it’s a matter of how dynamic systems behave when there are clear energetic state functions. In physical and chemical systems, this is just energy, in biological systems this is the propagation ability, and in social systems, this is profit. As sad as it sounds…
Amazon loves to annoy |
| June 27th, 2013 under Digital Rights, Gadgets, rengolin, Software, Unix/Linux, Web. [ Comments: none ]
It’s amazing how Amazon will do all in their power to annoy you. They will sell you DRM-free MP3 songs, and even allow you to download on any device (via their web interface) the full version, for your own personal use, in the car, at home or when mobile. But, not without a cost, no.
For some reason, they want to have total control of the process, so if they’ll allow you to download your music, it has to be their way. In the past, you had to download the song immediately after buying, with a Windows-only binary (why?) and you had only one shot. If the link failed, you just lost a couple of pounds. To be honest, that happened to me, and customer service were glad to re-activate my “license” so I could download it again. Kudos for that.
Question 1: Why did they need an external software to download the songs when they had a full-featured on-line e-commerce solution?
It’s not hard to sell on-line music, other people have been doing it for years and not in that way, for sure. Why was it so hard for Amazon, the biggest e-commerce website on Earth, to do the same? I was not asking for them to revolutionise the music industry (I leave that for Spotify), just do what others were doing at the time. Apparently, they just couldn’t.
Recently, it got a lot better, and that’s why I started buying MP3 songs from Amazon. They now have a full-featured MP3 player on the web! There’s also an Android version of it, a little confusing but unobtrusive. The web version is great: once you buy an album you go directly to it, and you can start listening to the songs right away.
Well, I’m a control freak, and I want to have all songs I own on my own server (and its backup), so I went to download my recently purchased songs. Well, it’s not that simple: you can download all your songs, on Windows and Mac… not Linux.
Question 2: Why on Earth can’t they make it work on Linux?
Undeterred, I knew the Android app would let me download, and as an added bonus, all songs downloaded by AmazonMP3 would be automatically added to the Android music playlists, so that both programs could play the same songs. That was great, of course, until I wanted to copy them to my laptop.
Running (the fantastic) ES File Explorer, I listed the folders consuming most of the SD card, found the amazonmp3 folder and saw that all my songs were in there. Since Android changed the file-system, and I can’t seem to mount it correctly via MTP (noob), I decided to use ES File Explorer (again) to select all the files and copy them to my server via its own interface, which is great for that sort of thing. Well, I found out that it’s not that simple. Again.
Question 3: Why can I read and delete the songs, but not copy them?
What magic Linux permission lets me listen to a song (read) and delete the file (write) but not copy it to another location? I can’t think of a way to do that natively on Linux; it must be some Android magic, to allow for DRM crap.
At this time I was already getting nervous, so I just fired adb shell and navigated to the directory, and when I listed the files, adb just logged out. I tried again, and it just exited. No error message, no log, no warning, just shut down and get me back to my own prompt.
This was getting silly, but I had the directory, so I just ran adb pull /sdcard/amazonmp3/ and found that only the temp directory came out. What the hell is this sorcery?!
Question 4: What kind of magic stops me from copying files, or even listing files from a shell?
Well, I knew it was something to do with the Amazon MP3 application itself; it couldn’t be something embedded in Android, or the activists would have cracked on until they ceded, or at least provided a means of disabling the DRM crap from the core. To prove my theory, I removed the AmazonMP3 application and, as expected, I could copy all my files via adb to my server, where I could then back them up.
So, if you use Linux and want to download all your songs from Amazon MP3 website, you’ll have to:
- Buy songs/albums on Amazon’s website
- Download them via AmazonMP3 Android app (click on album, click on download)
- Un-install the AmazonMP3 app
- Get the files via: adb pull /sdcard/amazonmp3/
- Re-install the AmazonMP3 app (if you want, or to download more songs)
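The workaround above can be sketched as a small shell script. The package name (`com.amazon.mp3`) and the APK file name are assumptions for illustration, not confirmed values, and the commands are wrapped in a `run()` echo so you can inspect the sequence before running it for real:

```shell
#!/bin/sh
# Sketch of the Amazon MP3 download workaround described above.
# NOTE: package/APK names below are assumptions; verify the real package with
#   adb shell pm list packages | grep -i amazon
# Commands are echoed instead of executed so you can dry-run them first.
run() { echo "+ $*"; }            # replace with: run() { "$@"; } to execute

APP=com.amazon.mp3                # assumed package name of the AmazonMP3 app
SRC=/sdcard/amazonmp3/            # where the app keeps downloaded songs
DST="$HOME/Music/amazon"          # local destination on your server/laptop

run mkdir -p "$DST"
run adb uninstall "$APP"          # remove the app so the files become copyable
run adb pull "$SRC" "$DST"        # now the pull works and gets everything
run adb install AmazonMP3.apk     # re-install if you want to download more
```

Once you are happy with the printed sequence, swap `run()` for one that executes its arguments.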
As usual, Amazon was a pain in the back with something that should be really, really simple for them to do. And, as usual, a casual user finds their way to getting what they want, what they paid for, what they deserve.
If you know someone at Amazon, please let them know.
Uno score keeper |
| March 31st, 2013 under Devel, OSS, rengolin, Software. [ Comments: none ]
With spring not coming any time soon, we had to improvise during the Easter break and play Uno every night. It’s a lot of fun, but it can take quite a while to find a piece of clean paper and a pen that works around the house, so I wondered if there was an app for that. It turns out, there wasn’t!
There were several apps to keep card game scores, but every one was specific to a game, and they had ads and wanted access to the Internet, so I decided it was worth writing one myself. Plus, that would finally teach me to write Android apps, something I had been putting off for years.
Card Game Scores
The app is not just a Uno score keeper; it’s actually pretty generic. You just keep adding points until someone passes the threshold, when the poor soul will be declared a winner or a loser, depending on how you set up the game. Since we’re playing every night, even the 30 seconds I spent re-typing our names was adding up, so I made it save the last game in the Android tuple store, so you can retrieve it via the “Last Game” button.
It’s also surprisingly easy to use (I had no idea), but if you go back and forth inside the app, it clears the game and starts a new one with the same players, so you can go on for as many rounds as you want. I might add a button to restart (or leave the app) when there’s a winner, though.
I’m also thinking about printing the names in order at the end (from winner to loser), and some other small changes, but the way it is, it’s good enough to advertise and see what people think.
If you end up using it, please let me know!
Download and Source Code
The app is open source (GPL), so rest assured it has no tricks or money involved. Feel free to download it from here, and get the source code at GitHub.
Distributed Compilation on a Pandaboard Cluster |
| February 13th, 2013 under Devel, Distributed, OSS, rengolin. [ Comments: 2 ]
This week I was experimenting with distcc and Ninja on a Pandaboard cluster and it behaves exactly as I expected, which is a good thing, but it might not be what I was looking for, which is not. ;)
Long story short, our LLVM buildbots were running very slowly, taking from 3 to 4.5 hours to compile and test LLVM. If you consider that at peak time (PST hours) there are up to 10 commits in a single hour, the buildbot will end up testing 20-odd patches at the same time. If it breaks in unexpected ways, or if there is more than one patch in a given area, it might be hard to spot the culprit.
We ended up just avoiding the make clean step, which brought us down to around 15 minutes for build+tests, with the odd chance of getting 1 or 2 hours tops, which is a great deal. But one of the alternatives I was investigating is a distributed build. More so because of the availability of cluster nodes with dozens of ARM cores inside; we could make use of such a cluster to speed up our native testing, even benchmarking in a distributed way. If we do it often enough, the sample might be big enough to account for the differences.
So, I got three Pandaboards ES (dual Cortex-A9, 1GB RAM each) and put the stock Ubuntu 12.04 on them and installed the bare minimum (vim, build-essential, python-dev, etc), upgraded to the latest packages and they were all set. Then, I needed to find the right tools to get a distributed build going.
It took a bit of searching, but I ended up with the following tool-set:
- distcc: The distributed build dispatcher, which knows about the other machines in the cluster and how to send them jobs and get the results back
- CMake: A Makefile generator which LLVM can use, and it’s much better than autoconf, but can also generate Ninja files!
- Ninja: The new intelligent builder which not only is faster to resolve dependencies, but also has a very easy way to change the rules to use distcc, and also has a magical new feature called pools, which allow me to scale job types independently (compilers, linkers, etc).
All three tools had to be compiled from source. Distcc’s binary distribution for ARM is too old, CMake’s version on that Ubuntu couldn’t generate Ninja files and Ninja doesn’t have binary distributions, full stop. However, it was very simple to get them interoperating nicely (follow the instructions).
You don’t have to use CMake; there are other tools that generate Ninja files, but since LLVM uses CMake, I didn’t have to do anything. What you don’t want is to generate the Ninja files yourself, it’s just not worth it. Unlike Make, Ninja doesn’t try to search for patterns and possibilities (this is why it’s fast), so you have to be very specific in the Ninja file about what you want to accomplish. This is very easy for a program to do (like CMake), but very hard and error-prone for a human (like me).
To use distcc is simple:
- Replace the compiler command by distcc compiler in your Ninja rules;
- Set the environment variable DISTCC_HOSTS to the list of IPs that will be the slaves (including localhost);
- Start the distcc daemon on all slaves (not on the master): distccd --daemon --allow <MasterIP>;
- Run Ninja with the number of CPUs of all machines + 1 for each machine, e.g. ninja -j6 for 2 Pandaboards.
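Put together, the setup looks something like the sketch below. The IPs and board count are placeholders (assumptions about your cluster), and the daemon line is shown as a comment because it runs on the slaves, not on the master:

```shell
#!/bin/sh
# Sketch of the distcc + Ninja setup above. IPs are made-up placeholders;
# adjust to your own cluster. distccd must be installed on every slave.
MASTER=192.168.0.10                  # assumed master IP
SLAVES="192.168.0.11 192.168.0.12"   # assumed slave IPs
CORES=2                              # cores per Pandaboard (dual Cortex-A9)
BOARDS=3                             # total boards, master included

# 1. Point distcc at all boards, localhost included.
export DISTCC_HOSTS="localhost $SLAVES"

# 2. On each slave (not on the master), start the daemon:
#      distccd --daemon --allow $MASTER

# 3. Job count: CPUs of all machines + 1 per machine (-j6 for 2 boards).
JOBS=$(( BOARDS * (CORES + 1) ))
echo "run: ninja -j$JOBS"
```

With the 3-board cluster from this post, that arithmetic gives the `-j9` used later in the scaling experiment.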
A local build, on a single Pandaboard of just LLVM (no Clang, no check-all) takes about 63 minutes. With distcc and 2 Pandas it took 62 minutes!
That’s better, but not as much as one would hope for, and the reason is a bit obvious, but no less damaging: the linker! It took 20 minutes to compile all of the code, and 40 minutes to link it into executables. That happened because while we had 3 compilation jobs on each machine, we had 6 linking jobs on a single Panda!
See, distcc can spread the compilation jobs as long as it copies the objects back to the master, but because a linker needs all objects in memory to do the linking, it can’t do that over the network. What distcc could do, with Ninja’s help, is to know which objects will be linked together, and keep copies of them on different machines, so that you can link on separate machines, but that is not a trivial task, and relies on an interoperation level between the tools that they’re not designed to accept.
And that’s where Ninja proved to be worth its name: Ninja pools! In Ninja, pools are named resources with a limited depth that caps how many jobs assigned to the pool can run concurrently. You can let compilers scale freely, while linkers can’t run more than a handful. You simply create a pool called linker_pool (or anything you want), give it a depth of, say, 2, and annotate all linking jobs with that pool. See the manual for more details.
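Following the syntax in the Ninja manual, such a pool fragment might look like the snippet below. The rule name and the link command are illustrative, not LLVM’s actual generated rules; the script just prints the fragment so it can be inspected:

```shell
#!/bin/sh
# Print an example Ninja pool fragment (illustrative rule/command names).
FRAGMENT=$(cat <<'EOF'
# Allow at most 2 concurrent link jobs; compile jobs stay unrestricted.
pool linker_pool
  depth = 2

rule LINK
  command = g++ -o $out $in
  pool = linker_pool
EOF
)
echo "$FRAGMENT"
```

Any build edge whose rule carries `pool = linker_pool` will then queue behind the 2 available slots, which is exactly what keeps a single Panda from drowning in 6 simultaneous links.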
With the pools enabled, a distcc build on 2 Pandaboards took exactly 40 minutes. That’s about a third off with double the resources, not bad. But how does that scale if we add more Pandas?
How does it scale?
To get a third point (and be able to apply a curve fit), I’ve added another Panda and ran again, with 9 jobs and linker pool at 2, and it finished in 30 minutes. That’s less than half the time with three times more resources. As expected, it’s flattening out, but how much more can we add to be profitable?
I don’t have an infinite number of Pandas (nor do I want to spend all my time on this), so I just cheated: I got a curve-fitting program (xcrvfit, in case you’re wondering), cooked up an exponential that was close enough to the points and used the software’s ability to do a best fit. It came out with 86.806*exp(-0.58505*x) + 14.229, which, according to Lybniz, flattens out after 4 boards (at about 20 minutes).
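As a sanity check, the fitted curve can be evaluated for a few cluster sizes with a one-liner (x is the number of boards, result in minutes):

```shell
#!/bin/sh
# Evaluate the best-fit curve time(x) = 86.806*exp(-0.58505*x) + 14.229
# for clusters of 1 to 6 Pandaboards.
awk 'BEGIN {
  for (x = 1; x <= 6; x++)
    printf "%d board(s): %.1f min\n", x, 86.806 * exp(-0.58505 * x) + 14.229
}'
```

The values for 1, 2 and 3 boards land close to the measured 63, 40 and 30 minutes, which is what makes the extrapolation to 4+ boards at all believable.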
Distcc has a special mode called pump mode, in which it pushes, along with the C file, all the headers necessary to compile it solely on the node. Normally, distcc will pre-process on the master node and send the pre-processed result to the slaves, which convert it to object code. According to the manual, this could improve the performance 10-fold! Well, my results were rather less impressive: my 3-Panda cluster finished in about 34 minutes, 4 minutes more than without pump mode, which is puzzling.
I could clearly see that the files were being pre-processed on the slaves (distccmon-text would tell me that, whereas before there were a lot of “preprocessing” jobs on the master), but Ninja doesn’t print times on each output line for me to guess what could have slowed it down. I don’t think there was any effect on the linker process, which was still enabled in this mode.
Simply put, both distcc and Ninja pools have shown to be worthy tools. On slow hardware, such as the Pandas, distributed builds can be an option, as long as you have a good balance between compilation and linking. Ninja could be improved to help distcc to link on remote nodes as well, but that’s a wish I would not press on the team.
However, scaling only to 4 boards removes a lot of the value for me, since I was expecting to use 16/32 cores. The main problem is again the linker jobs running solely on the master node, and LLVM having lots and lots of libraries and binaries. Ninja’s pools can also work well when compiling LLVM+Clang in debug mode, since the objects are many times bigger, and even on an above-average machine you can start swapping or even freeze your machine if you’re using other GUI programs (browsers, editors, etc).
In a nutshell, the technology is great and works as advertised, but with LLVM it might not yet be the thing. It’s still more profitable to get faster hardware, like the Chromebooks, which are 3x faster than the Pandas and cost only marginally more.
It would also be good to know why the pump mode has regressed in performance, but I have no more time to spend on this, so I leave it as an exercise to the reader. ;)
LLVM Vectorizer |
| February 12th, 2013 under Algorithms, Devel, rengolin. [ Comments: 2 ]
Now that I’m back working full-time with LLVM, it’s time to get some numbers about performance on ARM.
I’ve been digging into the new LLVM loop vectorizer and I have to say, I’m impressed. The code is well structured, extensible and, above all, sensible. There is lots of room for improvement, and the code is simple enough that you can do it without destroying the rest or having to re-design everything.
The main idea is that the loop vectorizer is a Loop Pass, which means that if you register this pass (automatically on -O3, or with the -loop-vectorize option), the Pass Manager will run its runOnLoop(Loop*) function on every loop it finds.
The three main components are:
- The Loop Vectorization Legality: Basically identifies whether it’s legal (not just possible) to vectorize. This includes checking that we’re dealing with an inner loop, and that it’s big enough to be worth it, and making sure there aren’t any conditions that forbid vectorization, such as overlaps between reads and writes or instructions that don’t have a vector counterpart on a specific architecture. If nothing is found to be wrong, we proceed to the second phase:
- The Loop Vectorization Cost Model: This step will evaluate both versions of the code: scalar and vector. Since each architecture has its own vector model, it’s not possible to create a common model for all platforms, and in most cases, it’s the special behaviour that makes vectorization profitable (like 256-bits operations in AVX), so we need a bunch of cost model tables that we consult given an instruction and the types involved. Also, this model doesn’t know how the compiler will lower the scalar or vectorized instructions, so it’s mostly guess-work. If the vector cost (normalized to the vector size) is less than the scalar cost, we do:
- The Loop Vectorization: The proper vectorization, ie. walking through the scalar basic blocks, changing the induction range and increment, creating the prologue and epilogue, promoting all types to vector types and changing all instructions to vector instructions, taking care to leave the interaction with the scalar registers intact. This last part is a dangerous one, since we can end up creating a lot of copies from scalar to vector registers, which is quite expensive and was not accounted for in the cost model (remember, the cost model is guess-work based).
All that happens on a new loop place-holder, and if all is well at the end, we replace the original basic blocks by the new vectorized ones.
So, the question is, how good is this? Well, depending on the problems we’re dealing with, vectorizers can considerably speed up execution. Especially iterative algorithms, with lots of loops, like matrix manipulation, linear algebra, cryptography, compression, etc. In more practical terms, anything to do with encoding and decoding media (watching or recording videos, pictures, audio), Internet telephones (compression and encryption of audio and video), and all kinds of scientific computing.
One important benchmark for that kind of workload is Linpack. Not only Linpack has many examples of loops waiting to be vectorized, but it’s also the benchmark that defines the Top500 list, which classifies the fastest computers in the world.
So, both GCC and Clang now have their vectorizers turned on by default with -O3, so comparing them is as simple as compiling the programs and watching them fly. But, since I’m also interested in seeing what the performance gain is with just the LLVM vectorizer, I also disabled it and ran Clang with only -O3, no vectorizer.
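For reference, the three builds being compared could be run roughly like this. linpack.c is a placeholder file name, -fno-vectorize is Clang’s switch for disabling its vectorizer, and the commands are echoed via run() for a dry run:

```shell
#!/bin/sh
# The three Linpack builds compared below (sketch; linpack.c is a placeholder).
# Echo commands instead of executing them, so this can be dry-run anywhere.
run() { echo "+ $*"; }           # replace with: run() { "$@"; } to execute

run gcc -O3 -o linpack-gcc linpack.c                      # GCC, vectorizer on
run clang -O3 -o linpack-clang linpack.c                  # Clang, vectorizer on
run clang -O3 -fno-vectorize -o linpack-novec linpack.c   # Clang, vectorizer off
```

Timing the three resulting binaries on the same input gives the comparison reported below.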
On x86_64 Intel (Core i7-3632QM), I got these results:
This is some statement! The GCC vectorizer has existed for a lot longer than LLVM’s and has been developed by many vectorization gurus, yet LLVM seems to easily beat GCC in that field. But, a word of warning: Linpack is by no means representative of all use cases and user-visible behaviour, and it’s very likely that GCC will beat LLVM in most other cases. Still, a reason to celebrate, I think.
This boost means that, in many cases, not only are the transformations legal and correct (or Linpack would have produced wrong results), but they also manage to generate faster code at no discernible cost. Of course, the theoretical limit is around a 4x boost (if you manage to replace every single scalar instruction by a vector one and the CPU behaves the same with regard to branch prediction, cache, etc.), so one could expect a somewhat higher number, something on the order of 2x.
It depends on the computation density we’re talking about. Linpack tests specifically the inner loops of matrix manipulation, so I’d expect a much higher ratio of improvement, something around 3x or even closer to 4x. VoIP calls, watching films and listening to MP3s are also good examples of densely packed computation, but since we usually run those applications on a multi-task operating system, you’ll rarely see improvements higher than 2x. And general applications rarely spend that much time in inner loops (mostly waiting for user input and then doing a bunch of unrelated operations, hardly vectorizable).
Another important aspect of vectorization is that it saves a lot of battery juice. MP3 decoding doesn’t really matter if you finish in 10 or 5 seconds, as long as the music doesn’t stop to buffer. But taking 5 seconds instead of 10 means that on the other 5 seconds the CPU can reduce its voltage and save battery. This is especially important in mobile devices.
What about ARM code?
Now that we know the vectorizer works well, and the cost model is reasonably accurate, how does it compare on ARM CPUs?
It seems that the grass is not so green on this side, at least not at the moment. I have reports that on ARM it also reached the 40% boost similar to Intel, but what I saw was a different picture altogether.
On a Samsung Chromebook (Cortex-A15) I got:
The performance regression can be explained by the amount of scalar code intermixed with vector code inside the inner loops, a result of shuffles (movement of data within the vector registers and between scalar and vector registers) not being lowered correctly. This most likely happens because the LLVM back-end relies a lot on pattern-matching for instruction selection (a good thing), but the vectorizer might not be producing the shuffles in the right pattern, as expected by each back-end.
This can be fixed by tweaking the cost model to penalize shuffles, but it’d be good to see if those shuffles aren’t just mismatched against the patterns that the back-end is expecting. We will investigate and report back.
I also got results for single-precision floating point, which show a greater improvement on both Intel and ARM.
On x86_64 Intel (Core i7-3632QM), I got these results:
On a Samsung Chromebook (Cortex-A15) I got:
Which goes to show that the vectorizer is, indeed, working well for ARM, but the costs of using the VFP/NEON pipeline outweigh the benefits. Remember that NEON vectors are only 128 bits wide and VFP only 64 bits wide, and NEON has no double-precision floating point operations, so these cores will only do one double-precision floating point operation per cycle, and the theoretical maximum depends on the speed of the soft-fp libraries.
So, in the future, what we need to work on is the cost model, to make sure we don’t regress in performance, and on getting better algorithms when lowering vector code (both by making sure we match the patterns that the back-end is expecting, and by just finding better ways of vectorizing the same loops).
Without further benchmarks it’s hard to come to a final conclusion, but it’s looking good, that’s for sure. Since Linpack is part of the standard LLVM test-suite benchmarks, fixing this and running it regularly on ARM will at least avoid any further regressions… Now it’s time to get our hands dirty!