White European Males, GamerGate and DongleGate |
| May 8th, 2016 under Digital Rights, Games, Life, OSS, Politics, rengolin, Uncategorized, World. [ Comments: 1 ]
First things first, a disclaimer:
- I don’t condone, nor do I accept, any form of harassment: physical, verbal or electronic.
- I don’t mix technical qualities with life situations. Your choices, opinions, abilities and disabilities may affect the quality of your work, but this is not about those; it is about the result: your work.
- I don’t promote abusive behaviour as a form of getting your point across, even if no abusive intention was meant.
- I do promote inclusion in STEM to balance towards the real proportion in society.
- Both GamerGate and DongleGate were disasters on their own, for very different reasons. I want neither to happen.
- I have Asperger’s Syndrome and see things more black and white than most people. I cannot accept qualitative features being used for quantitative purposes. None of this is meant as an offence, or to explain or validate harassment, abuse or any other unethical behaviour. It’s just an analysis.
When Charles Babbage began creating his analytical engine, he was worried about the hardware and its implications for mathematics and the world. But we all know that hardware is only as good as its software, and so Ada Lovelace’s work was of equal importance to that critical milestone. Both were mathematicians of an elite that wasn’t thoroughly recognised until much later. Both were extremely methodical, eccentric and disconnected from reality: all well-known characteristics that Hans Asperger described in the 1940s as part of what we now know as autism.
In the ’40s to ’60s, only really brilliant mathematicians could understand computing, mostly because they were still inventing it, but thousands of men and women took part in building and using the machines. At that time, the proportion of people “using computers” was closer to the social distribution than it is today, because working “with computers” was independent of understanding the underlying technology, so the distribution naturally followed that of the source group. But after the first real case for general computing (WWII), the world was left with a tool that could do so much more, and people realised they needed to take it to the next level.
Still, too many people were clueless as to how computers worked, and a huge effort was made to get people “into computing”. But the importance and prevalence of computing in those days were minimal, so the appeal to the general public, men and women alike, was close to zero. The kind of people who felt attracted to it then, and during the ’70s and ’80s, were the same groups as Babbage, Lovelace and Turing: people on the autistic spectrum. This is not to say that non-autistic people didn’t do it, or worse, that they couldn’t do it. On the contrary, the proof that this is not an autistic-only field is today’s proliferation of computer scientists around the world, regardless of their mental status, gender, race or culture.
During the ’70s, computers had specific purposes, and only universities and very big companies had them. The ’80s saw the first boom in “personal” computing, but it was still dominated by self-built kits, and those like me who remember that time fondly know what weirdos we were in the eyes of the general population. While more people were taking up computing careers, those experimenting at home still had a clear autistic predisposition.
It was only in the ’90s, when Bill Gates became a billionaire, that people started giving “some” credit to the field, and personal computing first challenged and then completely replaced mainframes. During the ’80s, operating systems were developed for common tasks like word processing, spreadsheets and simple databases, but it wasn’t until the ’90s that most people had one in their homes and small shops. Only then did it become ubiquitous.
But even in the ’90s, all the attempts to simplify programming (Logo? Basic?) couldn’t really help you do much with computers. They were (and still are) basically toys. So people who learnt Basic realised early on that they couldn’t write anything meaningful and would either have to delve deep into C, or give up completely. That still selected for those of a more autistic disposition to stay, while the rest found something more interesting to do.
But as with every spectrum, thresholds are biased.
If you understand a bit about autism, you know that all we want is to be left alone to our own devices. Don’t come to my house telling me what to do and how. This is most upsetting for autistic people, and you will be faced with some unintentionally harsh responses, for neurological reasons that autistic people cannot control or fix.
Autistic people have been excluded from social life for thousands of years (maybe more?), and they have always tended to group into segregated societies, often characterised by bigotry and rudeness, not uncommonly harassment. The Royal Society was such a place and, not unlike the current computer science scene, it was dominated by “White European Males”.
It seems obvious to me that the “White European” part is easily explained by the degree of development that Europe had at the time (the 1600s) compared to everywhere else in the world. The parallel with modern computer scientists is clear: North America and Western Europe have a much higher rate of well-educated, well-positioned Caucasians in society, for reasons beyond the scope of this text, than other regions.
When a field is new and hard to get into, most of the people who do get in will be of a similar disposition, in the same way that most volunteer army cadets share a similar mentality. I would never volunteer for the army, but I have been a computer enthusiast since I was 5 years old.
Recent studies have shown that the ratio of males to females among high-functioning autistic people (the ones who like to solve complex problems for fun) is 4:1. But boys and girls behave very differently, with boys playing far more physically and verbally violent games, and girls being more sensitive. With a starting ratio of 4:1, it’s not hard to see how biased self-selection can get to 10:1 or more.
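The arithmetic of that amplification can be sketched with a toy model. The retention rates below are my own illustrative assumptions, not taken from any study; the point is only that a modest bias per “gate”, compounded, quickly inflates a 4:1 starting ratio:

```python
# Toy model of biased self-selection (illustrative numbers only).
# Start with a 4:1 male:female pool; each "gate" (hostile culture,
# violent play styles, etc.) retains boys and girls at different rates.

def apply_gates(males, females, gates):
    """Apply successive (keep_male, keep_female) retention rates."""
    for keep_m, keep_f in gates:
        males *= keep_m
        females *= keep_f
    return males / females

# Two gates, each keeping 90% of the boys but only 55% of the girls.
ratio = apply_gates(4.0, 1.0, [(0.90, 0.55), (0.90, 0.55)])
print(round(ratio, 1))  # → 10.7: past 10:1 from a 4:1 start
```

Any pair of gates with a similar male/female retention gap gives the same order of magnitude; the exact numbers don’t matter, the compounding does.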
What it has become
But after the initial self-balancing, true bigots and abusers (trolls) saw the chance to belong to a society that was professing, for completely different reasons, that different people be kept out. I hope it’s clear enough that high-functioning autistic people have a valid and important reason to keep people out of their lives and groups: otherwise, they cannot function properly.
Moreover, autistic people tend to respond badly to social pressure, and that includes behaviour that is often misinterpreted as harassment, bullying or violence. It is not uncommon to see very drastic ends to really sad stories.
Autistic people also tend to be more trusting than usual, and are therefore much more easily exploited by trolls, who become part of a community and extend its modus operandi, but not necessarily its intention.
People from less advantageous backgrounds (wealth, disabilities, minorities, life choices) had even fewer chances of getting into a club that was trying to keep people out. And with trolls inside making sure it becomes impossible, that’s how situations like GamerGate happen.
It is important to separate the original cause of aggregation and the demand for separation, sometimes aggressive, as a classic high-functioning autistic process, from the subsequent harassment and directed, intentional aggression that trolls carried out after they took over a well-meaning but fearful and trusting, mostly autistic, community.
That fact, however, does not excuse any aggression, including from autistic people. But what people have to understand is that, if the aggression comes from an autistic person, even a high-functioning one, they very likely cannot control it and need help. Being offended is ok, but reserving the right to then discharge your own pent-up aggression, even if you are a minority, is not the way to solve this.
We all have problems, but turning off your care-meter because you are a minority and have just been offended is not ok. And that includes autistic people, too.
Why is this important?
Because computer science has been moving out of the nerd zone for at least 20 years, and much more so in the last 10.
The barrier to entry into technology is so low now that anyone can get in, and once they’re in, they don’t need to be autistic to enjoy it. Furthermore, neurotypical people can be as good as (or better than) autistic people even on the hardest of problems. After all, being high-functioning autistic doesn’t mean you’re smarter; it just means you want to do something that keeps you away from people, and talking to machines is the best such thing I can think of.
So nowadays we have all kinds of people, and with that, we’re back to the real distribution that societies have. All minorities are now represented according to what they are in society. But trolls are haters, and they know some very cunning ways to keep unwanted people out, mostly using subversive tactics like physical, verbal and social abuse, doxing, DDoSing, etc.
We need to remove the trolls from our communities together. This is not a minorities-vs-majority fight; this is a fight for the right to be safe. The new minorities have as much right to be safe as the original minority who created the space. And both minorities have the right to be represented, but so does the majority. The only thing we want to get rid of is the trolls.
What we should move towards
So, autistic people want a space of their own, trolls take over and destroy the Internet. Minorities try to participate, trolls shoot them down and behave like assholes. What else is new? Since it all started in the ’40s with a distribution close to society’s, we should aim for a distribution compatible with the rest of society. The very definition of a minority is that there are fewer of them, so it makes no sense to expect an equal split of minority and majority on each specific scale.
For instance, on average worldwide, we have half men, half women, so I would expect the same distribution in STEM subjects. We may be far from it in computer science and physics, but not in biology or chemistry. Since none of them is exactly 50/50, we can’t expect each topic to be exactly 50/50, but we can expect STEM as a whole to be in that ballpark.
Of the world population, at a glance, about 18% is Han Chinese, about two thirds of that is “European”, and roughly a third each is Arabic, Hindu and African, living all over the world. The exact distribution doesn’t matter much, but I’d expect a similar distribution in STEM, in the same way.
Now, getting there will involve two distinct activities:
- Deep grass-roots movements to increase the development and literacy of impoverished communities, and the education of better-off communities regarding equality and inclusion.
- Improving STEM’s inclusiveness and attractiveness for all members of society, as well as removing the exclusionary elements (trolls) from the existing community.
People who are keen on seeing global equality (1) have to fight that battle outside of STEM subjects. The fights you should have inside are those against discrimination of minorities that can already be represented in STEM subjects (2).
For example, all the feminists advocating for inclusion in open source communities already have the will and the ability to participate on equal grounds with men. The fact that someone is gay or transgender makes absolutely no difference in a STEM community and should bear no weight in inclusion or acceptance. The fact that they are not included is a horrible mistake and has to be fixed inside STEM communities.
We should move towards STEM communities that have as representative a distribution as STEM can have on its own. We’re not looking for equal numbers of all minorities, we’re looking for equal distribution of minorities, and those are two very different things.
What we cannot have
What seems to be happening, and it’s something that will not fix anything, is that we’re swinging to the other extreme.
We have to discourage trolls of any kind, regardless of whether they agree with you. It may be satisfying to see someone on your side trolling someone you’re against, but that’s as bad as the other side’s trolling. Encouraging hate, even in the form of biased consensus and imposed cultural traits, is just as bad as any other form of harassment.
More importantly, it’s that form of harassment that cuts to the core of autistic people, including high-functioning ones. It’s the very reason why we hide from people and talk to machines. Cases like DongleGate are as extremist as GamerGate, and as offensive to me.
The fact that one person, misinterpreting one picture and one tweet, can get someone fired is disconcerting beyond words. As disconcerting as people ganging up on girls just because they want representation in their games. Both behaviours are inexcusable.
What we cannot have is to flip sides, with the minorities who have suffered so far gaining the upper hand and the right to harass the majority, or worse still, the forgotten minority that started it all and had no intentional part in any of the bullying.
We need to protect the minorities from abuse, and that includes the odd folks who don’t look disabled or deficient in any way but behave oddly and sometimes aggressively. Those people are too often taken for bullies when all they want is to be left alone, and all they need is help adapting to an alien society.
The Fallacy of Empathy |
| November 9th, 2015 under Life, rengolin, World. [ Comments: none ]
Empathy, or the ability to feel what other people are feeling, is often associated with good-hearted people. In theory, empathy should give you the tools to understand someone else’s feelings before you act on instinct, blocking your impulsive actions and making you look like a nice person. Another view of empathy is of people who can display the same emotions as they see, for instance crying when watching a sad film. This empathy is powerful as a motivator, and that’s why so many charities use strong images of poverty or sick puppies to raise money.
In both cases, from an external point of view, it’s hard to understand why people behave nicely or poorly. It’s often assumed that, when people behave well, either by being nice or by helping people in need, they have a high degree of empathy; conversely, when they don’t, they lack empathy. However, that assumption is based on nothing but apparent behaviour, which can be (and often is) manipulative and false.
Why do we need empathy?
Human beings, like other animals, have behavioural strategies that enhance their survival rates. Dogs are known to show deeply empathic behaviour, like standing by their owners, or fetching help and saving people’s lives, sometimes without any request from a human. Other animals, like monkeys and dolphins, show even higher degrees of empathy in some situations, but much lower in others.
That begs the question: why do we need empathy? Is empathy really important for survival rates, or evolution? Or did we really evolve empathy after we stopped being naturally selected, a few dozen thousand years ago?
There are a few strategies that enhance the survival of a species, some of them related to social relationships. Social animals, those that live in large groups, understand the value of belonging to that group. A zebra alone is easy prey. Naturally, wanting to belong to a group is a life-and-death choice. Like zebras, humans are social animals, and it was only after we started bundling ourselves into towns and developing agriculture that the human race exploded in significance. From an evolutionary point of view, those who were more social ended up gathered inside towns, and prospered. The others were more easily hunted, or suffered more from the elements, and probably died out in the long run. So what was left of the human race were the ones with more social affinity.
It seems obvious to me that empathy (felt or simulated) is a great enhancement of one’s social abilities. It’ll help you get along with other people even if you don’t like them; it’ll help you not upset them and gain more from your relationships. It’s probably the best tool one can have when relating socially to other people. But, as I reasoned above, it’s very hard to separate intention from behaviour, and there are many people who can display extremely empathic behaviour at times and be sociopaths at others. These people are very likely simulating their behavioural empathy, and at scale it’s hard to separate them from the “real” empaths.
Up until a few thousand years ago, the civilisations were disconnected enough that displaying empathy would only take you so far. But as they began connecting with each other, invading and assimilating cultures, human interaction changed from mono-cultural to multi-cultural, and that’s when displays of empathy became the most powerful tool in human societies.
Roman religion changed drastically from polytheistic to monotheistic, collecting a pot-pourri of elements from the diverse cultures it had invaded in order to relate strongly to them and keep other cultures, not just countries, tied to Rome. The Roman empire fell, but the dominance of the Catholic church is as strong as ever. That display of empathy, which is the basis of the whole Catholic church, is what made it the most powerful institution in the world for over a millennium.
We all know the outrageous behaviours of church officials in all religions, from the most junior to the most senior positions, including many popes, which leads to the conclusion that empathy can be used for both good and evil, and that most people in the world find it hard to tell good (real) empathy from bad (simulated) empathy.
Whenever two survival strategies produce identical effects from identical stimuli, neither is selected over the other; they coexist in a proportion that is not fixed in itself, but selected by other means. For instance, politicians must possess simulated empathy; otherwise, they would never be able to pass laws that are harmful to a large group of people. The same can be said of the legal and advertising fields, and of most senior positions in companies, like CEOs. Those professions are crucial to how our society expects to behave, and thus that kind of person will never be selected out.
One could argue that we don’t really need politicians or lawyers to thrive, and I would agree, but the fact that they’re in power means they’ll continue to hold that power until there’s enough pressure to push them down. And the pressure will never be strong enough, since people with simulated empathy are being born every day, so their very existence keeps the world as it is.
But of course, neurotypical people considered to have empathy form the large majority of human beings. So not all people with simulated empathy are politicians, lawyers or evil. Some of them still believe they have real empathy, and can easily convince others of it. Maybe those others also have simulated empathy, or maybe they’re so empathic that they agree to avoid wrong judgement. All in all, by multiple mechanisms, empathic behaviour is self-preserving.
To whom do we show empathy?
In the history of humankind, we have seen that, most of the time, empathy is directional. Slavers did not feel bad for their slaves, nor whale hunters for whales. In a predominantly white, middle-class neighbourhood in the US, people are more likely to cry over a puppy that was run over by a car than over a black kid murdered by the police. Excuses like “he probably deserved it” are how people cope with this severe lack of empathy.
In poor countries, like Brazil, the wealth gap is so striking that the same behaviour is common between rich and poor people, whatever their origin. In São Paulo, rich people drive their fancy cars past extremely poor people every day. Some otherwise average youngsters burn homeless people alive, others shoot stray cats and dogs in the street, but when they go back home they love their families, and they’re good kids at school.
In other places, like Israel and Palestine, people are raised to protect their own and to kill the other side. They love their families and friends so strongly that they would die for them, but would as easily kill an entire family in brutal ways just because it belongs to another group. This detachment is extremely polarised: empathy towards your own group is as strong as the negative empathy towards competing groups.
This is not specific to Israel, Brazil or the US; examples like these can be seen anywhere in the world. I also know many people in those countries who are great people, who wouldn’t hurt a fly, and who would instantly help people (or other animals) in need.
Trends, and the distortion of empathy
Another behaviour that was socially selected is how well you can follow the trends, which basically translates into being politically correct. One who doesn’t know or understand the conflicts over gender equality since the last century may look very rude and lacking in empathy for saying: “some people are naturally more direct and blunt, and women are normally more emotional, thus more easily take offence”. Regardless of the veracity of that phrase, it is an opinion like any other, and in itself devoid of context. But the fights for women’s suffrage and equality in the last decades have made that kind of comparison somewhat rude.
More recently, the fight has also moved into the realm of sexuality, where people are no longer content with homo/bi-sexuality, but have created a large number of terms to describe, with accuracy, their feelings about their own sexuality. All in all a great effort, and certainly very important to the subgroup for which this has meaning, but to a large extent it matters very little to most other human affairs.
Laws, education, health, jobs, technical discussions, travel, culture, religion and almost all other important subjects that we deal with on a daily basis need no separation by skin colour, ethnicity, gender or sexual orientation. The very act of making those separations explicit is prejudice in itself, much like stating in a federal law that “black people should also get the vote” when there is no other legal separation between “white” and “black” people in the voting laws.
We can all see that in practice this is not true; there is a huge separation of intent, execution and judgement across all minorities everywhere in the world. But creating specific categorisations will only create problems for the categories that don’t yet exist: we’ll have to repeat the legalisation of the vote for every new category that appears in the future.
But this too is considered empathy, since it’s a matter of exaggerating your empathy towards the cases that you know need exaggerating, because the trend is to do so. Even people who would naturally empathise with some minorities’ problems have to show an increased response to the topic. That increased response cannot be naturally selected, nor even socially selected.
It stands to reason, then, that understanding empathy trends and acting upon them, also known as being politically correct, is solely an artificial factor, and therefore a simulated empathic response. The term “politically correct” already carries that meaning in itself, but the point is wider: being politically correct is just another way in which natural and simulated empathy express themselves, and it would be naive to believe this is the only simulated empathic response since we split from the apes.
This not only reinforces the idea that simulated empathy is embedded in human nature, but also helps show that humanity will never get rid of it, as it is a crucial mechanism with which we group ourselves.
Lack of empathy? Or lack of simulation?
In Baron-Cohen’s original paper, where children were asked to choose an option from the point of view of a character, not themselves, autistic children consistently selected the wrong answer. Is this because they cannot sympathise with the character, or because they cannot understand why, in possession of all the knowledge, they have to choose the wrong place?
I myself answered it wrong by instinct, and laughed at myself for it. I have also cried like a baby at Grave of the Fireflies and The Boy in the Striped Pyjamas, and I can’t watch most American “comedies”, because they all rely on deep embarrassment, which I cannot cope with, to the point of real physical discomfort.
Autistic people (Asperger’s included) often offend people and are generally regarded as rude. Is it because they intend to offend, or because they can’t see past the social norm, the politically correct response? Or is it because they really cannot empathise with the other party and end up being cold-hearted? I can’t answer that for other people, but I can certainly empathise with people’s feelings; I help a good number of charities that help animals and people, and I teach my kids to respect everyone, regardless of their origins. When I’m told I have offended someone, I feel it deeply and try very hard to make up for it, but that doesn’t mean I will be able to do “the right thing” next time.
I can’t simulate empathy. I can’t increase or create empathy for something just because society demands it of me. I can’t change my opinion or behaviour towards people just because they belong to a group that is trendy. In the same way, I can’t simulate behaviour (I’d be a horrible actor), though I have a good imagination (I could be a writer). I can’t simulate affection, love, hate or laughter, but I do possess all those feelings and I display them wholeheartedly. This certainly makes me neurologically atypical, but does that make me wrong? Deficient?
As with everything else in the universe, neural diversity is a spectrum. But unlike simple things, it’s composed of a large number of factors that together make up behaviour, capacity and plasticity. Even though most research on autism has found strong correlations with heredity, most genes found more often in autistic than in neurotypical people were present in only a small percentage of autistic people. This means that the number of genes, and the expression of each one, counts little towards the overall phenotype. People with milder forms of autism, including Asperger’s Syndrome, have even fuzzier relationships.
Even though it’s assumed that autism accounts for 1-2% of the population in one form or another, the behaviours autistic people demonstrate can be observed in otherwise neurotypical people: perseverance on comfortable tasks, indecision amidst multiple choices, irritability towards certain sensory inputs (loud music, too many people talking, too hot, too cold), etc.
It would be an interesting line of research to determine how gradual or abrupt the theoretical wall is that separates those tagged as “neurotypical” from those diagnosed with some form of autism, with regard to all known “disability” traits. The fact that people consider it a disability is a clear demonstration that they do not accept that behaviour as “normal”. But how does one define normal? How many abnormal traits do I have to have to be considered autistic? How far must I be from “the norm”?
In IQ tests, the answer is simple. There’s an arbitrary number, 100, and everything above is good; below, not so much. A lot of work has gone into transforming the answers of the diverse tests into virtually the same range of numbers (with small differences), but the general idea holds. You can also apply statistical modelling and identify those one sigma above or below the mean, and treat them accordingly. For instance, some countries give reduced prison sentences to people one or two sigmas below.
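That rescaling can be sketched in a few lines. The raw scores and the band cut-offs below are made up for illustration; only the mean-to-100, one-sigma-to-15-points convention is the standard one:

```python
# Rescale raw test scores so that the sample mean maps to 100 and one
# standard deviation to 15 points, then bucket scores by sigma bands.
from statistics import mean, stdev

def to_iq_scale(raw_scores):
    """Map raw scores onto the conventional IQ scale (mean 100, sigma 15)."""
    mu, sigma = mean(raw_scores), stdev(raw_scores)
    return [100 + 15 * (x - mu) / sigma for x in raw_scores]

def sigma_band(iq):
    """Classify a rescaled score by its distance from 100 in sigmas."""
    if iq < 70:
        return "below two sigma"
    if iq < 85:
        return "below one sigma"
    if iq <= 115:
        return "within one sigma"
    if iq <= 130:
        return "above one sigma"
    return "above two sigma"

raw = [12, 15, 18, 21, 24, 27, 30]   # made-up raw test scores
iqs = to_iq_scale(raw)
print([round(i) for i in iqs])       # → [79, 86, 93, 100, 107, 114, 121]
print(sigma_band(iqs[0]), "/", sigma_band(iqs[-1]))
```

The arbitrariness the text describes is visible here: the bands are just thresholds on a rescaled number, and moving a cut-off by a point reclassifies real people.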
But, as when you mix all the colours of modelling clay together, everything is turning grey. There is no black and white, no man and woman, no gay and straight. Everything is a spectrum, and behaviour is no exception. So, what do we accept as behaviour? How do we include intent when intent is easily hidden and easily simulated?
Murder is easily on the wrong side of the spectrum, but shouting is a very common and non-abusive behaviour in autistic people. It’s often a response to stressful situations, when you don’t know what else to do. The difference is that situations that are stressful for an autistic person wouldn’t be for a neurotypical one, for example, having a haircut, or someone failing to understand what you’re saying. Even though the shouts are directed at the barber or the other person, they’re absolutely not personal.
How much to accept?
There is, of course, the danger that, by allowing some people to behave oddly because they have a letter from the doctor, we’ll encourage opportunists to behave in the same way. An example, if my interpretation is correct, is the Linux kernel mailing list. I don’t know Linus personally, but from his emails and from what I hear from people who do know him well, he is most certainly not an asshole.
His consistent behaviour, classed as “abuse”, is his inflamed reaction to bad code, which cannot easily be separated from the people who wrote it. Some people take it personally, others don’t. My view is that those who do are either neurotypical or have a history of abuse, and those who don’t are either towards the autistic end of the spectrum or actively ignore it out of a sense of higher purpose. So, while his behaviour is questionable, and mostly unnecessary, I don’t see abuse in it, and different people react one way or the other for different reasons.
Clearly, a person in his position relying solely on abuse, even with a great intellect, would have fallen long ago (e.g. Ulrich Drepper). But that’s not the same for the opportunists: people with lesser intellects who need to get their way via abuse alone, or as a reinforcement. In a healthy community, that kind of behaviour gets curbed automatically with time, but a community whose key member behaves in an encouraging way will generate positive feedback, making it a lot harder to get rid of the opportunists and possibly encouraging more of them to rise over time. Linus seems to deal with those people reasonably efficiently (he also trashes maintainers), but not only does it generate more work for him and more stress for the community, it also decreases the trust in that community, which translates into good people abandoning ship.
My opinion in all this is simple: treat the disease, not the symptoms.
If Linus encourages opportunists, convince him why he should avoid rants, for the right reasons. Saying it’s not politically correct will most certainly have the opposite effect. Codes of conduct can easily be cheated, abused and transformed. People with intent to do harm will plan their actions well; those who get caught will be the unaware and innocent.
Do not be offended unless there was intent. Offence is the easiest thing to get wrong. I may cough and you interpret it as an insult, and it all goes downhill. Every time you feel offended, ask the offender their reasons. They may surprise you.
It’s not all about you. If I behave badly near you, it doesn’t have to be because of what you’ve done, who you are, or how you see yourself in the mirror. Most people don’t care much about you (or me), and offence is normally taken by self-important people. Self-importance is not a bad thing, and it normally comes in response to previous abuse and sudden revelation (I’m going through that phase myself), but it doesn’t justify taking offence. I have since had people describe me as disabled or as an asshole, and all I did was explain most of the contents of this post. From now on, I can just send a link.
I personally don’t care much whether they understand or agree, as long as they don’t affect my life. This is a liberating feeling that I recommend to everyone who has “come out” for whatever reason: be yourself, but respect everyone else.
Oh, you want support? |
| August 25th, 2015 under Computers, Corporate, OSS, rengolin, Unix/Linux. [ Comments: none ]
I don’t know how many open source communities have the same problem, but on the LLVM list we do receive more than a few emails a year from people really upset that no one has fixed their bugs quickly enough, or that no one has replied to their emails. I find this behaviour quite interesting from a sociological point of view, but if you behave that way, let me set you straight: it’s rude. Really.
The open source business model relies on the sharing of ideas, the accumulation of technology and niche development. Small, incremental pieces are incorporated into stabilizing products that provide value to a group of people.
For example, MacOS and Linux provide different values to the same user base (desktop users). The more commercial software, like MacOS, provides a stable, recognizable interface with powerful integration with other products of the same line, while the open counterparts provide a more experimental interface, but greater control and spread of knowledge.
Apple’s business model is quite different from most Linux distributions’, but both heavily use and derive from open source infrastructure (kernel, compilers, libraries). So, if you purchase MacOS, you’re getting not only the eye candy, but also some components that are open source, like LLVM. What companies get from investing in LLVM is a subject for a different kind of post, but rest assured, the license is really clear: “THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED”.
Most Linux/BSD users, when they have a problem with their programs, first search the web for the error message. In the uncommon case where they don’t find an answer, they then post on forums or mailing lists, often politely, dumping their logs and error messages, and gladly wait for an answer, which may take a day, a week, or sometimes be forgotten entirely. They then try a different forum, or “ping” their messages, work a bit harder, find more causes, etc.
After all, no one is as interested in your problems as you are. Let me make that one clear:
No one is as interested in your problems as you are.
Most people that deal with open source understand that. Most people that buy software don’t. But there is an intermediate crowd, that has recently grown tremendously: the freemium folks.
Most people now enjoy an impressive amount of free products amidst all the software that they did purchase, and for most of them, they receive the same quality of support as they do for their paid products. That seems controversial, even paradoxical, but the answer is quite simple: they’re not free.
If you haven’t figured it out yet, let’s get that one clear, too: you pay for it with your personal information. Accurate location logs, purchase history, personal identification, credit status, number of friends (and all their personal information too), who you like and who you don’t, etc. All that information is dutifully stored and used for their profit. A profit that is orders of magnitude higher than it would be if they did none of that and you paid $10 for it. Even $100. Hell, even if you paid $1000 per year, it would have been cheaper, or better said, they would make less money from you.
So, it only makes sense that they treat you like a fully-paid member of their exclusive club, like a king, so that you don’t jump ship and go share your cat pictures on another social website. Some people quickly understand what’s at stake, but most keep using the service as a matter of convenience. They know the price of their privacy, and they exchange it for convenience.
As predicted by many in the 90s, and repeated by most in the last decade, open source (free/libre/etc.) has taken over the roots of computing and is now the base for all technology. From stock markets to the ISS. From high-performance computing centres to schools. From operating systems to games. Open source is everywhere, and many people who never thought they would have any contact with open source are now getting exposed to it first hand. The pervasiveness of open source technologies is so complete that I’d risk saying there isn’t any profitable company today that doesn’t use or ship open source with its products. There isn’t a gadget you own that didn’t use it during design or production, or rely on it for its operation.
And, as with any other technology, open source occasionally fails. And as it fails, helpful messages pop up where users were expecting a nice “support contract” to fix things straight away. You may contact whoever you paid, and they may help you, or they may give the standard response that it’s not their problem. After all, your privacy is worth a lot of money, but not that much.
Because open source is everywhere, more and more people who were not used to how it works are now falling prey to the support contract fallacy.
You may get expedited help from the makers of “free” Android apps, or from social media websites, and they may provide their services for free and still be very friendly and helpful, but you cannot compare that freedom with libre/open source freedom. In free software / open source, we do not store your personal data, nor do we want to. We do not track your whereabouts, nor do we contact your friends on your behalf. We don’t have that freedom, mostly because that’s not our business model, but also because most of us believe it’s wrong.
Because you’re not paying us, directly or indirectly, you cannot ever expect that anyone will help you, still less in any reasonable time. The overwhelming majority of people working on open source projects are directly or indirectly paid by companies; it’s their day job. They fix the problems that their companies think will best improve their products. Only a small minority of lucky bastards can work on free software without any compensation or direction from a company, but even those people have their own agenda. And that’s very rarely aligned with yours.
Expecting support, or complaining about the lack of help or interest in your problems, is like carrying a large bag through the underground and being mad at people for not helping you. Granted, many people will help you, but as a selfless act, not as a support contract. Only those going in the same direction, those with a free hand, or those with some shared history (they have been in the same situation before, say) are likely to help you, and different people align differently with your problem, be it a large suitcase, a baby pram, or some clumsy and fragile painting. Different people will help at different times.
In libre/open source, the situation is exactly the same. We’re all working on our own projects and priorities, and unless your problem is directly related to my paid job, I will rarely even look at it. It’s not out of spite, but if I stop doing the work I’m paid for and start helping all those in need, I’ll lose my job and won’t be able to help anyone any more. Not to mention feed my family.
The social contract
When you send an email that no one pays attention to, try to phrase it differently. Or better yet, do some more investigation, provide more information, show that you care about what you’re asking. There’s nothing worse in a forum than people asking others to solve their homework. The general rule of free help is that you must show equal or greater interest and sweat for what you’re asking than the people helping you. It’s exactly the opposite of a support contract. Moreover, your behaviour will tell people whether to help you or not. The more aggressive and demanding you become, the fewer people will help you. The more humble and hard working you are, the opposite happens.
To understand that social contract, think of it as an exchange. If you bring a lot of information with your request, I will learn a thing or two from it. I enjoy learning, so even if it’s not my area, I may feel compelled to help you just because you might teach me something. If there is any payment in community help, this is it: the knowledge you pass on to the people helping you, and the joy they feel in learning a new thing and helping a nice chap.
In the end, most people that are new to such environments end up learning this really fast, and become enthusiastic contributors. This is, for me, the beauty of the lack of payments. Each one values the newly acquired knowledge in different ways, so it’d be impossible to treat it as standard currency. But since I don’t tell you how much I value your contribution, and vice versa, we cannot know who made the profit. More importantly, in this case, profit is not the difference between my gains and your gains, but the difference between my expectation of gains and my actual gains, which is completely independent of your exchange ratios.
This is precisely what Buckminster Fuller meant by Synergetics. The total system behaviour is not always predictable from the behaviour of its parts, and in some systems the aggregated value can be more than the sum of the individual gains. This is why the open source business model is so infectious and addictive. Once you’re in, there’s no way out. But you have to put in some effort.
Feature: Mass Murderer on Brutal Killing Spree |
| May 26th, 2015 under andre, Author, Fun, Stories. [ Comments: none ]
After the sudden end of a year-long crime wave, bodies begin to drop. At the end of December, death threats began spreading across London and Eastern England. Then, on the 1st of January, the first of many victims is hit; December, when fewer than 50 people were arrested in the whole of East Anglia, was naught but the calm before the storm. Since the first killing, exactly 90 days ago, 76 rather influential businessmen have fallen to this plague. The only connection between each kill? Two deep gashes, seemingly knife wounds, or, ridiculously, sword wounds. No suspects have been confirmed as of yet.
The valiant police have hungrily followed every lead, but any evidence found so far has been destroyed. In the first week, fifty-three police officers passed on, with another two hundred tallying up in March alone. The chief officer, in a press conference, admitted to being forced to close the case through blackmail; he was swiftly and remorselessly dispatched the following day, and his wife and three kids soon after. After this incident, the remaining officers decided to stick with buddies, and a rule was passed in the police station that no one man can go anywhere without at least two other people. Even so, seventeen policemen were found, all killed in one fell swoop. Amongst them were seven MI6 agents working covertly with the rest of the task force to catch this criminal, whose name, rumour has it, is Stormbringer.
Hundreds upon thousands are mourning, and many more besides joined the riot for an end to the deaths; until, that is, the very criminal found his way through the crowds to another three victims before the parade was called off. Strangely enough, it seems as if he is not attacking ordinary citizens, for whenever he picks his targets outside of the police force, they always seem to be somewhat reserved and important; no connections have been determined so far.
Some would say that this man is the devil incarnate, others may say he is simply crazy, but all agree that he must be locked away for longer than life, if not death; however, his true marvel is the speed and alacrity with which he, and he alone, has brought the greatest world superpower to its knees in three months.
In other news, morgue business income has skyrocketed.
Andre’s Writing Assessment: Newspaper Story.
18th May 2015
Gates of Hell |
| April 1st, 2015 under andre, Stories. [ Comments: none ]
Gates of Hell
A land, laid waste. A kingdom, in ruins. Worlds, at war. This was all knowledge of many people around the world these days. Karmac reviewed his captain’s log.
“The year 2087. 32 years since the Gates of Hell were opened. 32 years of war. Day 17 of Omega 8, our eighth attack on what was the States 30 years ago, before the population was consumed by the Accursed Soul and his dreaded 9-foot minions. This time, the sole objective is to infiltrate the Dead Wastes and close the Gates. So far, the wastes have proved impregnable, but I believe that Alpha Triumph can overcome the horrors of Primordia. Captain Krumm out.”
Karmac sighed, and rallied his troops for what was hoped to be the final battle. 300 skilled soldiers, a fraction of the initial numbers, stood to attention, about 80 in each group (some significantly less). Their orders came, short and sweet, and in half an hour, the remains of Alphas Triumph, Proxy, Victory, Dawning Sun and Blackout marched out of Camp Nelegra and towards the Southern Wastes. Soon the black mass of the Gates of Hell loomed over the horizon, an imposing building of terrifying prospects; it was closely followed by the noise. The repetitive droning of cursed feet, falling on the hard rock in near harmony. It was a fear-inducing cacophony.
Karmac turned, and his words, no matter the din, rang loud and clear.
“Remember the old phrase – Good luck be with you, and keep your powder dry.” There was a small cackle from one corner of the battalion, and the laughter quickly spread throughout the soldiers. “A good-humoured battalion is a victorious one,” Karmac’s second-in-command muttered.
Shotguns were readied, Photon Torpedoes were loaded, Pulse Lasers were primed. The horrors had arrived. Rugged, jittery steps were taken by the monstrous nightmares as the horde closed in on Battalion Alpha. Their hands, if attached, were clawed and bloodied. Karmac doubted it was theirs. The writhing bodies outnumbered Karmac’s troops by over 30 to 1, but they weren’t exactly the smartest beings on the planet. As the armies neared each other, the dread had started to seep back into Alpha, like a bad cologne that permeated the air and the clothes.
Once he was prepared, Karmac uttered a cry so loud that the whole battalion sprung forth, invigorated by their leader’s fearlessness, and their battle cries rapidly grew to a roar, and one blast went off, felling a wraith (for that was what they were called). The wraiths then leapt forwards, and the two masses crashed together. The noise was deafening, and Karmac’s adrenaline kicked in when a wraith rushed him. A controlled flourish of his vitro-blade, and the beast’s entrails spilled to the floor. Eventually, Karmac and a band of 26 soldiers broke through the lines.
“Come to me, fools; come towards your doom…” resonated between them and the Accursed Soul. He stood, large and proud, Scythian Blade in hand. Karmac breathed and stepped forward to meet his fate.
Andre’s entry to 2014’s BBC 500 words
Collection of data is not the only problem |
| November 13th, 2014 under Digital Rights, InfoSec, Life, Politics, rengolin. [ Comments: none ]
What the NSA has taught us is that mass surveillance is not as hard as people used to think. Other governments, and most commercial companies, do that, too. With the advent of smartphones we’ve learned to ignore most of that for the sake of convenience, and most of the time, it’s ok.
It’s true that bulk surveillance by governments can spark enough false positives to make people worried, and that Google and Facebook are using your personal details to make a bucketload of money, while others are selling those details, sometimes without even realising it.
When you think of all that the government can do with your data, or all the money big corporations are making with your personal information, it’s not surprising to think: “where’s my share in this?”. Some people have even tried to evaluate how much you would get for selling different types of personal information to corporations. But is that the real question we should be asking?
Should we be concerned with what data we leak and try to minimise it, or should we really be thinking about what can be done with that information? Of course, any answer will be a mix of both (since not all investigating parties are well intentioned or law abiding), but limiting the powers of governments and corporations can go a long way towards making the data useful but not harmful.
I have said this before and I still maintain my position that no one has ever had privacy. Parents have eavesdropped on their kids’ behaviour since the dawn of humanity as a way to grow them into responsible adults. The concept of “being responsible” has changed over the millennia, but parents have not.
Law making and enforcement bodies have eavesdropping as their primordial way of acquiring information. Since people normally only do bad stuff when no one is looking, expecting the police to use only highly visible enquiry methods (such as asking personally or patrolling an area) becomes impossibly expensive very quickly. It is true that random checkpoints, fake speed cameras and signs do help awareness, but that’s also not optimal from a monetary point of view.
Privacy also goes against any common sense in the outside world. If you take a bus, everyone on that bus knows you’re there, even if they don’t know who you are. If there is a picture of you on the bus saying “wanted, dead or alive”, they will see you and report you. There’s little you can do besides hiding and never showing your face again. Famous people (actors, etc.) have the same problem, and the solution is pretty much to hide.
The data you “leak” is also the data that defines you. Where you have been, what you like, where you work and live, what food you eat and what you do on Saturdays. Collecting that data and providing a service on that is actually extremely beneficial to you. The problem is who has access to that information.
Tesco knows what I need to buy better than I do. They send me vouchers with discount on fresh mozzarella cheese, fresh basil and fresh tomato on the vine. They know I love Caprese salad, and I actually like Tesco knowing that, because I get a slightly cheaper Caprese salad once in a while.
Google Maps knows where I live and work, so that when I’m going home I can just say: “Ok Google, go home”, and it does the rest. If I don’t share that kind of information with Google, it would never be able to do what I want it to. Examples like that are everywhere, and each company must have access to a wide range of data from you (location, shopping habits, browsing habits) for them to be able to do so. It’s the unavoidable fact of information theory that you need enough entropy to find patterns.
The real problem here is what companies end up doing with your data, and how well they protect it from malicious outsiders. Even if the company is benign, once they get hacked, your bundle of personal data, which is enough to infer pretty accurate patterns about your personal life, is out there. Who knows what the attackers will do with that?
Another problem is blanket approvals to bypass the legal system and arrest, judge and execute individuals solely based on bulk surveillance patterns that are known to generate an immense number of false positives, not only because the algorithms are inexact, but because the people filtering the data and creating the rules don’t possess enough knowledge to know what they’re looking for in the first place.
So, a pragmatic view on surveillance should attack the problem of the legality of actions on data, not just the legality of acquiring data in the first place. The legal system can already cope with that: for instance, when evidence is found via illegal means (an unapproved wiretap or microphone), it cannot be used against the accused. The “Patriot Act” changed all that in the US, and in other countries, and that’s the first thing that has to be changed back to a sane standard. Governments should never have the ability to bypass the judicial and executive systems based on *any* collected data, especially if it was collected in bulk, with irrelevant patterns to match.
Finally, there should be a guarantee in the license that the company is required to store such data in a protected way, following a set of standard cryptographic techniques and solutions, and there should be a clause on how they would destroy the data at the first sign of intrusion. To avoid the total loss of service for all users, they must store such data using different techniques and keys, and distribute it across multiple locations.
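To make that idea concrete, here is a minimal sketch of sharded, per-key encrypted storage in Python. The hash-counter keystream cipher below is an illustrative stand-in only, not production cryptography (a real deployment would use a vetted library); the point is that each location holds one shard encrypted with its own key:

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic keystream by hashing key||counter (CTR-style)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR data with the keystream; applying it twice recovers the original."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def shard_and_encrypt(data: bytes, n_locations: int):
    """Split data into n shards, each encrypted with its own random key."""
    shard_size = -(-len(data) // n_locations)  # ceiling division
    shards = []
    for i in range(n_locations):
        chunk = data[i * shard_size:(i + 1) * shard_size]
        key = secrets.token_bytes(32)  # independent key per location
        shards.append((key, xor_cipher(chunk, key)))
    return shards

def decrypt_and_join(shards) -> bytes:
    """Recombine the shards; in practice each key lives at its own site."""
    return b"".join(xor_cipher(ciphertext, key) for key, ciphertext in shards)
```

With this layout, a breach of one location exposes neither the other shards nor their keys, and a single shard can be wiped on intrusion without destroying the whole data set.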
It may seem daunting for small companies providing small services, but so did cheap scalable storage until Amazon created AWS and all the others followed suit. If there is a demand, someone will create the solution. That has been the human response to everything since we came down from the trees to conquer the planet, and we won’t stop here.
It’s not the data, it’s what governments and corporations can do with the data, and how to protect it from malicious parties.
Moving to Europe |
| August 27th, 2014 under rengolin. [ Comments: none ]
After more than a year of planning, we’re finally moving to Europe. Well, the blog, of course.
Ever since Snowden’s documents about the NSA exposed the worst conspiracy theories, the ones we all knew existed but were always called crazy for believing, followed by similar revelations from many other countries, we’ve been trying to find a place with fewer risks. Defining risk is hard, and that’s why it took us so long.
As followers of Bruce Schneier know all too well, humans are very poor at defining risk. The fear of the NSA can put you close to other players (like Russia) or other kinds of risk (like incompetence), and you wouldn’t be safer overall, just safer from the “monster” you fear.
So, I had to list everything that could go wrong with a blog, rank the alternative places, and add it all up to see which one had the lowest overall risk. These are some of the risks I evaluated:
- Freedom of speech: This is not only what the law says, but what the government or the corporations in that particular country have the ability to do. Despite it being the first amendment to the US constitution, if the government has the ability to legally block your website, arrest you, defame you, or spy on you without your consent or a court order, the constitution means absolutely nothing. The US and Russia are probably the worst here.
- Privacy: While the government is concerned with what you say and share from a national security point of view, hosting services are interested in selling you stuff, or maybe even selling your stuff to the high bidder. Given how mainstream cloud computing has become and how your data does not belong to you any more, I fear that this worry will become less and less important and providers will sell more and more of your data. This item is more to do with the providers than the country they live in, since not many countries have laws against that kind of consensual abuse.
- Network stability: Not only good quality hosting, but good quality country infrastructure, backbones and country-wide investment in interconnectivity. While the US ranks very high on this item, the cost of high quality hosting is higher than its European counterparts’, and the cheap hosting solutions are very, very poor.
- Competence: Some countries have a much higher tolerance for incompetence than others, and the countries in the BRICS group, as well as the US, top the list. This is not just about legislation, but about the culture of the people. Europeans tend to be less understanding when it comes to incompetence, either from commercial or governmental enterprises.
- Price: All that comes with a price, of course.
I did some finger-in-the-air estimates of the ranks and came up with this:
- USA (where our blog was) ranks 0 for freedom and privacy, 7 for network stability and 5 for competence, giving it a paltry score of 3. Cheap hosting is cheap in quality, so you get what you pay for.
- Brazil, another alternative, ranks 10 for freedom of speech (because the government doesn’t really care), 7 for privacy (because few companies have the ability to eavesdrop, most don’t care), 3 for competence and network stability, with a higher score of 5.75. The price is cheap overall, but the level of quality varies greatly, even on the same company over the years, and that’s a constant source of headaches.
- UK is as bad as the US on freedom (about 1), but a lot better on privacy, say 4, because of European laws. The network stability is probably as good (7), and the competence is a lot higher (about 7, too), but it’s also a lot more expensive, resulting in a good, but expensive, average of 4.75.
- Germany was our final option. With the European laws, the German people being what they are, and how they felt about the NSA, I’d say we’re pretty safe here. At least for now, freedom and privacy matters are probably 7, if not more. And even though some do comply with Russian demands, the Russian government (like other BRICS governments) has a very incompetent public service, and is worried about more important things than spying on international blogs. Stability and competence are probably similar to the UK’s, averaging out at a good overall score of 7.
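The finger-in-the-air scores above are plain averages of the four criteria, which is easy to sanity-check in a few lines of Python (Germany’s four scores are taken as 7 across the board, matching the text):

```python
# Scores from the list above: (freedom, privacy, stability, competence)
scores = {
    "USA":     (0, 0, 7, 5),
    "Brazil":  (10, 7, 3, 3),
    "UK":      (1, 4, 7, 7),
    "Germany": (7, 7, 7, 7),
}

averages = {country: sum(s) / len(s) for country, s in scores.items()}
best = max(averages, key=averages.get)
# averages: USA 3.0, Brazil 5.75, UK 4.75, Germany 7.0 -> best is Germany
```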
So, we ended up in Germany, and so far it’s been uneventful. The migration was pain-free, too, from both sides. We also have a new domain, systemcall.eu, which will be our main domain, with systemcall.org as an add-on. Please let us know if there are any glitches or missing things.
Trashing Chromebooks |
| June 5th, 2014 under Computers, Hardware, rengolin, Unix/Linux. [ Comments: 8 ]
At Linaro, we do lots of toolchain tests: GCC, LLVM, binutils, libraries and so on. Normally, you’d find a fast machine where you could build toolchains and run all the tests, integrated with some dispatch mechanism (like Jenkins). Normally, you’d have a vast choice of hardware to choose from for each form-factor (workstation, server, rack mount), and you’d pick the fastest CPUs and a fast SSD with enough space for the huge temporary files that toolchain testing produces.
The only problem is, there aren’t any ARM rack-servers or workstations. In the ARM world, you either have many cheap development boards, or one very expensive (100x more) professional development board. Servers, workstations and desktops are still non-existent. Some have tried (Calxeda, for example), but failed. Others are trying with ARMv8 (the new 32/64-bit architecture), but all of them are under heavy development, so not even Alpha quality.
Meanwhile, we need to test the toolchain, and we have been doing it for years, so waiting for a stable ARM server was not an option, and it still isn’t. A year ago I took on the task of finding the most stable development board that was fast enough for toolchain testing and filling a rack with them. Easier said than done.
Amongst the choices I had, Panda, Beagle, Arndale and Odroid boards were the obvious candidates. After initial testing, it was clear that Beagles, with only 500MB of RAM, were not able to compile anything natively without some major refactoring of the build systems involved. So, while they’re fine for running remote tests (SSH execution), they have very little use for anything else related to toolchain testing.
Pandas, on the other hand, have 1GB of RAM and can compile any toolchain product, but the timing is on the wrong side: taking 5+ hours to compile a full LLVM+Clang build, a full bootstrap with testing would take a whole day. For background testing on the architecture, that’s fine, but for regression tracking and investigative work, they’re useless.
With the Arndales, we haven’t had such luck. They’re either unstable or deprecated months after release, which makes it really hard to acquire them in any meaningful volumes for contingency and scalability plans. We were left then, with the Odroids.
HardKernel makes very decent boards, with fast quad-A9 and octa-A15 chips, 2GB of RAM and a big heat sink. Compilation times were in the right ballpark (40~80 min), so they’re good both for catching regressions and for bootstrapping toolchains. But they had the same problem as every other board we tried: instability under heavy load.
Development boards are built for hobby projects and prototyping. They can normally reach very high frequencies (1~2 GHz), but are designed for low-power, stand-by usage most of the time. Toolchain testing, though, involves building the whole compiler and running the full test-suite on every commit, which puts them at 100% CPU usage, 24/7. Since build times are around an hour or more, by the time a build finishes, other commits have gone through and need to be tested, making it a non-stop job.
CPUs are designed to scale down their frequency when they get too hot, so throughout normal testing they stay stable at their operating temperature (~60C). Adding a heat sink only lets them run at higher frequencies while keeping the same temperature, so it won’t solve the temperature problem.
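As an aside, that ~60C figure is easy to keep an eye on: most Linux ARM boards expose the CPU temperature via sysfs in millidegrees Celsius. A minimal sketch, assuming the usual thermal_zone0 path (which varies by kernel and board):

```python
from pathlib import Path

def millidegrees_to_c(raw: str) -> float:
    """The kernel reports millidegrees Celsius, e.g. "61000" -> 61.0."""
    return int(raw.strip()) / 1000.0

def read_temp_c(zone="/sys/class/thermal/thermal_zone0/temp"):
    """Return the zone's temperature in Celsius, or None if the path is absent."""
    path = Path(zone)
    return millidegrees_to_c(path.read_text()) if path.exists() else None
```

Logging this once a minute during a build run is enough to see whether throttling is keeping the CPU near its operating temperature.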
The issue is that, after running for a while (a few hours, days, weeks), the compilation jobs start to fail randomly (the infamous “internal compiler error”) in different places of different files every time. This is clearly not a software problem, but if it were the CPU’s fault, it would have happened a lot earlier, since the CPU reaches its operating temperature seconds after the test starts, yet the failures only appear after hours or days of running full time. The same argument rules out any trouble in the power supply, which should also have failed at the beginning, not days later.
The problem that the heat sink doesn’t solve, however, is the board’s overall temperature, which gets quite hot (40C~50C), and has negative effects on other components, like the SD reader and the card itself, or the USB port and the stick itself. Those boards can’t boot from USB, so we must use SD cards for the system, and even using a USB external hard drive with a powered USB hub, we still see the failures, which hints that the SD card is failing under high load and high temperatures.
According to SanDisk, their SD cards should be fine in that temperature range, but other parties might be at play, like the kernel drivers (which aren’t built for that kind of load). What pointed me to the SD card in the first place was that, when running solely on the SD card (for system and build directories), the failures appear sooner and more often than when running the builds on a USB stick or drive.
Finally, with the best failure rate at one failure per week, none of those boards can serve as build slaves.
That’s when I found the Samsung Chromebook. I had one for personal testing and it was really stable, so amidst all that trouble with the development boards, I decided to give it a go as a buildbot slave, and after weeks running smoothly, I had found what I was looking for.
The main difference between development boards and the Chromebook is that the latter is a product. It was tested not just for its CPU, or memory, but as a whole. Its design evolved with the results of the tests, and it became more stable as it progressed. Also, Linux drivers and the kernel were made to match, fine tuned and crash tested, so that it could be used by the worst kind of users. As a result, after one and a half years running Chromebooks as buildbots, I haven’t been able to make them fail yet.
But that doesn’t mean I have stopped looking for an alternative. Chromebooks are laptops, and as such they’re built with a completely different mindset from a rack machine, and the list of modifications needed to make them fit the environment wasn’t short. Rack machines need to boot when powered up, give 100% of their power to the job and dissipate heat efficiently under 100% load for very long periods of time. Precisely the opposite of a laptop design.
Even though they don’t fail the jobs, they did give me a lot of trouble, like having to be booted manually, overheating the batteries and having no easy way to set up a Linux image deployable via network boot. The steps to fix those issues are listed below.
WARNING: Anything below will void your warranty. You have been warned.
To get your Chromebook to boot anything other than ChromeOS, you need to enter developer mode. With that, you’ll be able not only to boot from SD or USB, but also to change your partitions and have
sudo access on ChromeOS.
With that, you go to the console (CTRL+ALT+->), log in with user
chronos (no password) and set the boot process as described on the link above. You’ll also need to run
sudo crossystem dev_boot_signed_only=0 to be able to boot anything you want.
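Putting the console steps together, the sequence looks roughly like this (a sketch; `dev_boot_usb` is the standard crossystem flag for enabling SD/USB boot, though your ChromeOS build may differ):

```shell
# On the ChromeOS console (CTRL+ALT+->), logged in as chronos:
sudo crossystem dev_boot_usb=1          # allow booting from SD/USB
sudo crossystem dev_boot_signed_only=0  # allow unsigned (custom) kernels
```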
The last step is to make your Linux image boot by default, so when you power up your machine it boots Linux, not ChromeOS. Otherwise, you’ll have to press CTRL+U on every boot, and remote booting via PDUs will be pointless.
You need to find the partition that ChromeOS boots from by listing all of them and seeing which one booted successfully:
$ sudo cgpt show /dev/mmcblk0
The right partition will have the information below appended to the output:
Attr: priority=0 tries=5 successful=1
If it had tries, and was successful, this is probably your main partition. Move it back down the priority order (to 6th place) by running:
$ sudo cgpt add -i [part] -P 6 -S 1 /dev/mmcblk0
And you can also set the SD card’s partition to priority 0 by running the same command against it.
With this, installing a Linux on an SD card might get you booting Linux by default on the next boot.
You can choose from a few distributions to run on the Chromebooks; I have tested both Ubuntu and Arch Linux, which work just fine.
Follow those steps, insert the SD card in the slot and boot. You should get the Developer Mode screen and, after waiting long enough, it should beep and boot directly into Linux. If it doesn’t, it means your
cgpt meddling was unsuccessful (been there, done that) and will need a bit more fiddling. You can press CTRL+U for now to boot from the SD card.
After that, you should have complete control of the Chromebook, and I recommend adding your daemons and settings to the boot process (init.d, systemd, etc.). Turn on the network, start the SSH daemon and other services you require (like buildbots). It’s also a good idea to change the governor to
performance, but only if you’re going to use it for full-time heavy load, and especially if you’re going to run benchmarks. For the latter, though, you can do it on demand, and don’t need to leave it on from boot time.
To change the governor:
$ echo [scale] | sudo tee /sys/bus/cpu/devices/cpu[N]/cpufreq/scaling_governor
scale above can be one of performance, conservative, ondemand (the default), or any other governor that your kernel supports. If you’re about to run benchmarks, switch to performance, and switch back to ondemand afterwards. cpu[N] is the CPU number (starting at 0); do it for all CPUs, not just one.
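Since the change has to be applied to every CPU, a small loop saves typing. This is only a sketch: the sysfs base directory is made overridable for illustration, and on the Chromebook you’d run it as root (or via sudo tee, as above):

```shell
#!/bin/bash
# Apply a cpufreq governor to every CPU found under the sysfs base
set_governor() {
    local gov=$1 base=${2:-/sys/bus/cpu/devices}
    local cpu
    for cpu in "$base"/cpu[0-9]*; do
        # skip CPUs without cpufreq support
        [ -e "$cpu/cpufreq/scaling_governor" ] || continue
        echo "$gov" > "$cpu/cpufreq/scaling_governor"
    done
}
# set_governor performance   # before benchmarks
# set_governor ondemand      # back to the default afterwards
```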
Other interesting scripts are to get the temperatures and frequencies of the CPUs:
$ cat thermal
#!/bin/bash
ROOT=/sys/class/thermal
for dir in $ROOT/thermal_zone*; do
    temp=`cat $dir/temp`
    temp=`echo $temp/1000 | bc -l | sed 's/0\+$/0/'`
    echo "`basename $dir`: $temp C"
done
$ cat freq
#!/bin/bash
ROOT=/sys/bus/cpu/devices
for dir in $ROOT/cpu[0-9]*; do
    if [ -e $dir/cpufreq/cpuinfo_cur_freq ]; then
        freq=`sudo cat $dir/cpufreq/cpuinfo_cur_freq`
        freq=`echo $freq/1000000 | bc -l | sed 's/0\+$/0/'`
        echo "`basename $dir`: $freq GHz"
    fi
done
As expected, the hardware was also not ready to behave like a rack server, so some modifications are needed.
The most important thing you have to do is to remove the battery. First, because you won’t be able to boot it remotely with a PDU if you don’t, but more importantly, because the heat from constant usage will destroy the battery. Not just make it stop working, which we wouldn’t mind, but it’ll slowly release gases and bloat, which can be a fire hazard.
To remove the battery, follow the iFixit instructions here.
Another important change is to remove the lid magnet that tells the Chromebook not to boot when powered with the lid closed. The iFixit post above doesn’t mention it, but it’s as simple as prying the monitor bezel open with a sharp knife (no screws), locating the small magnet on the left side and removing it.
With all these changes, the Chromebook should be stable for years. It’ll be possible to power cycle it remotely (if you have such a unit), boot directly into Linux and start all your services with no human intervention.
The only thing you won’t have is serial access to re-flash it remotely if all else fails, as you can with most (all?) rack servers.
Contrary to common sense, the Chromebooks are a lot better as build slaves than any development board I ever tested, and in my view that’s mainly due to the amount of testing they have gone through, given that they’re a consumer product. Now I need to test the new Samsung Chromebook 2, since it’s got the new Exynos Octa.
While I’d love to have more options, and different CPUs and architectures to test, it seems that the Chromebooks will be the go-to machines for the time being. And with all the glory going to ARMv8 servers, we may never see an ARMv7 board run stably on a rack.
Asperger’s and the failure of the educational system |
| December 28th, 2013 under Life, rengolin, World. [ Comments: none ]
Asperger’s Syndrome (more info), a condition within the Autism spectrum where social awareness is lacking, but communication skills are not affected much, is a topic floating around our house for a few years. After many ups and downs, our son has finally been diagnosed with it, and the rest of the family will need serious checking, too.
That has brought us explanations for many of our problems at work and school, and got me thinking about the many issues I found illogical in the educational system, though I always thought it was my fault for not adapting to it. Now, the more I think, the more I realise that any system that bases teaching on the average child is, to say the least, mediocre.
On a large scale, children (and adults) range from very low to very high skills in many areas, from IQ to social, artistic or empathic skills. With so many different dimensions, so many scales focused on defining people for what they are, and so many different types of people around, trying to create the imaginary “average child” to educate is a folly quest. But a lot more serious than folly is the quest to force different children to accommodate to that imaginary average, and to brutalise them when they don’t. There is a name for it: bullying.
Schools are well known for not caring much for the “lesser minds“, since they don’t contribute much to the scoring system. Under disability Acts, schools are free to refer those problematic children to special schools, where they will be marginalised and receive funding from the government for the rest of their lives, even though, if taught well, they could perfectly well make a decent living by themselves.
But the brightest children are also in peril, for they do contribute to scoring, and in a positive way. They’re sought after by schools that have no idea how to educate them. With the failure to understand their advanced needs, those kids become repugnant braggarts. Even though they can go far beyond in arts, maths or science, most of them lack any social skills or, by the very definition of “special“, fail miserably to conform to the “average child” norm.
The expectation that special children have the same traits as average children, plus a few special skills, is idiotic, and I’m really surprised that this has passed in so many countries and educational systems as the norm to be followed, and imposed. It shows that whoever is dealing with educating the brightest minds is not one of the brightest minds themselves. It’s the same as giving serial killers the job of rehabilitating petty criminals.
The very notion of a scoring system is at the core of the standardisation of the human race.
Each group in society has a different take on what’s important for their cohesion. Some rely on competition and selfish behaviour to keep the capitalism alive and kicking, others rely on knowledge and logical thinking to progress science, and so on. This diversity is paramount to define the human race as a multi-cultural species, where every aspect of it is as valuable as every other.
The notion of a National Curriculum is a good one, since even the most artistic have to be able to add up at the grocery store, and the brightest mathematicians should be able to play instruments, if they so choose. But what happens in most schools, and certainly in all the public schools we’ve been to in England so far, is that they treat the curriculum as a gold standard, and don’t even attempt to go beyond it.
In the same way that, when you’re speeding on the road and the policeman stops you, he says “the speed limit is a limit, not a guideline”, the National Curriculum is a minimum, not a guideline. It means that, if you’re not teaching at least that, you should not be called a “school” to begin with. But it also means that you should go beyond it, at least for the children that have the capacity to follow.
No child will follow on every category, so you need to know what each child can do on each extra topic. That also means that, while the least able children will have at least the National Curriculum, the average children will have more in different areas, and the only difference between the average and the above-average children is the amount of extra subjects and topics they learn. It’s that simple.
But for it to be that simple, the way exams work has to change completely. Exams today don’t test what a child knows or has learnt; they test what children can memorise in the short term, how effectively they can guess, or how efficiently they can cheat.
Take, for example, the SAT tests, the exams taken by all children between primary and secondary school. The format here is filling in the blanks. It’s a lot better than multiple choice, even though many of the questions are still multiple choice, but it’s not testing the children’s ability to think at all.
It is true that average children will have to think to answer those questions. It is also true that average children will have had to learn the material in the first place by listening and memorising the concepts, though not necessarily understanding why they are so. There seem to be no questions about why the universe behaves in that way, or why I can solve the same mathematical problem in different ways and still get the same result.
But the biggest failure is that the tests are standardised to the National Curriculum, and standardised to what an ideal average child will be able to understand and answer from her memory. In the age of the technological revolution, we have to ask ourselves if this is the right way forward.
Do we want to continue forcing people to follow averages, if we want humans to be a better species? Do we need more average people doing specialised work? Isn’t our technological level ready for a de-centralised, de-normalised learning experience, which will fare a lot better on all non-average children in the world (ie. all children), and allow better matching to their own skills, desires and abilities?
One such way would be to have more meaningful questions, with non-obvious answers, and software to analyse them. So, instead of drawing the circulatory system and asking children to fill in the lines pointing to organs with names, ask them to describe how the blood circulates inside the body. True, natural language processing is still not there yet, but there are a number of different ways to ask questions and make sure that the answer will be simple enough to be dealt with by simple regular expressions or state machines that, in context, will be limited to only a number of valid answers.
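As a toy illustration of the idea (the pattern and the sample answers are made up), even plain grep can classify a free-text answer against a small set of accepted patterns:

```shell
#!/bin/bash
# Accept an answer about blood circulation if it mentions the heart
# pushing blood through arteries or veins (a deliberately crude pattern)
check_answer() {
    if echo "$1" | grep -qiE 'heart.+(arter|vein)'; then
        echo accepted
    else
        echo rejected
    fi
}
check_answer "The heart pumps blood through the arteries"   # prints "accepted"
check_answer "Blood just moves around"                      # prints "rejected"
```

A real system would chain many such matchers into a state machine, where the matched pattern decides the next question.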
Each answer will lead to different follow-up questions, and each new step will take you towards harder or easier questions, or questions more specific to one topic or another. Recording the paths for each child will also tell you what knowledge each child is missing, and what topics the teachers have to cover more in depth, in general.
Personalised learning per se is not optimal, as I have seen myself with the Khan Academy and programming books. My son could easily write new programs, and they would certainly work, but he couldn’t explain to me why. It was only when I intervened that he started to understand why, but the attitude remains: he won’t need to understand why while questions, exams and results are measured by multiple choice, filling in the blanks or guessing the answer.
Among intelligent people, those with Asperger’s have a serious disadvantage: as with other types of Autism, they can pattern match instinctively, and come up with accurate results without knowing how they did it. During primary school this is a huge advantage, since all questions are too silly to matter, but as you progress to secondary school (or worse, if you have a perfectionist father), you’ll have more and more difficulties in answering the questions that really matter: why?
Knowing “why” is fundamental because of reproducibility. Science is all about method. Mathematics is only consistent because it has a single method. Science follows suit, and is only consistent because it’s based on maths. This consistency comes in the form of reproducibility. If you can describe your method, and others can follow it, then you have a proof, or a theory. Otherwise, it’s pseudo-science, or religion.
If one wants to answer questions, not just get them right on average, one wants to understand why a certain method works, in which cases, and with which constraints. If you’ve spent your whole (short) life guessing and getting accurate answers (not necessarily correct ones), and if all the school cares about is being reasonably correct, then you’ll think you’re a genius (the school will, too), and you won’t learn how to think until it’s too late.
Since schools don’t even try to understand the differences between the learning process of children, they never spot this in any child. We only got an early warning from one of the head teachers (the best, so far, at Queen Edith’s), because of behaviour issues, not learning problems. They were simply unaware that our son would not even know why he was right. This is very similar to what expert computer systems can do, and we don’t consider them to be intelligent.
Recently, I took matters into my own hands and am teaching both my kids to think. I don’t care what answer they give me, I want to know why they think that’s the answer. I want explanations, not step-by-step equation solving that can be easily memorised, I want them to tell me why they can apply that step in solving that equation. Why do they think that stars are hotter than planets, why can’t you send messages faster than the speed of light, even with entanglement. Why is what really matters, and that’s the least worry in all schools I’ve ever been, or have ever seen.
Time for a change
Until we manage to find a way to ask why, and get meaningful and measurable answers from our children, we’ll still be in the stone age. All the progress we think we’ve made since the wheel is but a fleck on what we can achieve. People who assume our understanding is complete, or even good enough, are idiots and should not be given any level of control over our society.
Next time you vote, ask your candidate why, and be ready to change candidates if they don’t understand, or can’t answer, the question. You’ll see, like Russell Brand did, that you’ll end up without a candidate.
We need to change how we think, and the question of this century is why?. Ask your kids every day, why. Don’t let them ask why if they can’t answer why. Every day, wake up, look at yourself in the mirror and ask…
Second language curse |
| December 9th, 2013 under Fun, Life, rengolin. [ Comments: none ]
I count myself privileged to be proficient in a second language (English), which has helped me learn other languages and have a more elastic mind towards different concepts in life. But there is a curse that I just found out about, and it turned out to be significant.
A few years ago I realised that I was signing my emails with the wrong name: “reanto” instead of “renato“. And since I sign all my emails manually (and I send many emails a day), I could get a true sense of the problem. In the last year or so, the problem got a lot worse, and now I can’t sign my own emails any more without erasing “reanto” and re-writing “renato” almost every time.
Now, misspelling English words (even when you do know the correct spelling) is OK, since I didn’t start typing in English only when we moved to England, far from it. Misspelling Portuguese words is also OK, because contact with a new language brings new sounds, and some uncertainty about how to spell a native word will arise after a few years without much contact with it. But misspelling your own name?! That’s a whole new class of fail.
Today it occurred to me that the reason for that might very well be the same as the rest, after all my name is just another word that I know how to spell. And, it turns out that, in the English language, “an” is the 5th most common digraph, while “na” doesn’t even register!
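It’s easy to check the digraphs of a word; a quick sketch (note how “renato” contains “na”, while the misspelling “reanto” contains the far more common English digraph “an”):

```shell
#!/bin/bash
# Print the overlapping two-letter digraphs of a word, with counts
digraphs() {
    local w=$1 i
    for ((i = 0; i < ${#w} - 1; i++)); do
        echo "${w:i:2}"
    done | sort | uniq -c | sort -rn
}
digraphs renato   # re, en, na, at, to -- one each
digraphs reanto   # re, ea, an, nt, to -- one each
```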
So, the frequency with which I write digraphs (and trigraphs) in English is shaping my ability to write my own name. Much the same as the interference my native language causes when I write English: for instance, I have to delete the “e” at the end of many words like “frequent“, as it seems to come out before I even think about it.
While writing this small post, the browser’s spell checker has fixed my misspellings (including in the previous word) many times, and forcing myself not to let the checker bug me has also forced me to misspell my own name.
The brain is a weird thing…