
## Saturday, 26 April 2014

### Probabilities

Categories: [ Science ]

Before Easter, the supermarket was selling Rölli surprise eggs, announcing that every fifth egg contained a figurine from Rölli's universe. This made me wonder: how many eggs do you need to buy to be sure of getting at least one such figurine?

The following formula gives the probability p of getting at least one Rölli figurine, given that one has bought n eggs and that every k-th egg contains such a figurine:

p = 1 − (1 − 1/k)^n

The answer to the first question is not straightforward, though. To be absolutely sure of getting at least one figurine, you need to buy 4/5 of the egg production plus one egg, because there is always an (admittedly slim) chance that those 4/5 consist entirely of eggs containing something other than a Rölli figurine, and that the remaining 1/5 consists only of eggs containing Rölli figurines. The extra egg is therefore taken from this last 1/5, and is guaranteed to contain a Rölli figurine.

If you are not willing to spend so much time and money tracking down and buying most of the egg production, you can trade time and money for a tiny bit of uncertainty. For example, if you can accept a 90% chance of getting a Rölli figurine instead of 100%, it is enough to buy 11 eggs. If you want a better chance and go for 95%, you need to buy 14 eggs. Finally, if you want a 99% chance, you need to get 21 eggs. These values were computed from the formula above, setting k = 5 (“every fifth egg”) and p = 0.90, 0.95 or 0.99, solving the equation for n and rounding the result up to the next integer.

It is also worth noticing that if you decide to buy 5 eggs (because every 5th egg contains a Rölli figurine), you have only about a 2 in 3 chance of getting one.
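As a quick numerical check, the formula above is easy to evaluate; this is a minimal sketch (not from the original post):

```python
def p_at_least_one(n, k):
    """Probability of getting at least one figurine when buying n eggs,
    if every k-th egg contains one (independent draws)."""
    return 1 - (1 - 1 / k) ** n

# Buying k = 5 eggs gives only about a 2-in-3 chance:
print(round(p_at_least_one(5, 5), 3))   # 0.672
```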

The table below gives the minimum values of n for a given value of k and different probability thresholds. It also gives the ratio n/k, i.e., given a “one in k” probability, how many times k items one needs to get for the probability to exceed the threshold. The second column also indicates, given a “one in k” probability, what your chances are of getting what you want if you get exactly k items. Notice that these values grow toward a finite limit as k grows larger.

| k | p(n=k) | p ≥ 0.90 (n/k) | p ≥ 0.95 (n/k) | p ≥ 0.99 (n/k) |
|------|-------|----------------|----------------|----------------|
| 2    | 0.750 | 4 (2.000)      | 5 (2.500)      | 7 (3.500)      |
| 3    | 0.704 | 6 (2.000)      | 8 (2.667)      | 12 (4.000)     |
| 4    | 0.684 | 9 (2.250)      | 11 (2.750)     | 17 (4.250)     |
| 5    | 0.672 | 11 (2.200)     | 14 (2.800)     | 21 (4.200)     |
| 6    | 0.665 | 13 (2.167)     | 17 (2.833)     | 26 (4.333)     |
| 7    | 0.660 | 15 (2.143)     | 20 (2.857)     | 30 (4.286)     |
| 8    | 0.656 | 18 (2.250)     | 23 (2.875)     | 35 (4.375)     |
| 9    | 0.654 | 20 (2.222)     | 26 (2.889)     | 40 (4.444)     |
| 10   | 0.651 | 22 (2.200)     | 29 (2.900)     | 44 (4.400)     |
| 20   | 0.642 | 45 (2.250)     | 59 (2.950)     | 90 (4.500)     |
| 37   | 0.637 | 85 (2.297)     | 110 (2.973)    | 169 (4.568)    |
| 50   | 0.636 | 114 (2.280)    | 149 (2.980)    | 228 (4.560)    |
| 100  | 0.634 | 230 (2.300)    | 299 (2.990)    | 459 (4.590)    |
| 500  | 0.632 | 1151 (2.302)   | 1497 (2.994)   | 2301 (4.602)   |
| 1000 | 0.632 | 2302 (2.302)   | 2995 (2.995)   | 4603 (4.603)   |
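The table entries can be reproduced by solving the formula for n; a small sketch (assuming the formula above):

```python
import math

def min_eggs(k, p):
    """Smallest n such that 1 - (1 - 1/k)**n >= p."""
    return math.ceil(math.log(1 - p) / math.log(1 - 1 / k))

print(min_eggs(5, 0.95))   # 14, as in the k = 5 row
print(min_eggs(37, 0.95))  # 110, as in the k = 37 row
```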

One can use this table to find out how many times one needs to play roulette in a casino to have a 95% chance of winning at least once: a European roulette wheel has 37 numbers (k = 37), and the limit of the n/k ratio is about 3; one therefore needs to play about n ≅ 37 × 3 = 111 times (the row for k = 37 indicates the actual value is n = 110).

[ Posted on 26 April 2014 at 15:40 | no comments | ]

## Thursday, 6 February 2014

### Changing GRUB from EFI to BIOS

Categories: [ Informatique ]

My new computer has a UEFI firmware. I installed Debian Wheezy, which in turn installed the EFI variant of GRUB. For that purpose, the Debian installer made the first partition on the hard disk drive a VFAT partition and mounted it on `/boot/efi`.

My problem is that GRUB tends to freeze, either just before booting the kernel (showing “Loading initial ramdisk” forever) or just after the welcome message (“Welcome to GRUB!”). Pressing the computer's reset button allowed me to reboot the computer, and everything then went fine. It seems to be possible to reproduce the bug at will by switching off the power supply, waiting 15 seconds for the capacitors to empty, and then rebooting the computer. Booting, however, also hangs quite often after powering the computer off in software (in which case the power supply still provides some power to the motherboard).

I read here and there that EFI GRUB was quite buggy, so I decided to switch to PC GRUB (the variant for booting with the Legacy firmware, aka BIOS).

In a first attempt, I configured the motherboard's firmware to use “Legacy ROM only” instead of “UEFI only”. Debian continued to boot normally with the still-installed EFI GRUB, and the freeze when rebooting after having switched off the power supply seemed to have disappeared. It however froze again today, so I decided to change from EFI GRUB to BIOS GRUB.

I first ran `apt-get install grub-pc`, which complained that

```
/usr/sbin/grub-setup: warn: This GPT partition label has no BIOS Boot
Partition; embedding won't be possible!.
/usr/sbin/grub-setup: warn: Embedding is not possible.
GRUB can only be installed in this setup by using blocklists.
However, blocklists are UNRELIABLE and their use is discouraged..
```

After a bit of research on the Web, I found someone's advice to set the `bios_grub` flag on the FAT partition. I then forced the reinstallation of `grub-pc` with `apt-get install --reinstall grub-pc`, which didn't complain anymore.
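For the record, the sequence sketched above would look roughly like this; the disk and partition number are examples, adapt them to your own layout (and note that this overwrites the old EFI partition's contents, as becomes apparent below):

```shell
# Mark the first partition of a GPT disk as a BIOS boot partition
# (/dev/sda and partition 1 are examples).
parted /dev/sda set 1 bios_grub on

# Reinstall the BIOS variant of GRUB so it embeds itself there.
apt-get install --reinstall grub-pc
```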

On the next reboot, however, the startup script indicated that “fsck died with status 6”. I found out that it tried to check the VFAT partition, but since GRUB is now installed there, it is no longer recognized as a VFAT partition, and fsck was legitimately skipping it. `parted` confirmed that fact, and `blkid` does not list the VFAT partition anymore either. I therefore commented it out of `/etc/fstab`, and now the boot does not fail anymore.

[ Posted on 6 February 2014 at 19:23 | no comments | ]

## Saturday, 7 December 2013

### The Value of Money in Dodger

Categories: [ Science ]

Terry Pratchett's Dodger is said to take place in the first quarter of the Victorian Era; we'll assume it is the year 1840. According to the National Archives' currency converter, 1 £ in 1840 is worth about 45 GBP in 2005.

Moreover, the same source indicates that with 100 GBP (about 2£ 5s) you could buy 11 days of a craftsman's wages in the building trade, 3 stones (42 lbs) of wool or 1 quarter (28 lbs) of wheat. As a reference, a person's daily energy needs are equivalent to about 2 lbs of wheat (representing 2950 kcal according to Wikipedia).

These are the coins mentioned in Dodger:

• Guinea: 21s = 252d (47.25 GBP)
• Sovereign: 1 £ = 20s = 240d (45 GBP)
• Crown: 5s = 60d (11.25 GBP)
• Half-Crown: 2.5s = 30d (5.63 GBP) (about 1.5 lbs of wheat)
• Shilling: 1s = 12d (2.25 GBP)
• Sixpence: 1/2s = 6d (1.13 GBP)
• Groat: 4d (0.75 GBP)
• Thruppence: 3d (0.56 GBP)
• Penny: 1d (0.19 GBP)
• Half-penny: 1/2d (0.10 GBP)
• Farthing: 1/4d (0.05 GBP)

A day's worth of a craftsman's wages is therefore about 4s, which could buy 2.5 lbs of wheat or 4 lbs of wool.
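The conversions above are easy to script; a small sketch using the post's figures (1 £ = 240d ≈ 45 GBP in 2005, and 2£ 5s = 540d buys 28 lbs of wheat, 42 lbs of wool, or 11 days of wages):

```python
# Figures from the post: 1 pound = 240 pence = 45 GBP (2005);
# 540 pence buys 28 lbs of wheat, 42 lbs of wool, or 11 days of wages.
GBP_PER_PENNY = 45 / 240
WHEAT_LBS_PER_PENNY = 28 / 540
WOOL_LBS_PER_PENNY = 42 / 540

def coin_in_gbp(pence):
    """Value of a coin, in 2005 GBP."""
    return round(pence * GBP_PER_PENNY, 2)

day_wage_pence = 540 / 11  # about 4 shillings
print(coin_in_gbp(252))    # guinea: 47.25
print(round(day_wage_pence * WHEAT_LBS_PER_PENNY, 1))  # 2.5 lbs of wheat
```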

[ Posted on 7 December 2013 at 18:06 | no comments | ]

## Sunday, 15 September 2013

### Instantiating Many Objects in Python

Categories: [ Informatique ]

I have a list of files in a text file, and I want to load this list into some kind of data structure. The list is quite long and requires instantiating 100,000 objects in Python, all of the same type. I found out that depending on what kind of object is used, the time it takes to instantiate them all can vary greatly. Essentially, each line of the file is composed of tab-separated fields, which are split into a list with Python's `str.split()` method. The question therefore is: what should I do with that list?

The object must hold a few values, so basically a list or a tuple would be enough. However, I need to perform various operations on those values, so additional methods would be handy and justify the use of a more complex object.

#### The Contenders

These are the objects I compared:

A simple `list`, as returned by `str.split()`. It is not very handy, but will serve as a reference.

A simple `tuple`, no more handy than the `list`, but it may exhibit better performance (or not).

A class named `List` that inherits from `list`:
```
class List(list):
    def a(self): return self[0]
    def b(self): return self[1]
    def c(self): return self[2]
```
A class named `Tuple` that inherits from `tuple`:
```
class Tuple(tuple):
    def a(self): return self[0]
    def b(self): return self[1]
    def c(self): return self[2]
```
A class named `ListCustomInitList` that inherits from `List` and adds a custom `__init__()` method:
```
class ListCustomInitList(List):
    def __init__(self, *args): List.__init__(self, args)
```
A class named `TupleCustomInitTuple` that inherits from `Tuple` and adds a custom `__init__()` method:
```
class TupleCustomInitTuple(Tuple):
    def __init__(self, *args): Tuple.__init__(self)
```
A class named `ListCustomInit` that inherits from the `list` basic type but has the same features as `ListCustomInitList` instead of inheriting them from the custom `List`:
```
class ListCustomInit(list):
    def __init__(self, *args): list.__init__(self, args)
    def a(self): return self[0]
    def b(self): return self[1]
    def c(self): return self[2]
```
A class named `TupleCustomInit` that inherits from `tuple` basic type but has the same features as `TupleCustomInitTuple` instead of inheriting them from the custom `Tuple`:
```
class TupleCustomInit(tuple):
    def __init__(self, *args): tuple.__init__(self)
    def a(self): return self[0]
    def b(self): return self[1]
    def c(self): return self[2]
```
A class named `NamedTuple` that is made from the `namedtuple` type in the `collections` module:
```
from collections import namedtuple

NamedTuple = namedtuple("NamedTuple", ("a", "b", "c"))
```
A very basic class named `Class` and that inherits from `object`:
```
class Class(object):
    def __init__(self, args):
        self.a = args[0]
        self.b = args[1]
        self.c = args[2]
```
A variant of the previous that uses the `__slots__` feature:
```
class Slots(object):
    __slots__ = ("a", "b", "c")
    def __init__(self, args):
        self.a = args[0]
        self.b = args[1]
        self.c = args[2]
```
An old-style class, named `OldClass`, that does not inherit from `object`:

```
class OldClass:
    def __init__(self, args):
        self.a = args[0]
        self.b = args[1]
        self.c = args[2]
```

#### The Benchmark

Each class is instantiated 100,000 times in a loop with the same constant input data, `["a", "b", "c"]`; the newly created object is then appended to a list. This process is timed by calling `time.clock()` before and after it and retaining the difference between the two values. The `time.clock()` function has quite a poor resolution, but is immune to the process being put to sleep by the operating system's scheduler.

This is then repeated 10 times, and the smallest of these 10 values is retained as the performance of the process.
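The harness can be sketched as follows. This is not the post's original code: it uses `time.perf_counter()` (since `time.clock()` was removed in Python 3.8), and the `list` baseline here copies the input rather than reusing it, which differs slightly from the post's “no instantiation” reference:

```python
import time

def bench(factory, n=100_000, repeats=10):
    """Best wall-clock time over `repeats` runs of building n objects
    from the same constant input, mirroring the protocol above."""
    data = ["a", "b", "c"]
    best = float("inf")
    for _ in range(repeats):
        out = []
        start = time.perf_counter()
        for _ in range(n):
            out.append(factory(data))
        best = min(best, time.perf_counter() - start)
    return best

# e.g. relative cost of tuple conversion vs. copying into a list:
print(bench(tuple) / bench(list))
```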

#### The Results

The results from the benchmark are shown relative to the speed of using a simple `list`. As expected, the simple `list` is the fastest, since it requires no additional object instantiation. Below are the results:

• 1.000 list
• 2.455 tuple
• 3.273 Tuple
• 3.455 List
• 4.636 Slots
• 5.818 NamedTuple
• 6.364 OldClass
• 6.455 Class
• 6.909 TupleCustomInit
• 7.091 TupleCustomInitTuple
• 7.545 ListCustomInit
• 7.818 ListCustomInitList

#### Conclusion

One can draw several conclusions from this experiment:

• Not instantiating anything is much faster: even building a simple tuple out of the original list increases the run time by about 150%
• The slots feature makes object instantiation 28% faster compared to a regular class
• Deriving a class from a basic type and adding a custom `__init__()` method that calls the parent's `__init__()` adds a lot of overhead (instantiation is 7 to 8 times slower)

[ Posted on 15 September 2013 at 15:13 | no comments | ]

## Thursday, 21 March 2013

Categories: [ Informatique ]

Passwords are difficult to generate and to remember, and once you finally know how to type yours quickly, you don't want to change it. That's usually the moment when someone forces you to change it… Here is a synthesis of what I've found out about how to generate secure passwords.

#### Entropy as a measure of password strength

The strength of a password is usually expressed as its entropy, measured in bits. In a nutshell, the entropy is the base-2 logarithm of the total number of different passwords that can be created (given some construction rules). For example, if you know that a password is composed of a single character which may be a letter (uppercase or lowercase), a digit, a white space or a period (which conveniently makes 64 different symbols: 26 lowercase letters, plus 26 uppercase letters, plus 10 digits, plus 2 punctuation symbols), the entropy of that password is 6 bits (because 2^6 = 64). Non-integer entropy values are valid, so for example a single lowercase letter has an entropy of approximately 4.7 bits (because 2^4.7 ≈ 26). Adding one bit of entropy doubles the total number of possible passwords; a password made of 2 such characters therefore has an entropy of 12 bits, and a password made of 8 lowercase letters has an entropy of 37.6 bits.
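In code, the entropy of a truly random password is just length × log2(alphabet size); a quick sketch of the figures above:

```python
import math

def entropy_bits(length, symbols):
    """Entropy of a truly random password of `length` characters drawn
    uniformly from an alphabet of `symbols` symbols."""
    return length * math.log2(symbols)

print(round(entropy_bits(1, 64), 1))   # 6.0 bits
print(round(entropy_bits(8, 26), 1))   # 37.6 bits
```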

#### The human factor

The above entropy measurement is true only if the password is truly randomly generated and each symbol has an equal probability of being selected. Humans seem to be rather bad at generating random passwords, and in NIST Special Publication 800-63, a human-generated password of length 8 is estimated to have an entropy of 18 bits.

Moreover, if the password is a word from a natural language, the number of possible different passwords is equal to the size of the vocabulary in that language; for the English language this is estimated at about 250,000 words. The entropy of a password made of a single English word is therefore approximately 17.9 bits.

#### Forcing more entropy

To increase the entropy of human-generated passwords, it is quite common to enforce rules, such as a minimum length, the use of more symbols than just the 26 lowercase letters, and forbidding common words. The NIST report above estimates that the additional symbols add 6 bits of entropy and the dictionary check adds 5 bits. An 8-character password following all the rules above is therefore estimated to have an entropy of 30 bits. For comparison, a randomly-generated password of 8 characters chosen amongst the most common symbols on a computer keyboard (80 symbols) has an entropy of 50.6 bits.

Such passwords become, however, difficult to remember, especially if you have to memorize several of them and are forced to change them every few months.

And they are still pretty insecure.

There are two different methods for cracking a password.

The first method consists in connecting to the service asking for the password and trying passwords until the right one is found. This method is slow: one can expect to test at most a few dozen passwords per second (let's say 100 passwords per second). Using entropy to measure the strength of the attack, that represents 6.6 bits per second, or 23.0 bits/day, 27.9 bits/month, or 31.5 bits/year.

This gives the following times:

• 18 bits password: at most 45 minutes
• 30 bits password: at most 128 days
• 50.6 bits password: at most 560,000 years

The thing here is that reasonably secure services will not allow that many attempts.

The second method for cracking passwords requires a list of encrypted passwords e.g., stolen from a badly secured service. Depending on the encryption algorithm used with those passwords and the hardware at hand, one can expect an attacker to try between 2,000 and 15,500,000,000 passwords per second (between 11 and 33.8 bits/s) with a standard desktop computer (equipped with a modern GPU).

This gives the following times:

• 18 bits password: at best 128 seconds, at worst a few microseconds
• 30 bits password: at best 6 days, at worst less than a second
• 50.6 bits password: at best 274 years, at worst 32 hours.

#### How many bits are needed?

The times indicated above represent the maximum time needed for cracking the password. There is a 50% chance of cracking it in half that time, and a 10% chance of cracking it in a tenth of that time.

So if a password needs to be safe for at least 1 year, cracking it must take at least a year, i.e., 33.8 + 24.9 = 58.7 bits (the entropy of the number of passwords tested per second plus the “entropy” of the number of seconds per year). There is, however, a chance that the password will be cracked in less time. Adding 1 bit of entropy halves the attacker's chance of finding the password in a given time, and adding 10 bits reduces it to 1 chance in 1024 to crack it in that time; 7 bits would reduce it to 1 chance in 128, which may be sufficient as well.

#### How to generate such a password?

A 68.7-bit password means 15 lowercase letters, or 11 common-keyboard symbols. These have to be selected by a truly random process, such as dice rolls, nuclear disintegration or electronic thermal noise. Six-sided dice are easy to come by, and the Diceware method is probably the easiest one for generating secure and easy-to-remember passwords. A roll of 5 dice selects one word in a list of 7,776, providing 12.9 bits of entropy. The strength of the password therefore depends on the number of words that are selected (by repeatedly rolling 5 dice):

• 1 word: 12.9 bits, cracked in less than a microsecond
• 2 words: 25.8 bits, cracked in less than 4 milliseconds
• 3 words: 38.7 bits, cracked in less than 30 seconds
• 4 words: 51.6 bits, cracked in less than 2 days
• 5 words: 64.5 bits, cracked in less than 55 years
• 6 words: 77.4 bits, cracked in less than 423,000 years
• 7 words: 90.3 bits, cracked in less than 3231 million years
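These numbers can be reproduced with a few lines, assuming the fastest attack rate quoted above (about 15.5 billion guesses per second); a sketch:

```python
import math

WORDS = 7776            # Diceware list size (6**5)
GUESSES_PER_S = 15.5e9  # fastest "second method" rate quoted above

def diceware_bits(n_words):
    """Entropy of a passphrase of n_words Diceware words."""
    return n_words * math.log2(WORDS)

def worst_case_seconds(bits):
    """Time to enumerate the whole password space at the rate above."""
    return 2 ** bits / GUESSES_PER_S

print(round(diceware_bits(5), 1))  # 64.6 bits
print(round(worst_case_seconds(diceware_bits(5)) / 3.15e7))  # worst-case years
```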

The Diceware method also allows adding a random non-letter symbol to the password, adding about 9.5 bits of entropy for a 20-character password (about 5 words). A 5-word password with one random symbol can therefore be considered secure for at least a few years.

#### How long will the password be safe?

Between 2002 and 2011, CPU and GPU computing power was multiplied by 10 and 100 respectively, i.e., +0.37 and +0.74 bits/year in password-cracking terms. The rate of growth will probably not remain that high, but if one wants to keep a password for more than a year or two, it should be taken into consideration. For example, if a password must remain safe for the next 4 years, add 3 bits. The 5-word password with one random symbol will therefore be safe for the next 7 years.

One must also consider that computer clusters are becoming affordable, and that a 25-GPU computer has been built for the express purpose of cracking passwords. This type of machine adds about 4 bits to the capacity for cracking encrypted passwords (the “second method” above). That makes the 5-word Diceware passphrase safe for barely over a year. Finally, cloud computing and parasitic computing using cloud-based browsers may reduce the safety period even further.

#### Conclusion

The only truly secure passwords are long and truly random; any other method for generating passwords leads to easily crackable passwords, and therefore gives a false sense of security. Long enough passwords still need to be changed, but not too often; 3 years is a reasonable lifetime. The Diceware method allows generating such passwords in a simple way.

Finally, memorizing a lot of passwords is difficult and induces people to reuse the same passwords. There is a simple solution to that, promoted by Bruce Schneier: write down your password and keep it in your wallet.

[ Posted on 21 March 2013 at 22:50 | no comments | ]

## Tuesday, 5 March 2013

### Seven Segment Display

Categories: [ Science ]

I recently discussed with a friend how to use a computer to read the 3-digit numbers from a device using seven-segment displays. One solution we came up with was to put a phototransistor in front of each segment, read the seven on/off signals and recognize the digits. I then wondered whether it was possible to use fewer than seven phototransistors per digit.

A minimum of four segments is obviously required (ten digits need at least four bits of information), but after a bit of computer-aided experimentation, I found that five segments are enough: if you remove the bottom and the lower-right segments, you can still identify all ten digits.

So with only 15 inputs instead of 21, you can read the 3 digits using a 16-bit I/O expander (e.g., an MCP23017; this one even has internal 100 kΩ pull-up resistors, so it may be that nothing other than the phototransistors is needed).
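The computer-aided experimentation can be redone in a few lines; a sketch using one common segment encoding (a = top, b = upper-right, c = lower-right, d = bottom, e = lower-left, f = upper-left, g = middle — actual displays may use slightly different digit shapes):

```python
# Common 7-segment encodings of the digits 0-9 (one letter per lit segment).
DIGITS = {
    0: "abcdef", 1: "bc",     2: "abdeg", 3: "abcdg", 4: "bcfg",
    5: "acdfg",  6: "acdefg", 7: "abc",   8: "abcdefg", 9: "abcdfg",
}

def distinguishable(removed):
    """True if all ten digits remain distinct after ignoring
    the segments listed in `removed`."""
    seen = {frozenset(set(s) - set(removed)) for s in DIGITS.values()}
    return len(seen) == 10

print(distinguishable("dc"))   # True: bottom + lower-right can be dropped
print(distinguishable("abc"))  # False
```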

[ Posted on 5 March 2013 at 22:19 | no comments | ]

## Tuesday, 19 February 2013

### Useless Machine

Categories: [ Bricolage ]

When you flip the switch, a finger comes out of the box and flips it back. Strictly useless, and therefore absolutely indispensable.

From the outside, it is a plain white wooden box, with no ornament apart from the switch on the lid.

When you push the switch, here is what happens:

A video of the box in action is also available.

Under the hood are a motor fitted with a 1:228 gearbox, powered by 4 rechargeable batteries. When you flip the switch, the “finger” rises and flips the switch back, which reverses the polarity and makes the motor run in the opposite direction. The “finger” then returns to its initial position and ends its travel on a second switch, which cuts the power. The motor stops.

Mechanically and electrically there is nothing complicated, apart from positioning the parts in exactly the right place.

Finally, a video of the machine in action with the lid open. Notice how the “finger” bounces when it reaches the switch, just as the power is cut, then settles on it a second time more gently and stays in place.

[ Posted on 19 February 2013 at 21:51 | no comments | ]

## Monday, 10 December 2012

### Ugly Ruby

Categories: [ Informatique ]

A few months ago, I started using Ruby for work. Twice I burnt my fingers on the following behaviour of Ruby:

```
def foo
  "bar"
end

puts "foo = #{foo.inspect}"

if foo.nil?
  foo = "quux"
  puts "Not coming here"
end

puts "foo = #{foo.inspect}"
```
The method `foo` returns the string `"bar"`, which is therefore not `nil`. The result any sane coder expects would be

```
foo = "bar"
foo = "bar"
```
What actually comes out when you run this snippet is

```
foo = "bar"
foo = nil
```

I remember reading that in order to decide whether `foo` is a call to the `foo` method or the use of the local variable `foo`, Ruby scans the preceding code for any assignment to `foo`. As it happens, the local variable `foo` gets assigned inside the `if` clause, but that statement is never executed. My guess is that Ruby then decides that the local variable `foo` is used after the `if` clause but never actually assigned to, so its value is `nil`. The `foo` method still exists and returns `"bar"`, as expected, when called as `foo()`.

[ Posted on 10 December 2012 at 22:30 | no comments | ]

## Saturday, 16 June 2012

### Should You Run in the Rain?

Categories: [ Science ]

Do you get less wet if you run in the rain instead of walking? Modelling a pedestrian as a rectangular box of height h and thickness l, and writing vm for the pedestrian's speed and vp for the fall speed of the rain, one obtains the speed v of the rain relative to the pedestrian. The velocity vector makes an angle a with the vertical. One can then compute the projection of the pedestrian's frontal surface in the direction of this vector, Sh, as well as the projection corresponding to the top surface, Sl. Multiplying these surfaces by the speed of the rain gives the water flow rate, and multiplying by the time needed to cover a distance d in the rain gives the volume of water poured onto the pedestrian. The details of the computation are left as an exercise to the reader, but the result is V = d(h + l·vp/vm).

Consequently, to minimise the volume of water V, one must walk as fast as possible: the rain-speed term becomes negligible once vm/vp >> l/h. For a pedestrian 1.80 m tall and 0.3 m thick, the pedestrian's speed must be greater than vp/6. Various sources on the Web indicate that rain falls at about 9 m/s near the ground, so the pedestrian must move faster than 1.5 m/s, i.e. 5.4 km/h.
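Plugging in the numbers is a one-liner; a sketch of the final computation (variable names match the derivation above):

```python
# The rain-speed term stops dominating once the pedestrian moves
# faster than (l / h) * vp.
h, l = 1.80, 0.30  # pedestrian height and thickness, in metres
vp = 9.0           # fall speed of rain near the ground, in m/s

v_min = (l / h) * vp
print(round(v_min, 2))        # 1.5 m/s
print(round(v_min * 3.6, 2))  # 5.4 km/h
```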

[ Posted on 16 June 2012 at 18:35 | no comments | ]

## Tuesday, 15 May 2012

### E-mail Git Patches

Categories: [ Informatique/Git ]

This is, in a nutshell, how to send commits to the (single) maintainer of a project by e-mail.

First, configure the recipient of the patches:

`git config sendemail.to "John Smith <john.smith@example.com>"`

Then make a set of patches from the commits, e.g.,

`git format-patch HEAD^`

or

`git format-patch origin/master..master`

Send the patches by e-mail:

`git send-email *.patch`
(this sends one e-mail per patch).

On the receiving side, the maintainer can then feed the content of each e-mail into `git am` to apply the patches and record new commits.

The `git send-email` command is packaged separately in Debian; the `git-email` package needs to be installed.

[ Posted on 15 May 2012 at 19:47 | no comments | ]

## Monday, 14 May 2012

### More Git Recipes

Categories: [ Informatique/Git ]

#### Resolve a binary file conflict with Git

Found on lostechies.com

In case of conflict with a binary file during a merge, you have two choices for resolving it:

• Keep your version (the one already in the working tree):
`git add thefile`
• Use the other version:
`git checkout --theirs -- thefile; git add thefile`

Then commit the changes.

#### Show the content of a deleted file

Found on stackoverflow.com

`git show commitid:path/to/file`

The trick here is that one must use the full path to the file (relative to the repository's root).

#### Restore a deleted file in a Git repo

Found on stackoverflow.com

Find the commit that deleted the file:
`git rev-list -n 1 HEAD -- thefile`
Then check out the file from the commit before that one:
`git checkout commitid^ -- thefile`

[ Posted on 14 May 2012 at 13:39 | no comments | ]

## Wednesday, 9 May 2012

### Super Mario Dice

Categories: [ Jeux ]

I recently got interested in dice games and discovered Zombie Dice. The principle is very simple and the game is fast, but I didn't like the theme, so I decided to re-theme it as Super Mario. I had first thought of the original Super Mario Bros. running on the NES, but I found the graphics of the SNES re-release (Super Mario All Stars) much nicer, so I took the graphics from that version.

The dice come from blankdice.co.uk, from whom I have already bought a number of dice. The faces have a slight recess that fits a square sticker 14.5 mm on a side. I printed the stickers on an A4 sticker sheet and cut out the squares with a rotary cutter without pressing too hard, which cuts the sticker without cutting the backing paper and makes the small stickers much easier to peel off.

Zombie Dice is sold in a blister pack and comprises only a big cup and 13 dice. It requires something for keeping score (paper and pencil, tokens…). I therefore made a folding track of 13 squares plus a starting square, again by printing on an A4 sticker sheet. Instead of the cup, I am going to make a cloth bag big enough to hold all the components (I still need to find the fabric).

A friend offered to make pawns shaped like Super Mario characters in exchange for me making her a few copies of the game. We'll see how it turns out.

[ Posted on 9 May 2012 at 18:28 | 2 comments | ]

## Saturday, 24 March 2012

Categories: [ Bricolage/Arduino | Informatique ]

For my car heater controller I decided to use Alan Burlison's scheduler. I like it because it keeps the main program file reasonably short and allows separating the code into multiple objects. I don't know whether it makes the software easier or harder to write and maintain, but I find it fun to do it this way, and that's all that counts.

To implement 2-way communication between the JeeLink (master) and the JeeNode (slave) using Jean-Claude Wippler's RF12 library, I created a `Listener` object and a `Speaker` object that deal with receiving data and sending data respectively, while the `Protocol` object implements the higher-level protocol.

Here's what the slave's `.pde` file looks like. Notice how it contains only definitions and a bit of initialization, but no big mess of code?

```
#define NB_ELEMENTS(a) (sizeof(a) / sizeof(a[0]))

Speaker speaker;
Protocol protocol(&speaker);
Listener listener(&protocol);

void setup() {
    rf12_initialize(SLAVE_ID, RF12_868MHZ, HEATER_GROUP);
}

void loop() {
    scheduler.run(); // infinite loop
}
```
Here's a sample of the slave's `Listener`.

```
class Listener: public Task { // Task from Alan Burlison's scheduler
  public:
    Listener(Protocol * protocol):
        protocol(protocol)
    {};

    bool canRun(uint32_t now); // Task's interface
    void run(uint32_t now);    // Task's interface

  private:
    Protocol * protocol;    // higher-level protocol handler
    uint8_t recv_buffer[BUFFER_LEN];
    uint8_t recv_buffer_len;
};

bool Listener::canRun(uint32_t now) {
    if (rf12_recvDone())
        return (rf12_crc == 0 && rf12_len <= BUFFER_LEN);
    return false;
}

void Listener::run(uint32_t now) {
    recv_buffer_len = rf12_len;
    memcpy((void *)recv_buffer, (void *)rf12_data, recv_buffer_len);
    if (rf12_hdr == (RF12_HDR_CTL | (MASTER_ID & RF12_HDR_MASK)))
        protocol->got_ack();
    else {
        if (RF12_WANTS_ACK) {
            rf12_sendStart(RF12_ACK_REPLY, 0, 0); // send the requested ACK
            rf12_sendWait(0);
        }
        protocol->handle(recv_buffer, recv_buffer_len);
    }
}
```

And here's the slave's `Speaker`. Note that the `Speaker` tries to send data only if its `buffer_len` is greater than zero. This avoids calling `rf12_canSend()` when it's not necessary (according to the RF12 driver, you must not call `rf12_canSend()` unless you intend to send data immediately after calling it). When the `Protocol` wants to send something, it needs to get the `Speaker`'s buffer with `get_buffer()`, fill the buffer with data, and then call `send()`. I also implemented a retry mechanism in case no ACK has been received from the master.

```
class Speaker: public Task { // Task from Alan Burlison's scheduler
  public:
    Speaker();
    uint8_t* get_buffer();
    void send(uint8_t len, bool ack);
    void got_ack(); // called by the Protocol when it gets an ACK
    bool canRun(uint32_t now);  // Task interface
    void run(uint32_t now);     // Task interface

  private:
    uint8_t buffer[BUFFER_LEN];
    uint8_t buffer_len;
    bool with_ack;
    uint8_t retry_count;
    unsigned long next_retry_millis;
};

bool Speaker::canRun(uint32_t now) {
    if (buffer_len > 0 && retry_count > 0
            && millis() > next_retry_millis)
        return rf12_canSend();
    return false;
}

void Speaker::run(uint32_t now) {
    if (with_ack && retry_count == 1) {
        buffer_len = 0; // all retries used up, give up
    }
    uint8_t header = (with_ack ? RF12_HDR_ACK : 0)
                     | RF12_HDR_DST | MASTER_ID;
    rf12_sendStart(header, buffer, buffer_len);
    rf12_sendWait(0);
    if (with_ack) {
        retry_count--;
        next_retry_millis = millis() + SEND_RETRY_TIMEOUT;
    }
    else
        buffer_len = 0;
}

void Speaker::send(uint8_t len, bool ack) {
    with_ack = ack;
    buffer_len = len;
    retry_count = SEND_RETRY_COUNT + 1;
    next_retry_millis = millis();
}

void Speaker::got_ack() {
    buffer_len = 0;
}
```

The master's code is very similar, you can check it there.

[ Posted on 24 March 2012 at 16:17 | no comments | ]

## Thursday, 22 March 2012

### Car Heater Controller

Categories: [ Bricolage/Arduino ]

Finnish winters are cold, and petrol engines don't like starting when it's very cold. That's why cars are often equipped with an electric heater that preheats the cooling liquid for some time before the engine is started. The duration of this pre-heating depends on the temperature (in French). Moreover, since I don't leave home every morning at the same time, I don't want to start heating the car at the same time every day, even if the outside temperature doesn't change much from day to day. Hence this project of a remote-controlled switch for the car heater.

The system is composed of two Arduino-compatible parts: a master, connected to the (always-on) home computer, and a slave, in the garage. The master is a JeeLink and the slave is based on a JeeNode. Master and slave communicate by radio (868 MHz, a licence-free low-power communication device band in Europe).

#### The master

Not much to say about the master, the hardware is a standard JeeLink, which for the purpose of this project is really only a radio transceiver on a Serial-over-USB interface.

#### The slave

The electronics are very simple, and only a few components are needed. I paid special attention to selecting components that are specified to work from -40 °C (although I have no idea how well the device works at that temperature).

The slave is organised around a JeeNode (vertical, in the right-hand corner of the picture), and has the following three features:

• It controls a relay (the black box above the orange PCB on the picture), which can be open or closed.
• It measures the outside temperature with a DS18S20 sensor.
• It measures the current flowing out of the relay using a current transformer (the black ring around the brown wire on the left side of the picture).

Moreover, it has a power supply (the black box on the lower left corner of the picture).

You can also notice that the mains cable (a 5 m outdoor extension cord) has an earth wire that has not been severed. The live (brown) and neutral (blue) wires have been cut and connected to the relay. Power for the power supply is taken from the plug side of the cable, before the relay (so it is always connected to the mains).

##### Power supply

The power comes from a compact switching power supply that converts 230 V AC into 12 V DC (maximum output power: 4 W). In case the power supply fails, the 1 A fuse (in the holder on the big red wire in the lower-left corner of the picture) should blow before the whole thing catches fire. Also, although the power supply is designed to be placed on a PCB, I decided not to have any 230 V AC on the PCB, so I soldered the wires straight to its input pins, isolated them with heat-shrink tube and added epoxy for strength (the pins are not so strong, and I didn't want to break them once they were connected to the thick, not-so-flexible wires).

The relay requires 12 V, hence the output voltage of the power supply. The JeeNode requires a 3.3 V supply; its onboard voltage regulator could take the 12 V directly, but would then only output a low current (less than 100 mA). Adding a 5 V regulator (7805) in front of the JeeNode lets the latter draw more current from its on-board regulator.

##### Relay

The relay (specified to switch up to 400 V AC and 30 A) requires 160 mA to be activated. It is therefore controlled via a BC337 transistor, which can withstand that current. The base of the transistor is connected to one digital pin of the JeeNode via a 2 kΩ resistor, which lets the JeeNode open or close the relay by driving that pin High or Low.

##### Temperature sensor

The DS18S20 transmits the temperature information digitally over a 1-Wire bus, and therefore requires nothing more than a 4.7 kΩ pull-up resistor. It works quite happily at 3.3 V at the end of a 15 m cable (an old phone extension cord). Note that since I have three wires in the cable, I didn't even try to power the sensor with parasitic power (I read somewhere that it doesn't work well at 3.3 V anyway). The Arduino OneWire library does all the work for you; all you need is to connect the data pin of the sensor to one digital input of the JeeNode.

##### Current sensor

Finally, the current transformer is placed around the live wire coming out of the relay. The design is based on a page at OpenEnergyMonitor that does not appear to exist anymore, but this one should be a good start.

Basically, the current transformer produces a current (not a voltage) proportional to the current flowing through the mains wire that passes through it (the ratio depends on the number of turns; my transformer has 1012). A burden resistor (68 Ω in my case) across the two wires produces an AC voltage proportional to this current, varying between −1.65 and +1.65 V (corresponding to mains currents between −23 and +23 A). One wire of the transformer is then connected to a voltage divider made of two 20 kΩ resistors (with a filtering 47 μF capacitor), and the other wire goes to one of the analog inputs of the JeeNode. This way, the analog input sees a voltage that varies between 0 and 3.3 V, which is within the tolerance of the device.

After that, the software samples the analog value 3000 times, applies a high-pass filter to remove the DC offset, and a bit of simple math computes the RMS current. After some calibration (using a 60 W lamp, a 500 W halogen lamp, a 1400 W flat iron and a 2300 W electric kettle, and comparing my measurements against those of a wattmeter), I found the reported current to be quite accurate (to about 0.01 A, which is more than enough for my purposes).

##### Box

The PCB and the relay are screwed on a piece of plexiglas, and the whole device is placed in a project box to protect it from dust. Zip ties around the cables near the holes prevent the cables from being accidentally pulled out of the box.

##### Schematics

The schematics are available as a PNG picture, and in Eagle format. Note that since I used strip board to assemble the circuit, I haven't paid attention to choosing the right component packages, nor the right types of components. The Eagle schematic is therefore provided for information purposes only; generating a board from it is not going to produce correct results.

#### The Software

It's all available there. You can obtain it with git using the command
`git clone http://weber.fi.eu.org/software/arduino/heater_controller/.git/`

[ Posté le 22 mars 2012 à 23:23 | 1 commentaire | ]

## Jeudi, 15 mars 2012

### Very big, very small

Catégories : [ Science ]

Scaling the solar system down to human sizes leads to more interesting comparisons. Let's assume the Earth is the size of a pin head (2 mm in diameter). We then have the following results:

• A human being living on the surface of the pinhead would be 0.30 nm tall (the size of two dihydrogen molecules)
• Mount Everest would be 1.4 μm tall (1/5 of the thickness of a strand of spider's web silk)
• Geostationary satellites would orbit the pin head at 5 mm from its surface but the International Space Station would be at only 60 μm from the surface (the thickness of a hair)
• A lightyear would be 1500 km long (the distance between Copenhagen, Denmark and Rome, Italy), so the closest star (Proxima Centauri) would be 6600 km away (the distance between Paris, France and New Delhi, India, or between New York, USA and Berlin, Germany)
• The Oort cloud would be 3000 km in diameter (the distance between Madrid, Spain and Helsinki, Finland, with the volley ball representing the Sun located somewhere near Cologne, Germany)
• The diameter of the Milky Way (our galaxy) would be equivalent to the distance between the Sun and Earth
• The whole universe would be 20·10^12 km in diameter, that is 2 lightyears (the diameter of the Oort cloud, or half the distance to Proxima Centauri)

[ Posté le 15 mars 2012 à 10:06 | pas de commentaire | ]

## Mercredi, 14 mars 2012

### The Solar system is big!

Catégories : [ Science ]

Aerospaceweb.org

The solar system is big, that's well known. But how big, exactly?

Let's assume the Sun is the size of a volleyball (about 21 cm in diameter). We would then have the following relative planet sizes and distances to the ball:

• Mercury: 1 mm diameter (a small pin head), 9 m from the ball (a bus)
• Venus: 2 mm diameter (a pin head), 16 m from the ball (a semi-trailer truck)
• Earth: 2 mm diameter (a pin head), 23 m from the ball
• Mars: 1 mm diameter (a small pin head), 35 m from the ball (a blue whale)
• Jupiter: 22 mm diameter (a walnut), 120 m from the ball (maximum length of a football/soccer field)
• Saturn: 19 mm diameter (a smaller walnut), 220 m from the ball (one TGV train)
• Uranus: 8 mm diameter (a cherry stone), 460 m from the ball (a double TGV train)
• Neptune: 8 mm diameter (a cherry stone), 700 m from the ball (the length of the Avenue de l'Opéra in Paris, France)

By comparison, the Moon would be 0.5 mm in diameter (a grain of sand) and 6 cm from the pin head (the length of the little finger).

[ Posté le 14 mars 2012 à 23:20 | pas de commentaire | ]

## Vendredi, 9 mars 2012

### Large Numbers

Catégories : [ Science ]

My daughter asked me yesterday what is the largest number I know. The answer was “a googolplex”, which is 10^googol, with googol = 10^100.

While you can write a googol on a sheet of paper (it's a one followed by 100 zeros), you cannot write a googolplex on paper. Or can you? How much paper would you need?

Let's assume you can write 10 000 digits on one sheet of A4 paper. You therefore need 10^96 sheets of paper. One tree can produce 10 000 sheets, so you need 10^92 trees; since there are only about 10^12 trees on Earth, you'd need the trees of 10^80 Earths to provide all the paper. Not going to work.

Now let's see if there's even enough matter in the universe to make all this paper: assuming that all the matter in the universe can be converted to paper, is there enough of it? Paper is made of cellulose, chains of glucose units; once linked into cellulose, each unit (C6H10O5) weighs 162 g/mol. A 5 g sheet of A4 paper therefore contains about 1.9·10^22 such units, each of which is made of 162 hadrons (protons and neutrons). A sheet of paper is thus made of about 3·10^24 hadrons, and a hadron weighs roughly as much as an atom of hydrogen. The universe contains roughly 10^80 atoms (mostly hydrogen), which translates into roughly 10^56 sheets of paper. We'd need 10^40 universes to make all the needed paper. Not going to work either.

That was a very big number.

[ Posté le 9 mars 2012 à 11:44 | 1 commentaire | ]

## Lundi, 13 février 2012

### Git Status in Shell Prompt

Catégories : [ Informatique/Git ]

I thought it would be very convenient to see from the shell's prompt which branch I am currently working on. Of course, someone had this idea well before me, and I found this implementation and this variant (the second adds a space between the name of the branch and the symbols indicating the state of the branch relative to the remote branch it is tracking).
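A minimal version of the idea, for bash, looks like this (a sketch only; the implementations linked above are more complete and also show the branch's state relative to its remote):

```shell
# Show the current git branch in the bash prompt (sketch).
git_branch() {
  git symbolic-ref --short HEAD 2>/dev/null
}
# The $(...) in PS1 is re-evaluated every time the prompt is displayed.
PS1='\u@\h:\w$(b=$(git_branch); [ -n "$b" ] && printf " (%s)" "$b")\$ '
```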

[ Posté le 13 février 2012 à 15:30 | pas de commentaire | ]

## Jeudi, 9 février 2012

### Stripboard Sketch Paper

Catégories : [ Bricolage ]

I don't like to start soldering on a stripboard without first sketching the positions of the different components on paper. While this is easily done on the back of an envelope, 1:1 scale sketch paper would be handy, so I made a printable one.

[ Posté le 9 février 2012 à 08:55 | pas de commentaire | ]

## Mardi, 31 janvier 2012

### Temperature Histogram

Catégories : [ Science ]

I have collected slightly over 10 years' worth of hourly temperature readings in Jyväskylä. The sources of temperature data over the years have been the University of Jyväskylä's weather station, the METAR data for Jyväskylä's airport, and the Finnish meteorological service (currently in use); together these services were available 98.7% of the time on average.

The other day I wondered what a histogram of those values would look like, and here it is. The values have been rounded to their closest integer, and range from −35 °C to +34 °C. One disproportionately large peak can be observed at 0 °C, probably due to melting ice/freezing water remaining on the sensor's surface and keeping it at exactly that temperature while the air around it was slightly different (if you have a better explanation, don't hesitate to leave a comment!). Surprisingly, there is a second peak at 13 °C (explanations are welcome too!).

Finally, the average temperature is 4.0 °C, which is consistent with the average minimum (−1.4 °C) and maximum (7 °C) values given in Wikipedia's article about Jyväskylä.

[ Posté le 31 janvier 2012 à 22:54 | pas de commentaire | ]

## Jeudi, 5 janvier 2012

### Slot-Car Track, Christmas 2011

Catégories : [ Jeux ]

No slot-car track last summer, but I put one together during these holidays. It is made up mostly of curves, and I managed to complete a lap in (slightly) less than 4 seconds.

[ Posté le 5 janvier 2012 à 19:46 | pas de commentaire | ]

## Mercredi, 28 décembre 2011

### Timelapse Controller

Catégories : [ Bricolage/Arduino ]

The timelapse photography controller I helped a friend build is finally complete. He built the hardware, I wrote the software. The latter is needlessly complicated, but I wanted to have fun with C++ and multiple inheritance, so here it is. The device is controlled by a rotary encoder with an integrated push button, and has a 2×16 character LCD display. It also has a plug for connecting it to the camera (via a 2-channel optocoupler) and is powered by 4 AA batteries.

The UI is composed of 4 screens:

• a status screen, showing how many pictures have been taken so far, as well as the voltage of the batteries
• a start/stop screen
• a screen for setting the number of pictures
• a screen for setting the time interval between the pictures.

Turning the knob moves from one screen to the other, while pressing its button activates the current screen (e.g., starting or stopping, or allowing to change the value of e.g., the time interval).

The last two screens are disabled while the timer is started, and re-enabled when it is stopped. Also, the display is turned off after a 10 s timeout, and switched back on when the button is pressed or the knob is rotated. This reduces the current draw from about 24 mA to 8 mA, so a set of 2000 mAh rechargeable batteries should last over 200 hours.

[ Posté le 28 décembre 2011 à 20:40 | pas de commentaire | ]

## Vendredi, 2 décembre 2011

### Don't Forget pinMode()

Catégories : [ Bricolage/Arduino ]

I spent hours with a friend trying to solve the following problem: an LED and a 430 Ω resistor are connected to a pin of an Arduino (actually an RBBB powered at 3.3 V). Using `digitalWrite(pin, HIGH)` did light the LED, but very dimly. Weirder still, the pin showed only 1 V instead of 3.3 V. After two hours of head-scratching, I looked it up on Google and found the answer: “don't forget to call `pinMode(pin, OUTPUT)`…”

At boot time, the pins are set as inputs. On an input pin, `digitalWrite(pin, HIGH)` enables the internal pull-up resistor (20 kΩ to 50 kΩ), which allows the pin to source a little bit of current, at a quite low voltage. It was enough to dimly light the LED, but not enough to get the optocoupler (that was initially connected to the pin) to work properly.

[ Posté le 2 décembre 2011 à 16:38 | pas de commentaire | ]

## Vendredi, 11 novembre 2011

### Traffic Shaping

Catégories : [ Informatique ]

I have an asymmetric ADSL connection (1024 kbps downstream, 512 kbps upstream), and when I'm downloading a large file, SSH connections become unresponsive. After a bit of reading, I found a traffic shaping script that keeps interactive SSH connections responsive, at the cost of a slightly limited download speed. The explanations are from the Linux Advanced Routing and Traffic Control HOWTO, in the cookbook chapter.

The explanation goes like this:

“ISPs know that they are benchmarked solely on how fast people can download. Besides available bandwidth, download speed is influenced heavily by packet loss, which seriously hampers TCP/IP performance. Large queues can help prevent packet loss, and speed up downloads. So ISPs configure large queues.

These large queues however damage interactivity. A keystroke must first travel the upstream queue, which may be seconds (!) long and go to your remote host. It is then displayed, which leads to a packet coming back, which must then traverse the downstream queue, located at your ISP, before it appears on your screen.

This HOWTO teaches you how to mangle and process the queue in many ways, but sadly, not all queues are accessible to us. The queue over at the ISP is completely off-limits, whereas the upstream queue probably lives inside your cable modem or DSL device. You may or may not be able to configure it. Most probably not.

So, what next? As we can't control either of those queues, they must be eliminated, and moved to your Linux router. Luckily this is possible.

Limit upload speed: By limiting our upload speed to slightly less than the truly available rate, no queues are built up in our modem. The queue is now moved to Linux.

Limit download speed: This is slightly trickier as we can't really influence how fast the internet ships us data. We can however drop packets that are coming in too fast, which causes TCP/IP to slow down to just the rate we want. Because we don't want to drop traffic unnecessarily, we configure a 'burst' size we allow at higher speed.”

It really does wonders, provided you set the DOWNLINK speed to 800 kbps (80% of my downlink) and the UPLINK to 440 kbps (85% of my uplink). I tried with 900 kbps instead of 800, and it didn't work. One day I will take the time to think about why, but for now I'm just happy that it works properly.
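The script boils down to a handful of `tc` commands; a sketch of the idea only (the interface name and the exact qdisc parameters are my assumptions; use the linked script, which is more elaborate, for real):

```shell
#!/bin/sh
# Sketch of the LARTC cookbook idea (assumed interface eth0;
# rates from the post: 440 kbit up, 800 kbit down). Requires root.
DEV=eth0
UPLINK=440
DOWNLINK=800

# Upstream: shape outgoing traffic below the modem's rate, so the queue
# builds up here under Linux's control instead of inside the modem.
tc qdisc add dev $DEV root tbf rate ${UPLINK}kbit latency 50ms burst 1540

# Downstream: police incoming traffic, dropping packets that come in
# too fast so that TCP slows down to the rate we want.
tc qdisc add dev $DEV handle ffff: ingress
tc filter add dev $DEV parent ffff: protocol ip prio 50 u32 \
  match ip src 0.0.0.0/0 police rate ${DOWNLINK}kbit burst 10k drop flowid :1
```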

Next step: try to get this to work on the ADSL modem/router (luckily running linux and accessible with ssh) instead of the desktop.

[ Posté le 11 novembre 2011 à 22:09 | pas de commentaire | ]

## Mercredi, 2 novembre 2011

### Git in a (Very Small) Nutshell

Catégories : [ Informatique/Git ]

When I started to use git and read the man pages, I was sorely missing a brief description of how Git's features and concepts relate. Now that I finally understand (at least, I think) how Git works, I wrote this document. It's not a tutorial (the existing ones are good enough that I don't need to write another one), but rather a summary of how Git's main features relate to the jargon used in the man pages.

Let's say you have a set of files in your working tree. Git works by saving a full copy (snapshot) of this set; this is called a commit. When you want to make a new commit using Git, you first need to tell Git which files are going to be part of this commit. You do this with the `git add my_file` command. The files are then added to the index, which is the list of files that are going to compose the commit. You then run `git commit`, which creates a new commit based on the files listed in the index. You are also prompted for a message that describes the commit. The message is structured as a heading (the first line of the message) separated from the body of the message by an empty line. Lines starting with a hash symbol are comments and are not recorded into the message.

Adding a new file to the index and creating a commit containing this file has the side effect of letting Git track this file. If you want to create a commit composed of all the tracked files, you can run `git commit -a`, which implicitly adds all the tracked files to the index before creating a new commit.

A commit is identified by the SHA1 hash of its content, e.g., `cdf18108b03386e1b755c1f3a3feaa30f9529390`. Any non-ambiguous prefix of that hash can be used as a commit ID, e.g., `cdf1810`.

The add/commit mechanism lets you split a set of changes into multiple commits: you create a commit for a subset of your files, then another commit for the rest of them.
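This cycle is easy to try out in a throwaway repository (assuming git is installed; the user name and e-mail are placeholders):

```shell
# The add/commit cycle, splitting two files into two commits.
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email you@example.org
git config user.name "You"
echo one > a.txt
echo two > b.txt
git add a.txt              # only a.txt goes into the index...
git commit -q -m "add a"   # ...so this commit contains a.txt alone
git add b.txt
git commit -q -m "add b"
git rev-list --count HEAD  # prints 2
```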

#### Creating a new repository

##### For a new project

The command `git init` creates a repository in the current directory (a `.git` directory that holds all the data necessary for Git to work). You can then add the files you need to have under version control (using `git add`; wildcards such as `*` are accepted) and create the initial commit with `git commit`.

##### Copying an existing repository

To copy an existing repository, use the `git clone` command. Most services that offer source code as Git repositories indicate the necessary Git command line to run.

#### Finding a commit

To view a summary of the changes that have happened in the repository, you can use `git log`; the top of the list is the most recent commit. To view the succession of changes (as diffs) that were made, use `git log -p`.

Git does its best not to lose anything you have recorded. The command `git reflog` shows a log of how the tips of branches have been updated, even if you have done acrobatic things.

#### Branches

When you make changes to your working tree and create a new commit, Git links the new commit to the commit that represents the state of the working directory before the changes (called in this context the parent of the new commit). The chain composed of the new commit, its parent, its parent's parent and so on, is called a branch. The name of the default branch is “master”. The most recent commit in a branch is called its HEAD.

A branch is nothing more than a name and the commit identifier of its tip; this is called a ref. For example `refs/heads/master` is the ref for the master branch. Finding the commits that compose the branch is a simple matter of following the tip's parent, and the parent's parent, and so on.

If you decide to fork your work at some point, create a new branch by running `git branch new_branch`. This command creates the branch but does not switch to it (changes and commits will still be appended to the current branch). To effectively change branch, you need to check out the HEAD of the new branch by running `git checkout new_branch`. From this point on, changes and commits will be appended to the new branch.

##### Merging

If at some point it is necessary to merge the content of e.g., the new branch into the “master” branch, you need to checkout “master” and then run `git merge new_branch`.

If Git doesn't know how to merge two branches, it complains about conflicts and lets the user edit the incriminated files by hand. This is done by choosing, in the sections of these files delimited by `<<<<<<<` and `>>>>>>>` markers, which variant is to be retained. Once the edits have been made, the changes need to be committed (with `git commit -a`).

##### Checking out

You can check out any commit with `git checkout <commit>` and thus have your working directory reflect the state of the repository at any point in time. When you do that, you are not on any branch anymore, which causes various warning messages (such as “You are in 'detached HEAD' state”) and makes Git behave in ways you may not expect (that is, if you don't yet understand properly how Git works). To go back to a “normal” situation, just run `git checkout master` (or any other branch that exists). To avoid going into detached HEAD state in the first place, use `git checkout -b new_branch <commit>` to create a new branch that starts at `<commit>`.

If you have made local changes, Git won't let you checkout another branch. You must either commit them or reset the working tree before being allowed to do the checkout.

##### Reset

The command `git reset` can do multiple things. One of its most common uses (`git reset --hard`) is to cancel all changes made to the working tree since the last commit.

If you specify a commit ID after `git reset`, it moves the HEAD of the current branch back to that commit, which becomes the new HEAD; all commits after this point are removed from the branch (but not from the repository! You can always restore the old HEAD by finding its commit ID with `git reflog`).
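Both points can be checked in a throwaway repository (assuming git is installed; names are placeholders):

```shell
# git reset --hard moves the branch tip back; the discarded commit is
# still in the repository and findable via the reflog.
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email you@example.org
git config user.name "You"
echo 1 > f && git add f && git commit -q -m "first"
echo 2 > f && git commit -q -a -m "second"
git reset -q --hard HEAD~1      # the branch tip is back at "first"
git rev-list --count HEAD       # prints 1
git reflog | grep -q second     # ...but "second" is still recoverable
```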

#### Working with remote repositories

##### Pull (and Fetch)

Some time after you have cloned a public repository, you may want to update your local copy so that it matches the latest version available at the original repository. This update is done with `git pull`. When the repository was cloned, Git created a remote (a link to the source repository) called “origin” by default. Under the hood, `git pull` calls `git fetch` to retrieve the commits from all the relevant branches on “origin”, and then calls `git merge` to merge those changes into the current local branch.

Note that `refs/remotes/origin/master` is the ref to the master branch at “origin”, but it is actually a branch stored locally that reflects the “master” branch on the “origin” repository. This kind of ref is used for specifying what remote branch is tracked by what local branch when using `git fetch`. Typically, `+refs/heads/*:refs/remotes/origin/*` indicates that e.g. the local branch “master” tracks the remote branch “origin/master” (“*” represents a wildcard).

##### Push

If you have write permission on the remote repository, you can send your changes using `git push` (it defaults to the “origin” remote). Note that the HEAD of the branch to which you push must be the parent of your changes. If this is not the case, the push fails and you are asked to first pull from the remote repository to get the latest version, fix potential conflicts, and only then push your changes.

It is also important to remember that you cannot normally push to a repository that has a working tree. The remote repository must have been created with the `git init --bare` command.
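A quick way to convince yourself, in a throwaway directory (assuming git is installed; names are placeholders):

```shell
# Pushing works when the receiving repository is bare (no working tree).
dir=$(mktemp -d) && cd "$dir"
git init -q --bare server.git        # bare: no working tree here
git clone -q server.git work
cd work
git config user.email you@example.org
git config user.name "You"
echo hello > f.txt
git add f.txt
git commit -q -m "first"
git push -q origin HEAD              # accepted, because server.git is bare
```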

[ Posté le 2 novembre 2011 à 08:36 | pas de commentaire | ]

## Dimanche, 23 octobre 2011

### Git Recipes

Catégories : [ Informatique/Git ]

Here are a few recipes I use with git.

#### Using git log

Show which files have been modified by each commit: `git log --name-status`

View the successive changes for a given file: `git log -p -- my_file` (latest change first)

View the changes at word level instead of line level: `git log --color-words`

Make `--color-words` more readable with LaTeX files: add `*.tex diff=tex` to the repository's `.git/info/attributes` or to your `$HOME/.gitattributes` (read `man gitattributes` for more on this; it supports other languages too)

#### Browseable Web Repository

Here is how to make an online, browsable web repository on a web server (I assume you have SSH access to it).

On the server, run:
```
$ mkdir some_directory
$ cd some_directory
$ git init
$ cat > .git/hooks/post-receive << EOF
#!/bin/sh
GIT_WORK_TREE=.. git checkout -f
GIT_WORK_TREE=.. git update-server-info --force
EOF
$ chmod +x .git/hooks/post-receive
```
Then, in the source repository, run:
```
$ git remote add web username@my.web.server:path/to/some_directory/.git
$ git push web master
```

(“username”, “my.web.server” and “path/to/” are exactly what you think they are.) Note the “.git” at the end of the path; it has to be there because `git push` is going to send its data into that directory.

When you run `git push web master` to upload the content of the source repository to the web repository, the `post-receive` hook checks out the latest version. From then on, you don't need to specify the “master” branch any more; simply running `git push web` is enough.

[ Posté le 23 octobre 2011 à 15:57 | pas de commentaire | ]

## Mercredi, 12 octobre 2011

### Autofs

Catégories : [ Informatique ]

Autofs automatically mounts a filesystem when you change directory into its mount point. The process is frozen while the filesystem is being mounted; the `chdir()` then completes and enters the root of the newly mounted filesystem.

I have been using autofs for several years, at first for mounting USB sticks, but now also for SMB shares and SSHFS. It is meant for command-line users; I don't doubt that modern desktop environments provide the same features for GUI users.

The following describes my setup on Debian Squeeze; there may be a more modern way of doing things (the original setup was on Debian Sarge). I assume that you know how to administer a Debian system (in other words, I won't be held responsible for damaging your system if you follow these instructions without understanding what they mean).

#### Autofs

The first step is to install the `autofs`, `bsdutils`, `coreutils`, `lockfile-progs`, `gawk` (or `mawk`), `sed` and `util-linux` packages (most of these are probably installed already). The `wget` package (and command) lets me write concise commands below, but feel free to use any other tool to download the necessary files from my web page.

#### USB Automounter

This setup automatically mounts any USB Mass Storage device, without the need to manually configure anything (as is the case in this Debian tutorial).

Edit `/etc/auto.master` as root and add the following line:

```
/media  /etc/auto.usb  --timeout=3
```

Then run the following as root:

```
# mkdir -p /media
# mkdir -p /var/run/usbautomount
# mkdir -p /etc/usbautomount
# wget -O /etc/usbautomount/usbautomount http://weber.fi.eu.org/software/autofs/usbautomount
# chmod 755 /etc/usbautomount/usbautomount
# wget -O /etc/udev/usbautomount.rules http://weber.fi.eu.org/software/autofs/usbautomount.rules
# cd /etc/udev/rules.d
# ln -s ../usbautomount.rules z60_usbautomount.rules
# /etc/init.d/autofs restart
```

You then need to edit `/etc/usbautomount/usbautomount` to change the references to mweber to your own username, and possibly change the mounting options (see line 142 of the script). I still use latin1 as character encoding, so I want filenames to be automatically translated into latin1 for VFAT filesystems. Other filesystems are mounted without particular options.

The idea here is that when you plug a USB Mass Storage device into the computer, `udev` runs the `usbautomount` script. The script builds a name based on the USB device's vendor, model, instance (in case the physical device contains more than one USB Mass Storage device, such as multi-card readers or smartphones with internal and removable flash memory) and partition number, and creates a symlink in `/var/run/usbautomount` that points to a directory of the same name in `/media`. When you access the symlink, `automount` creates the directory in `/media` and mounts the file system from the USB device there.

If you `chdir()` out of that directory, `automount` unmounts the filesystem after 3 seconds. If you `chdir()` into the directory again, `automount` mounts it again. The short unmount timeout lets you unplug the device almost immediately after cd-ing out of the mount point (provided there is no more data to be written to the device). When you unplug the device, a script (generated by `usbautomount` when the device was plugged in) is run to remove the symlink from `/var/run/usbautomount`.

The cherry on the cake of this setup is the following:

```
$ mkdir $HOME/mnt
$ ln -s /var/run/usbautomount $HOME/mnt/USB
```

This way, `$HOME/mnt/USB` is automatically populated with links to the devices that you plug into your computer.

#### SMB/CIFS shares

The `smbclient` and `cifs-utils` packages need to be installed.

Run as root:

```
# mkdir -p /mnt/smb
```

Edit `/etc/auto.master` as root and add the following line:

```
/mnt/smb  /etc/auto.smb  --timeout=10
```

(`/etc/auto.smb` comes with the `autofs` package).

Run this as your regular user:

```
$ ln -s /mnt/smb $HOME/mnt/smb
```

If you have an SMB/CIFS server named "foo" containing shares called "bar" and "quux", then going into `/mnt/smb/foo` will mount "bar" and "quux" in `/mnt/smb/foo/bar` and `/mnt/smb/foo/quux`. Automatic unmounting will happen 10 seconds after leaving `/mnt/smb/foo`.

If "foo" requires credentials (login and password), you can put them in `/etc/auto.smb.foo` as follows:

```
username = my_username
```

#### SSHFS

SSHFS itself requires some setup. In the following, I assume that the client computer is called "local" and that it accesses a remote computer called "remote" as user "user", over SSH and SSHFS.

First, install the `sshfs` package. Then you need to create an SSH key for root@client, and authorize this key for user@remote. In the end, root@local must be able to log into user@remote using an SSH key instead of a password.

Once this is working, run as root:

```
# mkdir -p /mnt/ssh
```

Then edit `/etc/auto.master` as root and add the line:

```
/mnt/ssh  /etc/auto.ssh  --timeout=10,--ghost
```

Then create `/etc/auto.ssh` and add a line that looks like this (change "user" and "remote" as needed):

```
remote -fstype=fuse,rw,nodev,nonempty,noatime,allow_other,max_read=65536 :sshfs\#user@remote\:
```

By default, SSHFS will mount the home directory of "user@remote" onto `/mnt/ssh/remote`. If you need to access another directory (e.g., `/tmp`), just append that path to the end of the line in `/etc/auto.ssh`:

```
remote-tmp -fstype=fuse,rw,nodev,nonempty,noatime,allow_other,max_read=65536 :sshfs\#user@remote\:/tmp
```

Finally, run this as your regular user:

```
$ ln -s /mnt/ssh $HOME/mnt/ssh
```

#### CDROM

It occurred to me the other day that I could automount CDs and DVDs as well, but I haven't thought about the details yet. I remember the feature being very annoying in Solaris twelve years ago, where the system mounted the CD in a directory named after the label of the CD's filesystem, and never removed this directory. After mounting a few discs, the automount directory was filled with various directories with abstruse names, and you had to guess which one contained your data (I suppose `mount` would have told me what I wanted to know; I don't remember whether I tried that).

[ Posted on October 12, 2011 at 11:49 | 3 comments | ]

## Sunday, September 25, 2011

### Leek crumble

Categories: [ Cuisine ]

This recipe is adapted from what I saw in an episode of Jamie Oliver's 30-Minute Meals. I saw less than half of the episode, so I had no idea what the dish was supposed to look like.

#### Ingredients

• 3 leeks
• 1 large onion
• 140 g thinly sliced smoked bacon
• 3 cloves of garlic
• 4 large slices of bread
• olive oil
• thyme
• rosemary
• 100 mL water

#### Preparation

Brown the bacon, cut into pieces, then pour it (with the melted fat) into an oven dish. Add the sliced onion (not too thin), then cover with the sliced leeks. Pour in the water, sprinkle with the thyme and rosemary, and put into an oven preheated to 200 °C for 20 minutes. Put the bread, the unpeeled garlic cloves and a generous glug of olive oil into a blender, and blend until you get oil-soaked bread crumbs. Sprinkle them over the dish, and return it to the oven for 10 minutes.

#### Comments

• I served this dish, along with roast potatoes, as a side for pieces of salmon trout fillet grilled in a pan.
• Having read the recipe since, I think the onions and leeks should be browned together with the bacon, probably without water, and the dish should go into the oven only to brown the bread crumbs.
• The original recipe is a mock cassoulet, so tomato purée and white beans are added to the preparation, and sausages are laid on top.

[ Posted on September 25, 2011 at 18:10 | no comments | ]

## Sunday, September 18, 2011

### Manual umlaut in LaTeX

Categories: [ Informatique ]

The Stork Bill font has no accented characters at all. To polish the Decktet rule booklet, I needed an “ä” and an “ö” to write “Quäsenbö” (the title of a game). I therefore hacked together an umlaut by hand out of two dots:

```
\makeatletter

\newcommand{\umlaut}[1]{%
\sbox\@tempboxa{#1}%
\@tempdima -.5\wd\@tempboxa
\@tempdimb 1.1\ht\@tempboxa
\sbox\@tempboxa{..}%
#1\hskip\@tempdima\raisebox{\@tempdimb}{\usebox\@tempboxa}%
}

\makeatother
```

The command is used like this:

`Qu\umlaut{a}senb\umlaut{o}`

The result is passable for the font in question, which, being a decorative font, does not need to be perfectly regular, but it gives nothing good with e.g. lmodern (which does not need it anyway, since the latter already contains the appropriate accented characters).

[ Posted on September 18, 2011 at 15:52 | no comments | ]

## Thursday, September 8, 2011

### Autofs and CD drive

Categories: [ Informatique ]

After yesterday's tutorial on Autofs, I gave some thought to using Autofs to automatically mount a CD/DVD. Here's the result (read the previous tutorial first for the complete picture).

Run as root:

```
# mkdir -p /mnt/cdrom
```

Then edit `/etc/auto.master` as root and add the line:

```
/mnt/cdrom  /etc/auto.cdrom  --timeout=10
```

Then create `/etc/auto.cdrom` and add a line that looks like this (change `/dev/cdrom` to the proper device if needed):

```
cd1   -fstype=iso9660,ro,nosuid,nodev :/dev/cdrom
```

(shamelessly copied from the supplied `/etc/auto.misc`). Finally, run this as your regular user:

```
$ ln -s /mnt/cdrom/cd1 $HOME/mnt/cdrom
```

This setup is a bit crooked, because `autofs` is designed to watch one directory (`/mnt/cdrom` in this case) and create different subdirectories for different devices. In this case, however, we have only one device, which is always mounted at the same point. On the brighter side, if you have more than one CD/DVD/floppy/Zip drive, you can rename the references to `cdrom` to `removable` and add multiple lines to `/etc/auto.removable`, one for each drive. See `/etc/auto.misc` (that file comes by default with the `autofs` package) for further configuration possibilities.
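
As a sketch, a hypothetical `/etc/auto.removable` for a CD drive plus a floppy drive might look like the following (modelled on the entries in `/etc/auto.misc`; adapt the device names to your machine):

```
cd1     -fstype=iso9660,ro,nosuid,nodev :/dev/cdrom
floppy  -fstype=auto,rw,nosuid,nodev    :/dev/fd0
```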

[ Posted on September 8, 2011 at 15:59 | no comments | ]

## Thursday, August 25, 2011

### Second Page

Categories: [ Blog ]

Summaries and beers are not interesting to most people, so I replaced the optional filter with a mandatory “second page”. To disable this filter, add `?all=1` to the URL. The same applies to RSS feeds.

[ Posted on August 25, 2011 at 18:34 | no comments | ]

## Tuesday, August 23, 2011

### Dieharder Test

Categories: [ Informatique ]

It took about a week to generate a bit over 100 MB of random data with the Arduino-based hardware random number generator. I used the JeeLink-based one, and the last chunk of random data (50 MB) was generated at a speed of 1562 bits/s.

And now for some statistical tests.

Fourmilab's ent test returns:

```
Entropy = 7.999998 bits per byte.

Optimum compression would reduce the size
of this 106315776 byte file by 0 percent.

Chi square distribution for 106315776 samples is 240.83, and randomly
would exceed this value 72.90 percent of the times.

Arithmetic mean value of data bytes is 127.4987 (127.5 = random).
Monte Carlo value for Pi is 3.140987091 (error 0.02 percent).
Serial correlation coefficient is 0.000165 (totally uncorrelated = 0.0).
```

I also ran the Dieharder test suite, which ran 40 tests on the data. Out of those, I got:

• 34 PASSED
• 1 POOR (1 RGB Bit Distribution Test out of 12 instances of the test)
• 2 POSSIBLY WEAK (Diehard OPSO and Diehard Squeeze Test)
• 3 FAILED (Diehard Sums Test and two instances of Marsaglia and Tsang GCD Test)

At the end of the series of tests, the software indicates that “The file file_input_raw was rewound 181 times”, meaning that I should generate a lot more than 100 MB of random data (ideally around 18 GB, which means running the generator for some 3.5 years) so as not to have to rewind the file for any of the tests.
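
As a back-of-the-envelope check of those figures (a sketch; the exact time depends on which of the quoted bit rates you use):

```python
# How much data Dieharder would have wanted (file size times the
# number of rewinds), and how long the generator would need at the
# 1562 bits/s quoted above.
file_bytes = 106315776
needed = file_bytes * 181
print(round(needed / 2**30))             # → 18 (GiB)
years = needed * 8 / 1562 / (365 * 86400)
print(round(years, 1))                   # roughly 3 years at this rate
```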

The important question, however, is: 34 passed out of 40, is that good enough or not?

[ Posted on August 23, 2011 at 10:48 | 2 comments | ]

## Thursday, August 18, 2011

### Sunday journalism

Categories: [ Râleries | Science ]

Take an AP dispatch, Un Suédois essayait de fractionner des atomes dans sa cuisine (via fr.news.yahoo.com), which begins with (emphasis added)

“A Swede arrested for having tried to achieve nuclear fission in his kitchen”

Picked up by Zigonet, this dispatch becomes Il essayait de construire un réacteur nucléaire dans sa cuisine (via fr.news.yahoo.com, once again) and begins with (emphasis added)

“A 31-year-old Swede was arrested at the end of July for having performed nuclear fission at his home”

You don't need a PhD in nuclear physics to understand that trying to achieve and performing do not mean the same thing. Admittedly, you do need to have learned how to read to understand the difference; perhaps that is where the problem lies.

A bit further on, AP states

“he had caused a small meltdown (fusion) on his stove”

which Zigonet interprets as

“he had managed to cause nuclear fission in his kitchen”

A primary-school pupil can tell that the words fusion and fission are not the same, even though there are only two letters of difference between them. Everyone will agree that, for example, lapin and clampin, although only two letters apart, do not mean the same thing: one is an adorable long-eared rodent, while the other is not.

High-school atomic physics classes teach that the only nuclear fusion achieved on Earth so far involves hydrogen atoms, and that if this apprentice Oppenheimer had achieved nuclear fission, the police would have found no one left to arrest (though Stockholm's overpopulation problems would have been promptly solved). After the recent events at Fukushima, however, everyone should know what a nuclear fuel meltdown is (which is not the same thing as nuclear fusion): the fuel, which naturally produces heat, can melt under the effect of that very heat. One can therefore assume that this is what happened in that kitchen. UPDATE: after reading the man's blog, I am not even sure it was that. He heated americium, radium and beryllium in sulfuric acid and it exploded, probably because the acid boiled (sulfuric acid boils at 337 °C, while the three elements melt at 1176, 700 and 1287 °C respectively).

In conclusion, a few words of difference completely change the meaning of a text, and publishing stuff on the Web does not make one a journalist.

[ Posted on August 18, 2011 at 17:41 | no comments | ]

Categories: [ Informatique ]

To protect the password card against theft, here is one possibility. First, randomly generate and memorize a secret key composed of 12 numbers between 0 and 35 (one for each line of the card). Then, for each letter of the mnemonic, shift that letter to the right (wrapping around the end of the line back to its beginning if needed) by the amount indicated by that line's secret-key digit before reading the symbol.
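
As a sketch (the card layout and column labels here are hypothetical), the shifted lookup could be implemented as:

```python
import string

# Hypothetical card layout: 36-column rows labelled A-Z then 0-9.
COLS = string.ascii_uppercase + string.digits

def shifted_lookup(card_rows, mnemonic, secret_key):
    """Read one password symbol per row, shifting each mnemonic
    letter's column to the right by that row's secret-key digit,
    wrapping around the end of the row."""
    out = []
    for row, ch, shift in zip(card_rows, mnemonic, secret_key):
        out.append(row[(COLS.index(ch) + shift) % len(row)])
    return "".join(out)
```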

For an 8-symbol mnemonic, the entropy of this secret key is 41.4 bits, which gives the card a reasonable amount of protection even if it is stolen.
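
The 41.4-bit figure can be checked: an 8-symbol mnemonic uses only 8 of the 12 memorized digits, each uniformly drawn from 36 values:

```python
import math

# 8 key digits actually used, each carrying log2(36) bits of entropy.
entropy_bits = 8 * math.log2(36)
print(round(entropy_bits, 1))  # → 41.4
```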

One obvious drawback is of course the strain it puts on the brain (although some may say this kind of workout is good for the organ's health) and the time it takes to read one password. Another drawback is that the secret key is hard to remember, and if you forget it, you lose all your passwords.

Translating the secret key into letters and digits might make it easier to remember.

[ Posted on August 18, 2011 at 17:24 | no comments | ]

## Tuesday, August 16, 2011

Categories: [ Informatique ]

The PasswordCard sounds like a good idea (and it actually may be in practice), but I don't like it so much for three reasons:

• The entropy is too low (64 bits spread over 232 symbols) and generated from an unknown source of entropy.
• You have to memorize a cryptic symbol and a color for each password, which makes it easy to forget which symbol/color pair is associated with what password.
• I didn't invent it :)

My current idea is to generate a similar card using a hardware random number generator, so that each symbol on the card has an entropy of 6 bits (2592 bits in total on the card). I would also like to get rid of the method that consists in choosing one spot on the card and reading in one direction, and instead use the card as a lookup table for a substitution cipher: you choose a cleartext mnemonic for a given website with a length corresponding to the length of the password you want to generate (e.g., “EXAMPLEC” for an 8-symbol password to be used on `example.com`), and you generate the corresponding password by looking up the symbol corresponding to “E” on the first row of the card, then the one corresponding to “X” on the second row, “A” on the third, “M” on the fourth, and so on.
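
A minimal sketch of this substitution lookup, assuming hypothetical rows whose columns are labelled A to Z:

```python
import string

# Hypothetical card: one row of random symbols per password position;
# the mnemonic letter selects the column within that row.
def card_password(card_rows, mnemonic):
    cols = string.ascii_uppercase
    return "".join(row[cols.index(ch)]
                   for row, ch in zip(card_rows, mnemonic))
```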

The drawbacks are numerous:

• Reading a password this way is very slow and error-prone (the alternating gray and white areas and the repeated header lines make it only slightly less painful).
• Generating two passwords from the same card is fine as long as the two mnemonics don't share the same characters in the same positions (e.g., “EXAMPLEC” and “EXNIHILO” share “EX” in positions 1 and 2) (if this is the case, the entropy of those particular symbols will be divided by the number of passwords sharing them).
• The mnemonics are meant to be easy to remember, and are therefore easy to guess for whoever steals the card (that is, however, only slightly worse than the case of the stolen PasswordCard).
• It requires a computer to generate a card that is readable in a small format, so the random bits are temporarily stored on a system that may be compromised (if the physical size of the card does not matter, you can generate such a card by rolling a pair of 6-sided dice about 729 times and writing the symbols down by hand).

There is one benefit though: the card looks very geeky :)

As usual, any comment/idea/criticism is welcome.

[ Posted on August 16, 2011 at 23:14 | 1 comment | ]

## Monday, August 15, 2011

### Hardware Random Number Generator

Categories: [ Bricolage/Arduino | Informatique ]

Software random number generators are usually so-called pseudo-random number generators, because they produce a deterministic sequence of numbers that has some of the properties of true random numbers. Obtaining genuinely random numbers, however, requires a non-deterministic process as the source of randomness. Thermal noise in electronics and radioactive decay have been used, usually requiring an external device to be built and plugged into the computer.

Peter Knight's TrueRandom generates random bits by using the Arduino's ADC (with nothing connected to the analog input pin) to measure electronic noise. It flips the pin's internal pull-up resistor while the measurement takes place to increase the amount of noise. The software then keeps only the least significant bit of the result and filters it using Von Neumann's whitening algorithm (read pairs of bits until the two bits of a pair differ, and return 0 on a 01 transition and 1 on a 10 transition). There are several functions that generate different types of numbers based on those random bits.
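
A minimal sketch of the whitening step just described:

```python
def von_neumann(bits):
    """Von Neumann whitening: consume bits pairwise; emit 0 for a
    (0, 1) pair, 1 for a (1, 0) pair, and discard (0, 0) and (1, 1)."""
    it = iter(bits)
    for a, b in zip(it, it):
        if a != b:
            yield a  # a 01 pair yields 0, a 10 pair yields 1
```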

I reused that code, modified it to allow using another pin than the Arduino's Analog0, and made my own random number generator. I also wrote a Python script that reads the bits from the serial port, uses the SHA-1 hashing algorithm to distil the data (the raw data has about 6 bits of entropy per byte; distillation produces data with 7.999 bits of entropy per byte; based on the work of Jeff Connelly on IMOTP) and writes them to the standard output or into a file. On my Duemilanove, it can output about 1500 bits/s, while it outputs 1300 bits/s on a JeeLink. The latter makes it an easy-to-transport device that is reasonably sturdy and fits in a pocket, even if its features (it contains a radio transceiver) are a bit overkill for the job (not to mention expensive).
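
A sketch of such a distillation step (the chunk size here is an assumption, not the parameter used by the original script): hashing 64 raw bytes, which carry roughly 384 bits of entropy at 6 bits per byte, down to a 20-byte digest leaves nearly 8 bits of entropy per output byte:

```python
import hashlib

def distil(raw, chunk_size=64):
    """Distil low-entropy raw bytes: hash each full chunk_size-byte
    chunk with SHA-1 and concatenate the 20-byte digests."""
    out = bytearray()
    for i in range(0, len(raw) - chunk_size + 1, chunk_size):
        out += hashlib.sha1(raw[i:i + chunk_size]).digest()
    return bytes(out)
```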

I also adapted the core of the TrueRandom software to run on my ButtonBox (which is conveniently always connected to my desktop computer). There the output rate is a mere 300 bps, but that is still reasonably fast for generating a few random numbers when needed (for example, for generating one's own PasswordCard). Access to the ButtonBox is shared among multiple clients using button_box_server.py, so a modified Python script was used for obtaining the stream of random bits through the `button_box_server`.

I haven't had the patience to generate a few megabytes of random data to test the generator with the DieHarder test suite, but the output of Fourmilab's ent test tool looks reasonable.

[ Posted on August 15, 2011 at 11:08 | 2 comments | ]

## Friday, August 12, 2011

Categories: [ Informatique ]

It all started a few days ago with this xkcd strip. Someone pointed passwordcard.com out to me, and it made me wonder how safe the passwords generated with that tool are. Those passwords are meant to be used on all those websites that require you to create a user account with a password. Using a single password for all those websites means that when the attacker of one of those websites gets your password, he can access your account on every other website where you have an account.

Beware that I'm no mathematician, and neither am I a specialist in cryptography or information theory, but here are my thoughts on this system.

The generator is based on what looks like a 64-bit key, so in theory the entropy is 64 bits, which is reasonably high (it would take 6×10^8 years to break at 1000 attempts per second). However, since you need to feed the key to an unknown web server, the practical entropy is much lower, since someone other than you knows the key. But let's assume you can generate the card yourself on a secure computer.

Here are a few examples of password types and strengths:

• 1 password of 6 symbols: 35.7 bits of entropy, cracked in 1.8 years
• 1 password of 7 symbols: 41.7 bits of entropy, cracked in 112 years
• 1 password of 8 symbols: 47.7 bits of entropy, cracked in 7000 years
• 2 passwords of 6 symbols each: 32 bits of entropy, cracked in 50 days
• 2 passwords of 7 symbols each: 32 bits of entropy, cracked in 50 days
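
The single-password figures above can be reproduced, assuming a 62-symbol alphabet (letters and digits), which matches the quoted entropies:

```python
import math

# Entropy of an n-symbol password drawn from a 62-symbol alphabet,
# and brute-force time at 1000 attempts per second.
def password_entropy(n_symbols, alphabet=62):
    return n_symbols * math.log2(alphabet)

def crack_years(bits, attempts_per_second=1000):
    return 2 ** bits / attempts_per_second / (365 * 86400)

print(round(password_entropy(6), 1))               # → 35.7
print(round(crack_years(password_entropy(6)), 1))  # → 1.8
```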

However, if the card is stolen, the thief only has to test a few tens of thousands of combinations to find a password made of 4-8 symbols (29 × 8 symbols, 8 reading directions and 5 possible password lengths is 55680), which represents 15.8 bits of entropy and takes less than a minute to crack. Losing the card is therefore a bad move.

In conclusion, the password card is fine under the following conditions:

• Use a real random number for the key (e.g., by rolling a 6-sided die 25 times) or a hardware random number generator (there will be a post on that soon).
• Use the card for passwords totaling no more than 10 symbols (best to use only one password of 8, 9 or 10 symbols).

Disclaimer: once again, I'm no specialist in cryptography or information theory, but the above is based on how I understand those things. It may be completely wrong.

[ Posted on August 12, 2011 at 21:52 | 2 comments | ]

## Thursday, July 14, 2011

### 5 years

Categories: [ Blog ]

5 years of blogging: 914 posts (plus 73 on the microblog), 371 comments and 209,900 spams.

[ Posted on July 14, 2011 at 22:27 | no comments | ]

## Saturday, July 9, 2011

### ATSHi

Categories: [ Informatique ]

I followed my dream, and I wrote the Automatic Transparent Syntax HIghlighting software.

I have files (mainly source code) put as-is on my web site. Those files can be browsed with a regular web browser, and Apache's internal file indexing is used for accessing the directory structure. When the user requests a (source code) file of a known type, it would be nice to highlight the syntax. `atshi.php` does just that, automatically (no need for the webmaster to manipulate the files) and transparently (the user doesn't know a PHP program is being executed).

You can view the code, highlighted by itself of course (recursive computing is fun). It expects to be called as `/path/to/atshi.php/path/to/example.pl` and uses the `PATH_INFO` variable to find the path of the file to be displayed (in the example above, `example.pl`). It uses the GeSHi library for the actual syntax coloring (which is therefore a dependency), and theoretically supports any file format/programming language supported by GeSHi. In practice, however, ATSHi detects the files it should highlight (source code must be highlighted, but .tar.gz or .jpg must not) by first checking the filename's extension or, if the file doesn't have one, checking the “magic header” (the one starting with `#!`) followed by the name of the interpreter. It also recognizes the filename `Makefile`. If it's unable to recognize the file, it simply sends its content (with the proper `Content-Type` header) to the browser and lets the latter deal with it. Finally, the highlighted version also provides a link at the top of the page for downloading the raw file (`atshi.php` sends the raw file instead of the highlighted version when you append “?src” to the URL).

But this was all quite a simple job, even though it was my first PHP program (PHP is a horrible language, but the documentation is good, which helped a lot). The real problem was getting Apache to do my bidding. Here's a sample of the `.htaccess` I use:

```
RewriteEngine on
RewriteRule ^latex/latex.css - [L]
RewriteRule ^poppikone/poppikone.css - [L]
RewriteCond /home/mweber/weber.fi.eu.org/www/$1 -f
RewriteRule ^((software|leffakone|poppikone|latex)/.*)$ /atshi/atshi.php/$1
```

The two top `RewriteRule` lines (with the `L` flag) prevent ATSHi from highlighting the stylesheets used in the corresponding directories (those stylesheets must be sent as-is to the browser). The bottom `RewriteRule` actually catches specific paths and rewrites the URL to go through `atshi.php`. Finally, the `RewriteCond` just above it allows rewriting only if the path (captured as `$1` when the regexp in the `RewriteRule` below it is evaluated) is a regular file (highlighting directories doesn't make sense, does it?); note that you must put an absolute path in the condition.

The difficult part was not really getting the URL rewriting right (although mentioning the absolute-path trick in Apache's documentation would have been nice). The really difficult part was finding out that the bloody Firefox always looks in its cache instead of asking the server whether something has changed. So after making a change, Firefox still didn't show what it was supposed to show… Erasing the cache before every test is therefore a must.

[ Posted on July 9, 2011 at 20:01 | 2 comments | ]