Unary coding
Unary coding, or the unary numeral system, is an entropy encoding that represents a natural number n with n ones followed by a zero (if "natural number" is taken to mean a non-negative integer) or with n − 1 ones followed by a zero (if it is taken to mean a strictly positive integer). The code length is therefore n + 1 under the first definition and n under the second. Written vertically, a unary code rises and falls with n like the mercury in a thermometer, so it is sometimes called thermometer code. An alternative representation uses n or n − 1 zeros followed by a one, swapping the roles of the two symbols without loss of generality. For example, the first ten unary codes are:
| Unary code | Alternative | n (non-negative) | n (strictly positive) |
|---|---|---|---|
| 0 | 1 | 0 | 1 |
| 10 | 01 | 1 | 2 |
| 110 | 001 | 2 | 3 |
| 1110 | 0001 | 3 | 4 |
| 11110 | 00001 | 4 | 5 |
| 111110 | 000001 | 5 | 6 |
| 1111110 | 0000001 | 6 | 7 |
| 11111110 | 00000001 | 7 | 8 |
| 111111110 | 000000001 | 8 | 9 |
| 1111111110 | 0000000001 | 9 | 10 |
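The two conventions above can be sketched in a few lines of Python (non-negative convention: n ones followed by a terminating zero):

```python
def unary_encode(n: int) -> str:
    """Encode a non-negative integer n as n ones followed by a zero."""
    return "1" * n + "0"

def unary_decode(bits: str) -> int:
    """Decode one unary codeword: count the ones before the terminating zero."""
    return bits.index("0")

assert unary_encode(4) == "11110"
assert unary_decode("11110") == 4
```

For the strictly-positive convention, encode n as `"1" * (n - 1) + "0"`; for the alternative representation, swap the roles of `"1"` and `"0"`.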
Unary coding is an optimally efficient encoding for the discrete probability distribution

P(n) = 2^(−n)

for n = 1, 2, 3, ….

In symbol-by-symbol coding, it is optimal for any geometric distribution

P(n) = (k − 1) k^(−n)

for which k ≥ φ ≈ 1.618, the golden ratio, or, more generally, for any discrete distribution for which

P(n) ≥ P(n + 1) + P(n + 2)

for n = 1, 2, 3, …. Although unary is the optimal symbol-by-symbol code for such distributions, Golomb coding achieves better compression for the geometric distribution because it does not consider input symbols independently but implicitly groups them. For the same reason, arithmetic coding performs better for general probability distributions, as in the last case above.
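The optimality claim for P(n) = 2^(−n) can be checked directly: the ideal code length for a symbol of probability p is −log2 p bits, which for this distribution equals n, exactly the length of the unary codeword under the strictly-positive convention. A quick numerical check:

```python
import math

# For P(n) = 2**-n the ideal code length -log2 P(n) is exactly n bits,
# which matches the unary codeword length under the strictly-positive
# convention (n - 1 ones plus a terminating zero).
for n in range(1, 11):
    p = 2.0 ** -n
    ideal = -math.log2(p)   # Shannon ideal length in bits
    unary_len = n           # len("1" * (n - 1) + "0")
    assert ideal == unary_len
```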
Unary coding is both a prefix-free code and a self-synchronizing code.
Unary code in use today
Examples of unary code uses include:
- In Golomb–Rice coding, unary encoding is used to encode the quotient part of the Golomb code word.
- In UTF-8, unary encoding is used in the leading byte of a multi-byte sequence to indicate the number of bytes in the sequence so that the length of the sequence can be determined without examining the continuation bytes.
- Instantaneously trained neural networks use unary coding for efficient data representation.
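The UTF-8 case can be illustrated concretely: the number of leading 1 bits in the lead byte is a unary count of the total bytes in the sequence. A minimal sketch:

```python
def utf8_sequence_length(lead_byte: int) -> int:
    """Read the unary count of leading 1 bits in a UTF-8 lead byte.
    0xxxxxxx -> 1 byte (ASCII), 110xxxxx -> 2, 1110xxxx -> 3, 11110xxx -> 4."""
    if lead_byte < 0x80:        # no leading ones: single-byte character
        return 1
    ones = 0
    mask = 0x80
    while lead_byte & mask:     # count leading 1 bits
        ones += 1
        mask >>= 1
    return ones

assert utf8_sequence_length(ord("A")) == 1
assert utf8_sequence_length("é".encode()[0]) == 2
assert utf8_sequence_length("€".encode()[0]) == 3
assert utf8_sequence_length("😀".encode()[0]) == 4
```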
Unary coding in biological networks
Unary coding is used in the neural circuits responsible for birdsong production. The nucleus in the brain of the songbirds that plays a part in both the learning and the production of bird song is the HVC (high vocal center). The command signals for different notes in the birdsong emanate from different points in the HVC. This coding works as space coding which is an efficient strategy for biological circuits due to its inherent simplicity and robustness.
Standard run-length unary codes
All binary data can be parsed as unary numbers in alternating run-lengths of 1s and 0s. This conforms to the standard definition of unary, i.e. N digits of the same symbol, 1 or 0. Every run-length has at least one digit, and thus represents a strictly positive integer.
| n | RL code | Next code |
|---|---|---|
| 1 | 1 | 0 |
| 2 | 11 | 00 |
| 3 | 111 | 000 |
| 4 | 1111 | 0000 |
| 5 | 11111 | 00000 |
| 6 | 111111 | 000000 |
| 7 | 1111111 | 0000000 |
| 8 | 11111111 | 00000000 |
| 9 | 111111111 | 000000000 |
| 10 | 1111111111 | 0000000000 |
| ... | ... | ... |
These codes are guaranteed to terminate validly on any length of data (when reading arbitrary data), and in the (separate) write cycle they allow an extra bit of information to be used and transmitted (the one carried by the choice of first bit) while maintaining overall and per-integer unary code lengths of exactly N.
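The run-length reading described above can be sketched as follows: any bit string, of any length, splits unambiguously into alternating runs, each a strictly positive unary count.

```python
def run_lengths(bits: str) -> list[tuple[str, int]]:
    """Split a bit string into alternating runs. Every run is at least one
    digit long, so any input of any length parses validly."""
    runs = []
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1                      # extend the current run
        runs.append((bits[i], j - i))   # (symbol, run length)
        i = j
    return runs

assert run_lengths("1110010") == [("1", 3), ("0", 2), ("1", 1), ("0", 1)]
```

Note that the first bit of the stream carries one extra bit of information, since the decoder learns only the run lengths plus which symbol the stream started with.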
Uniquely decodable non-prefix unary codes
The following is an example of uniquely decodable unary codes that are not prefix codes and are not instantaneously decodable:
| n | Unary code | Alternative |
|---|---|---|
| 1 | 1 | 0 |
| 2 | 10 | 01 |
| 3 | 100 | 011 |
| 4 | 1000 | 0111 |
| 5 | 10000 | 01111 |
| 6 | 100000 | 011111 |
| 7 | 1000000 | 0111111 |
| 8 | 10000000 | 01111111 |
| 9 | 100000000 | 011111111 |
| 10 | 1000000000 | 0111111111 |
| ... | ... | ... |
These codes also (when writing unsigned integers) allow an extra bit of information to be used and transmitted (the one carried by the choice of first bit). Thus they can transmit m integers of N unary bits each, plus 1 additional bit of information, within m·N bits of data.
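Decoding this variant can be sketched briefly: each codeword begins with a 1, so a codeword's value is the distance from its leading 1 to the next 1 (or to the end of the stream). Decoding is therefore unique but not instantaneous, since the decoder must see the start of the next codeword first.

```python
def decode_stream(bits: str) -> list[int]:
    """Decode a stream of '1 followed by n-1 zeros' unary codewords.
    Each value is the gap between consecutive 1s (or the end of stream)."""
    starts = [i for i, b in enumerate(bits) if b == "1"]
    starts.append(len(bits))    # sentinel: end of the final codeword
    return [starts[k + 1] - starts[k] for k in range(len(starts) - 1)]

# "1" + "1000" + "10" encodes the sequence 1, 4, 2
assert decode_stream("1100010") == [1, 4, 2]
```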
Symmetric unary codes
The following symmetric unary codes can be read and instantaneously decoded in either direction:
| Unary code | Alternative | n (non-negative) | n (strictly positive) |
|---|---|---|---|
| 1 | 0 | 0 | 1 |
| 00 | 11 | 1 | 2 |
| 010 | 101 | 2 | 3 |
| 0110 | 1001 | 3 | 4 |
| 01110 | 10001 | 4 | 5 |
| 011110 | 100001 | 5 | 6 |
| 0111110 | 1000001 | 6 | 7 |
| 01111110 | 10000001 | 7 | 8 |
| 011111110 | 100000001 | 8 | 9 |
| 0111111110 | 1000000001 | 9 | 10 |
| ... | ... | ... | ... |
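The pattern in the symmetric table above can be sketched as a generator (non-negative convention, following the table: "1" for 0, "00" for 1, and otherwise a run of n − 1 ones bracketed by zeros):

```python
def symmetric_unary(n: int) -> str:
    """Symmetric unary code (non-negative convention) from the table above:
    n = 0 -> '1', n = 1 -> '00', else a run of n - 1 ones bracketed by zeros.
    The codeword reads identically from either end."""
    if n == 0:
        return "1"
    return "0" + "1" * (n - 1) + "0"

codes = [symmetric_unary(n) for n in range(5)]
assert codes == ["1", "00", "010", "0110", "01110"]
assert all(c == c[::-1] for c in codes)   # palindromes: same in both directions
```

The alternative column is simply the bitwise complement of each codeword.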
Canonical unary codes
For unary values where the maximum length is known, one can use canonical unary codes that are of a somewhat numerical nature, different from character-based codes. With the largest length n known, numerical 0 (2^n − 1 in bijective notation) or −1 (2^(n+1) − 2 in bijective notation) is assigned as the boundary condition, equivalent to repeating a digit the maximum n times; then, for each step, the number of digits is reduced by one and the result is increased or decreased by numerical 1.
| n | Unary code | Bijective | Value | Standard bijective | Value | Alternative | Bijective | Value | Standard bijective | Value |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 2 | 2 | 1 | 1 | 0 | 1 | 1 | 2 | 2 |
| 2 | 01 | 12 | 4 | 11 | 3 | 10 | 21 | 5 | 22 | 6 |
| 3 | 001 | 112 | 8 | 111 | 7 | 110 | 221 | 13 | 222 | 14 |
| 4 | 0001 | 1112 | 16 | 1111 | 15 | 1110 | 2221 | 29 | 2222 | 30 |
| 5 | 00001 | 11112 | 32 | 11111 | 31 | 11110 | 22221 | 61 | 22222 | 62 |
| 6 | 000001 | 111112 | 64 | 111111 | 63 | 111110 | 222221 | 125 | 222222 | 126 |
| 7 | 0000001 | 1111112 | 128 | 1111111 | 127 | 1111110 | 2222221 | 253 | 2222222 | 254 |
| 8 | 00000001 | 11111112 | 256 | 11111111 | 255 | 11111110 | 22222221 | 509 | 22222222 | 510 |
| 9 | 000000001 | 111111112 | 512 | 111111111 | 511 | 111111110 | 222222221 | 1021 | 222222222 | 1022 |
| 10 | 0000000000 | 1111111111 | 1023 | 1111111111 | 1023 | 1111111111 | 2222222222 | 2046 | 2222222222 | 2046 |
These are canonical codes (distinct from canonical Huffman codes, where only the code book is discussed), "canonical" here meaning that the codes are processed numerically, as numbers rather than as strings. If more than one code of a given length is required, i.e. there are additional non-unary codes of some length, those are obtained by increasing or decreasing the values numerically without reducing the length. To obtain a set of codes of a given length, start with the boundary condition (usually 0…0) for the largest and last code and work upwards, increasing the value numerically until the codes of that length are exhausted; then chop a number of bits from the right and numerically increase the remaining number by 1 to get the first number of the next range (the set of numbers of a given length), and so on from the largest length to the smallest.

One can also start from the smallest length (giving it the numerical value of the largest boundary condition, such as 1 in binary or 2 in bijective) and work downwards, reducing the value numerically by 1 for each new code of the same length, or reducing the value by 1 and increasing the length (appending 2 or 22, etc., or setting the lower limit of the new range to 2n + 1 in binary, 2n + 2 in bijective, 4n + 3 in binary with a 2-bit increase for the next set, or 4n + 6 in bijective with a 2-bit increase). This allows codes to be constructed without knowing the symbol frequencies in advance. One can choose to grow the next set by 2 bits instead of 1, to fit 3 new symbols plus 1 slot for the possibility of new symbols, or 2 new symbols plus 2 slots for unknown symbols, to contain code lengths; once the lowest boundary condition is reached numerically, no more symbols can be added.

Both these methods can be used to create canonical (numerical) codes equal in length to any Huffman code set (limited in length and code size), and the small-to-large method can be used for any Huffman or non-Huffman, limited- or arbitrary-length code set. The advantage is that the parser is then numerical rather than character-based; see the referenced paper for a comparison of the number of memory accesses.
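The bijective values in the table above can be verified numerically. A small helper (illustrative, not part of the original text) evaluates a bijective base-k numeral, whose digits run from 1 to k with no zero:

```python
def bijective_value(digits: str, base: int = 2) -> int:
    """Evaluate a bijective base-k numeral (digits 1..k, no zero digit)."""
    v = 0
    for d in digits:
        v = v * base + int(d)
    return v

# Values taken from the canonical-code table above (row n = 3):
assert bijective_value("112") == 8     # bijective code column
assert bijective_value("111") == 7     # standard bijective column
assert bijective_value("221") == 13    # alternative, bijective column
assert bijective_value("222") == 14    # alternative, standard bijective column
```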
Goldbach Biunary codes
Goldbach biunary codes (or lookalikes) are two unary codes appended together that can represent non-trivial fractions (those not equal to 0 or 1). The length of the first unary code, n, represents the numerator, and the total length of the two unary codes, n + m, represents the denominator. Hence the total number of bits required to represent the entire fraction equals the denominator n + m itself, and the denominator costs no bits beyond the base itself.
| Code | Value |
|---|---|
| 11 | 1/2 |
| 101 | 1/3 |
| 011 | 2/3 |
| 1001 | 1/4 |
| 0101 | 2/4 |
| 0011 | 3/4 |
| 10001 | 1/5 |
| 01001 | 2/5 |
| 00101 | 3/5 |
| 00011 | 4/5 |
| ... | ... |
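One reading of the table's construction (a sketch, not a reference implementation): the code for num/den is num − 1 zeros, a one, den − num − 1 zeros, and a final one, so the first codeword's length is the numerator and the total length is the denominator.

```python
def goldbach_encode(num: int, den: int) -> str:
    """Goldbach biunary code for a non-trivial fraction 0 < num/den < 1:
    two zeros-terminated-by-one unary codewords, where the first has
    length num and the pair together has length den."""
    assert 0 < num < den
    return "0" * (num - 1) + "1" + "0" * (den - num - 1) + "1"

def goldbach_decode(code: str) -> tuple[int, int]:
    """Recover (numerator, denominator) from a Goldbach biunary code."""
    num = code.index("1") + 1   # length of the first unary codeword
    return num, len(code)       # total length is the denominator

assert goldbach_encode(3, 5) == "00101"
assert goldbach_decode("00101") == (3, 5)
assert goldbach_decode("11") == (1, 2)
```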
Normally the base is omitted on a per-digit basis in number representation because all digits are assumed to be in the same base, but any number can be represented in a variable-base format where each digit has a different base. Such representations have been used for multilingual numerals and for solving sudoku puzzles where only changing the base from digit to digit could solve them. This is understood intuitively for decimal numbers between 0 and 1, which are sums of fractions, but is not commonly used or known for integers greater than 0. Representing the denominator or base (usually a subscript written beside the entire number) then becomes a matter of writing the base in subscript for each digit, or of using Goldbach biunary codes.
N (integer ≥ 0) = Ax + By + Cz, with A, B, C ≥ 0 and < x, y, z (or A, B, C > 0 and ≤ x, y, z for bijective notation)

M (decimal float, 0 < M < 1) = A/x + B/y + C/z, with A, B, C > 0 and < x, y, z, and x < y < z for non-trivial fractions,

where each digit in base n reduces the search space by a factor of 1/n, and the remainder is obtained by subtracting the exact fraction and continuing the operation.
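The integer case above is a mixed-radix expansion. A small illustrative helper (the name and interface are hypothetical, not from the original text):

```python
def mixed_radix(n: int, bases: list[int]) -> list[int]:
    """Expand a non-negative integer in a variable-base (mixed-radix)
    system, least significant digit first; each position has its own base."""
    digits = []
    for b in bases:
        digits.append(n % b)   # digit for this position, 0 <= digit < b
        n //= b                # carry the rest to higher positions
    return digits

# 11 in bases (2, 3, 5): 11 = 1*1 + 2*2 + 1*(2*3)
assert mixed_radix(11, [2, 3, 5]) == [1, 2, 1]
```

Each digit's place value is the product of all lower bases, generalizing the uniform powers of a fixed-base system.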
Note that in the M (decimal float) variation of this method, all digits are fractions that add up to M, so multiplying two decimal numbers stored in this representation is a matter of running DIGIT_COUNT1 × DIGIT_COUNT2 small multiplications in parallel, in any order, and adding the values. Alternatively, one can store the product in the form

NUMERATOR1 × NUMERATOR2 / (DENOMINATOR1 × DENOMINATOR2)

and not perform the operation until the value is required, or pre-compute the possible operations and perform no operations at all if the maximum number of digits / precision is limited to, say, around 64 bits.
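The deferred numerator/denominator form above is exactly what exact-rational arithmetic provides; Python's standard `fractions` module can sketch it:

```python
from fractions import Fraction

# Keeping numerator/denominator pairs and deferring the division keeps
# all arithmetic in exact integer math: numerators and denominators
# simply multiply.
a = Fraction(2, 5)
b = Fraction(3, 4)
product = a * b
assert product == Fraction(3, 10)   # 2*3 / (5*4), reduced
assert (product.numerator, product.denominator) == (3, 10)
```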
This allows integer math to be used for floating-point numbers and arbitrary-precision floats. Further, since choosing arbitrary denominators to approximately represent the decimal part of a float is an involved process (and representing them with n bits would be cumbersome), this method lends itself to small denominators: one can simply use sequences of increasing denominators, starting with

O (decimal float, 0 < O < 1) = A/x + B/x + C/y + D/z + E/z, with A, B, C, D, E > 0 and < x, y, z, and x < y < z for non-trivial fractions,

gaining some storage space but requiring a power operation and losing some parallelism.
Start with a denominator of 2 and keep writing Goldbach codes until a zero digit is encountered; then increase the base/denominator by one (to 3 in this case), find the numerator, write the Goldbach biunary code, and repeat with denominator 3 until another 0 is encountered, then increase the denominator to 4, and so on. Since 0 is a trivial fraction, no code is written for it; the reader knows the next denominator will be either the same or one higher, the latter implying a zero was found earlier and the base was increased. The advantage is that in bifocal images, a high-precision artifact from, say, interplanetary or sky-imagery data can be represented using continued fractions that increase in granularity with every digit, while the foreground is represented at a different precision. Furthermore, the increase in the denominator at each 0 results in a triviality where the next digit can only be 1 (1/2 only fits 1/3, not 2/3; 1/3 only fits 1/4, not 2/4 or 3/4, etc.), so one may even multiply the remainder by the base before increasing the denominator, increasing granularity and magnifying the image in a fractal-like logic.
One can use run-lengths of increasing-base codes, or adopt the simple rule that every digit uses an increasing base/denominator:

M (decimal float, 0 < M < 1) = A/x + B/y + C/z, with A, B, C > 0 and < x, y, z, and x < y < z for non-trivial fractions.

Simply omit the 0 and increase the base at every step. A reduction in the visible base implies a new number, so no separator is required. This increases the bit sizes somewhat, but those bits are low-entropy; one can even write a count of the digits first and then permute the digits in DIGIT_COUNT! ways to store DIGIT_COUNT! amounts of data practically for free (just the cost of a small digit count), because once the lowest boundary condition is reached numerically, no more symbols can be added.
Generalized unary coding
A generalized version of unary coding was presented by Subhash Kak to represent numbers much more efficiently than standard unary coding. Here is an example of generalized unary coding for integers from 0 through 15 that requires only 7 bits, where three bits are arbitrarily chosen (in place of the single one of standard unary) to mark the number. The representation is cyclic: markers are reused to represent higher integers in higher cycles.
| n | Unary code | Generalized unary |
|---|---|---|
| 0 | 0 | 0000000 |
| 1 | 10 | 0000111 |
| 2 | 110 | 0001110 |
| 3 | 1110 | 0011100 |
| 4 | 11110 | 0111000 |
| 5 | 111110 | 1110000 |
| 6 | 1111110 | 0010111 |
| 7 | 11111110 | 0101110 |
| 8 | 111111110 | 1011100 |
| 9 | 1111111110 | 0111001 |
| 10 | 11111111110 | 1110010 |
| 11 | 111111111110 | 0100111 |
| 12 | 1111111111110 | 1001110 |
| 13 | 11111111111110 | 0011101 |
| 14 | 111111111111110 | 0111010 |
| 15 | 1111111111111110 | 1110100 |
Generalized unary coding requires that the range of numbers to be represented be pre-specified, because this range determines the number of bits needed.