Unary coding, or the unary numeral system, is an entropy encoding that represents a natural number n as n ones followed by a zero (if "natural number" is taken to mean non-negative integer) or as n − 1 ones followed by a zero (if it is taken to mean strictly positive integer). The code length is thus n + 1 under the first convention and n under the second. Written vertically, a unary code rises and falls with n like the mercury in a thermometer, and so is sometimes called thermometer code. An alternative representation swaps the roles of ones and zeros, using n or n − 1 zeros followed by a one, without loss of generality. For example, the first ten unary codes are:

Unary code   Alternative   n (non-negative)   n (strictly positive)
0            1             0                  1
10           01            1                  2
110          001           2                  3
1110         0001          3                  4
11110        00001         4                  5
111110       000001        5                  6
1111110      0000001       6                  7
11111110     00000001      7                  8
111111110    000000001     8                  9
1111111110   0000000001    9                  10
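
As a minimal sketch of the first definition above (a non-negative n becomes n ones and a terminating zero), in Python:

```python
def unary_encode(n: int) -> str:
    """Non-negative convention: n ones followed by a terminating zero."""
    return "1" * n + "0"

def unary_decode(bits: str) -> int:
    """Count the ones before the first zero of a single code word."""
    return bits.index("0")

print(unary_encode(4))        # 11110
print(unary_decode("11110"))  # 4
```

The alternative representation is obtained by simply swapping "1" and "0" in both functions.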

Unary coding is an optimally efficient encoding for the following discrete probability distribution:

P(n) = 2^(−n)

for n = 1, 2, 3, ....
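
The match between code length and information content can be checked numerically: under this distribution the code word for n is n bits long (strictly-positive convention), which is exactly −log2 P(n), so the expected code length equals the source entropy. A small Python sketch:

```python
import math

# P(n) = 2^-n for n = 1, 2, 3, ...; the unary code word for n is n bits
# long, which equals -log2 P(n), so unary meets the entropy bound exactly.
ns = range(1, 60)  # the tail beyond n = 60 is numerically negligible
expected_length = sum(n * 2.0 ** -n for n in ns)
entropy = sum(-(2.0 ** -n) * math.log2(2.0 ** -n) for n in ns)
print(expected_length, entropy)  # both approach 2.0 bits
```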

In symbol-by-symbol coding, it is optimal for any geometric distribution

P(n) = (k − 1) k^(−n)

for which k ≥ φ = 1.6180339887... (the golden ratio), or, more generally, for any discrete distribution for which

P(n) ≥ P(n + 1) + P(n + 2)

for n = 1, 2, 3, .... Although unary is the optimal symbol-by-symbol code for such probability distributions, Golomb coding achieves better compression for the geometric distribution because it does not consider input symbols independently but implicitly groups them. For the same reason, arithmetic coding performs better for general probability distributions, as in the last case above.

Unary coding is both a prefix-free code and a self-synchronizing code.

Unary code in use today

Examples of unary code uses include:

  • In Golomb–Rice coding, unary encoding is used to encode the quotient part of the Golomb code word.
  • In UTF-8, unary encoding is used in the leading byte of a multi-byte sequence to indicate the number of bytes in the sequence so that the length of the sequence can be determined without examining the continuation bytes.
  • Instantaneously trained neural networks use unary coding for efficient data representation.
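
To illustrate the UTF-8 item above: the count of leading 1 bits in the first byte, terminated by a 0 bit, is a unary code for the sequence length (a single leading 1 instead marks a continuation byte). The helper name below is illustrative:

```python
def utf8_sequence_length(lead: int) -> int:
    """Read the unary-coded length from a UTF-8 leading byte.

    Leading 1 bits, terminated by a 0 bit, count the bytes in the
    sequence: 0xxxxxxx -> 1, 110xxxxx -> 2, 1110xxxx -> 3, 11110xxx -> 4.
    (A count of exactly one would mark a continuation byte, not a leader.)
    """
    ones = 0
    for i in range(7, -1, -1):   # scan from the most significant bit
        if lead & (1 << i):
            ones += 1
        else:
            break
    return 1 if ones == 0 else ones

print(utf8_sequence_length(0b11100010))  # 3, e.g. the first byte of '€'
```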

Unary coding in biological networks

Unary coding is used in the neural circuits responsible for birdsong production. The nucleus in the brain of songbirds that takes part in both the learning and the production of bird song is the HVC (high vocal center). The command signals for different notes in the birdsong emanate from different points in the HVC. This coding works as space coding, an efficient strategy for biological circuits because of its inherent simplicity and robustness.

Standard run-length unary codes

Any binary data can be described as alternating run-lengths of 1s and 0s. Each run conforms to the standard definition of unary, i.e. N repetitions of the same digit, 1 or 0. Every run-length by definition contains at least one digit and thus represents a strictly positive integer.

n    RL code      Next code
1    1            0
2    11           00
3    111          000
4    1111         0000
5    11111        00000
6    111111       000000
7    1111111      0000000
8    11111111     00000000
9    111111111    000000000
10   1111111111   0000000000
...

These codes are guaranteed to terminate validly on data of any length (when reading arbitrary data), and in the (separate) write cycle they allow an extra bit to be used and transmitted (the one occupied by the first bit) while keeping the overall and per-integer unary code lengths at exactly N.
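
The decomposition described above can be sketched with a run-length splitter; every bit string parses into runs whose lengths are strictly positive integers:

```python
from itertools import groupby

def run_lengths(bits: str) -> list:
    """Split a bit string into its alternating runs of identical digits;
    each run length is a strictly positive integer (a unary value)."""
    return [len(list(group)) for _, group in groupby(bits)]

print(run_lengths("1110010000"))  # [3, 2, 1, 4]
```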

Uniquely decodable non-prefix unary codes

The following is an example of a uniquely decodable unary code that is not a prefix code and is not instantaneously decodable:

n    Unary code   Alternative
1    1            0
2    10           01
3    100          011
4    1000         0111
5    10000        01111
6    100000       011111
7    1000000      0111111
8    10000000     01111111
9    100000000    011111111
10   1000000000   0111111111
...

These codes also allow (when writing unsigned integers) the use and transmission of an extra bit (the one occupied by the first bit). They are thus able to transmit m integers of N unary bits each plus 1 additional bit of information within m·N bits of data.
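
A sketch of a decoder for the first column above, where the code word for n is a 1 followed by n − 1 zeros. A word ends only when the next 1 (or the end of input) arrives, which is why the code is uniquely decodable but not instantaneous:

```python
def decode_stream(bits: str) -> list:
    """Decode a concatenation of the code words '1', '10', '100', ...
    (the word for n is a 1 followed by n - 1 zeros).  Each word is only
    known to be complete when the next 1 arrives, so one symbol of
    lookahead is needed: uniquely decodable, not instantaneous."""
    values, run = [], 0
    for b in bits:
        if b == "1":
            if run:
                values.append(run)  # the previous word is now complete
            run = 1                 # start a new word
        else:
            run += 1
    if run:
        values.append(run)          # flush the final word at end of input
    return values

print(decode_stream("100110"))  # [3, 1, 2]
```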

Symmetric unary codes

The following symmetric unary codes can be read and instantaneously decoded in either direction:

Unary code   Alternative   n (non-negative)   n (strictly positive)
1            0             0                  1
00           11            1                  2
010          101           2                  3
0110         1001          3                  4
01110        10001         4                  5
011110       100001        5                  6
0111110      1000001       6                  7
01111110     10000001      7                  8
011111110    100000001     8                  9
0111111110   1000000001    9                  10
...
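
Because each symmetric code word is a palindrome, the same decoder works from either end of the stream. A sketch for the non-negative column above ('1' → 0, '00' → 1, and '0', then n − 1 ones, then '0' → n for n ≥ 2):

```python
def decode_symmetric(bits: str) -> tuple:
    """Decode one code word from the front; return (n, bits consumed)."""
    if bits[0] == "1":
        return 0, 1                     # the word '1' encodes 0
    i = 1
    while bits[i] == "1":               # count the interior ones
        i += 1
    return (1 if i == 1 else i, i + 1)  # '00' -> 1, '01...10' -> n

stream = "0110" + "1" + "00"            # encodes 3, 0, 1
print(decode_symmetric(stream))         # (3, 4): first word, read forwards
print(decode_symmetric(stream[::-1]))   # (1, 2): last word, read backwards
```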

Canonical unary codes

For unary values where the maximum length is known, one can use canonical unary codes, which are of a somewhat numerical nature, unlike character-based codes. When the largest length n is known, numerical 0 (2^n − 1 in bijective numeration) or −1 (2^(2n) − 2 in bijective numeration) is assigned as the boundary condition, equivalent to repeating a digit the maximum n times; then, at each step, the number of digits is reduced by one and the result is increased or decreased by numerical 1.

n    Unary code   Bijective           Standard bijective   Alternative   Bijective           Standard bijective
1    1            2 (2)               1 (1)                0             1 (1)               2 (2)
2    01           12 (4)              11 (3)               10            21 (5)              22 (6)
3    001          112 (8)             111 (7)              110           221 (13)            222 (14)
4    0001         1112 (16)           1111 (15)            1110          2221 (29)           2222 (30)
5    00001        11112 (32)          11111 (31)           11110         22221 (61)          22222 (62)
6    000001       111112 (64)         111111 (63)          111110        222221 (125)        222222 (126)
7    0000001      1111112 (128)       1111111 (127)        1111110       2222221 (253)       2222222 (254)
8    00000001     11111112 (256)      11111111 (255)       11111110      22222221 (509)      22222222 (510)
9    000000001    111111112 (512)     111111111 (511)      111111110     222222221 (1021)    222222222 (1022)
10   0000000000   1111111111 (1023)   1111111111 (1023)    1111111111    2222222222 (2046)   2222222222 (2046)
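
One reading of the scheme above is that, because the maximum length is known in advance, the longest code word can drop its terminator and consist of zeros alone, while every shorter word keeps the ordinary form; the codes remain prefix-free. A hedged sketch under that reading (the function name is illustrative):

```python
def capped_unary(n: int, max_n: int) -> str:
    """Sketch of 'capped' canonical unary: with the largest value max_n
    known in advance, the word for max_n is all zeros with no terminating
    1 (the boundary condition); every smaller n keeps the usual form of
    n - 1 zeros followed by a 1.  The code set stays prefix-free."""
    if n == max_n:
        return "0" * max_n          # boundary word, terminator dropped
    return "0" * (n - 1) + "1"

for n in range(1, 5):
    print(n, capped_unary(n, 4))    # 1, 01, 001, 0000
```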

These canonical codes (distinct from canonical Huffman codes, where only the code book is discussed) are "canonical" in the sense that they are processed numerically, as numbers rather than strings. If more than one code of a given length is required per symbol length, i.e. additional non-unary codes of some length are needed, those are obtained by increasing or decreasing the values numerically without reducing the length. To obtain a set of codes of a certain length, start with the boundary condition, usually 0...0 for the largest and last code, and work upwards, increasing the value numerically until the codes of that length are exhausted; then chop a number of bits from the right and numerically increase the remaining number by 1 to get the first number of the next range (the set of numbers of a given length), and so on from the largest length to the smallest.

One can also start from the smallest length, giving it the numerical value of the largest boundary condition (such as 1 in binary or 2 in bijective), and work downwards, reducing the value numerically by 1 for each new code of the same length, or both reducing the value by 1 and increasing the length (appending 2 or 22, etc., or setting the lower limit of the new range to 2n + 1 in binary, 2n + 2 in bijective, 4n + 3 in binary when the next set grows by 2 bits, or 4n + 6 in bijective when it grows by 2 bits). This allows codes to be constructed without knowing the symbol frequencies beforehand. One can choose to grow the next set by 2 bits instead of 1, fitting three new symbols plus one slot for the possibility of further symbols, or two new symbols plus two slots for unknown symbols, to contain code lengths; once the lowest boundary condition is reached numerically, no more symbols can be added.

Both of these methods can be used to create canonical (numerical) codes equal in length to any Huffman code set (limited in length and code size), and the small-to-large method can be used for any Huffman or non-Huffman, limited- or arbitrary-length code set. The advantage is that the parser is then numerical instead of character-based; see the literature comparing the number of memory accesses.

Goldbach Biunary codes

Goldbach biunary codes (or lookalikes) are two unary codes appended together that can represent non-trivial fractions (those not equal to 0 or 1). The length of the first unary run, n, gives the numerator, and the total length of the two runs, n + m, gives the denominator; the total number of bits required to represent the entire fraction is therefore the denominator n + m itself, and no additional bits are needed to delimit the denominator.

Code    Value
11      1/2
101     1/3
011     2/3
1001    1/4
0101    2/4
0011    3/4
10001   1/5
01001   2/5
00101   3/5
00011   4/5
...
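
The pattern in the table can be read as two appended unary runs: a − 1 zeros and a 1 encode the numerator a, then b − a − 1 zeros and a 1 complete the denominator b, so the word is exactly b bits long. The helper below is an illustrative reading of that pattern, not a normative definition:

```python
def goldbach_biunary(a: int, b: int) -> str:
    """Sketch of a Goldbach biunary word for the fraction a/b (0 < a < b):
    a first unary run of a - 1 zeros ended by a 1 (the numerator), then a
    second run of b - a - 1 zeros ended by a 1, giving b bits in total."""
    assert 0 < a < b, "only non-trivial fractions are representable"
    return "0" * (a - 1) + "1" + "0" * (b - a - 1) + "1"

print(goldbach_biunary(1, 2))  # 11
print(goldbach_biunary(2, 3))  # 011
print(goldbach_biunary(3, 5))  # 00101
```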

Normally the base is omitted on a per-digit basis in number representation, because all digits are assumed to share one base, but any number can be represented in a variable-base format in which each digit has a different base. Such representations were used in the past as multilingual numbers, or for solving sudoku puzzles that only a change of base from digit to digit could solve. This is understood intuitively for decimal numbers between 0 and 1, which are sums of fractions, but is not commonly used or known for integers > 0. Representing the denominator or base (usually written as a subscript beside the entire number) then becomes a matter of writing the base in subscript for each digit, or of using Goldbach biunary codes.

N (integer ≥ 0) = Ax + By + Cz ; (A, B, C ≥ 0 and < x, y, z, or A, B, C > 0 and ≤ x, y, z for bijective notation)

M (decimal float, 0 < M < 1) = A/x + B/y + C/z ; (A, B, C > 0 and < x, y, z, and x < y < z for non-trivial fractions)

where each digit in base n reduces the search space by a factor of n; the remainder is obtained by subtracting the exact fraction and continuing the operation.

Do note that in the M (decimal float) variation of this method, all digits are fractions that add up to M, so multiplying two decimal numbers stored in this representation is a matter of running DIGIT_COUNT1 × DIGIT_COUNT2 small multiplications, in parallel and in any order, and adding the results. Alternatively, one can store the product in the

NUMERATOR1·NUMERATOR2 / (DENOMINATOR1·DENOMINATOR2)

format and not conduct the operation until the value is required, or precompute the possible operations and perform no operations at all if the maximum number of digits / precision is limited to, say, around 64 bits.

This allows integer math to be used for floating-point numbers and arbitrary-precision floats. Further, since using random denominators to approximately represent the decimal part of a float is an involved process (and representing them with n bits would be cumbersome), the method lends itself to small denominators: one can simply use sequences of increasing denominators, starting with:

O (decimal float, 0 < O < 1) = A/x, B/x, C/y, D/z, E/z ; (A, B, C, D, E > 0 and < x, y, z, and x < y < z for non-trivial fractions)

gaining some storage space, but requiring a power operation and losing some parallelism.

Starting with a denominator of 2, one keeps writing Goldbach codes until a zero is encountered, then increases the base/denominator by one (to 3), finds the numerator, writes the Goldbach biunary code, and repeats with denominator 3 until a 0 is encountered, then increases the denominator to 4, and so on. Since 0 is a trivial fraction, no code is written for it; the next denominator will either be the same or one higher, the latter implying that a zero was found earlier and the base was increased. The advantage is for bifocal images: a high-precision image artifact, say from interplanetary or sky-imagery data, can be represented using continued fractions of increasing granularity with every digit, while the foreground is represented at a different precision. Furthermore, the increase in the denominator at each 0 produces a triviality in which the next digit can only be 1 (1/2 admits only 1/3 and not 2/3; 1/3 admits only 1/4 and not 2/4 or 3/4), so one may even multiply the remainder by the base before increasing the denominator, increasing the granularity and magnifying the image in a fractal-like manner.

One can use run-lengths of increasing-base codes, or adopt the simple rule that every digit uses an increasing base/denominator:

M (decimal float, 0 < M < 1) = A/x + B/y + C/z ; (A, B, C > 0 and < x, y, z, and x < y < z for non-trivial fractions)

Simply omit the 0 and increase the base at every step. A reduction in the visible base implies a new number, so no separator is required. This increases the bit sizes somewhat, but those bits are low-entropy; alternatively, one can write a count of the digits first and then permute the digits in any of DIGIT_COUNT! ways, storing DIGIT_COUNT! worth of data practically for free (at just the small cost of the digit count).

Generalized unary coding

A generalized version of unary coding was presented by Subhash Kak to represent numbers much more efficiently than standard unary coding. Here is an example of generalized unary coding for the integers 0 through 15 that requires only 7 bits, where three bits are arbitrarily chosen in place of the single one of standard unary to mark the number. Note that the representation is cyclic: markers are used to represent higher integers in higher cycles.

n    Unary code         Generalized unary
0    0                  0000000
1    10                 0000111
2    110                0001110
3    1110               0011100
4    11110              0111000
5    111110             1110000
6    1111110            0010111
7    11111110           0101110
8    111111110          1011100
9    1111111110         0111001
10   11111111110        1110010
11   111111111110       0100111
12   1111111111110      1001110
13   11111111111110     0011101
14   111111111111110    0111010
15   1111111111111110   1110100

Generalized unary coding requires that the range of numbers to be represented be pre-specified, because this range determines the number of bits that are needed.
