Thursday, April 15, 2010

5.1.1. Using Lists as Stacks



The list methods make it very easy to use a list as a stack, where the last element added is the first element retrieved (“last-in, first-out”). To add an item to the top of the stack, use append(). To retrieve an item from the top of the stack, use pop() without an explicit index. For example:

>>> stack = [3, 4, 5]
>>> stack.append(6)
>>> stack.append(7)
>>> stack
[3, 4, 5, 6, 7]
>>> stack.pop()
7
>>> stack
[3, 4, 5, 6]
>>> stack.pop()
6
>>> stack.pop()
5
>>> stack
[3, 4]

Video results of searching for data structures

http://video.google.com/videoplay?docid=-6008732165729572263#docid=-4399947741817682002

Video results for data structures

http://video.google.com/videoplay?docid=-6008732165729572263#

The IEEE Standard for Floating-Point Arithmetic (IEEE 754)

The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is the most widely used standard for floating-point computation, and is followed by many hardware (CPU and FPU) and software implementations. Many computer languages allow or require that some or all arithmetic be carried out using IEEE 754 formats and operations. The current version is IEEE 754-2008, which was published in August 2008; it includes nearly all of the original IEEE 754-1985 and the IEEE Standard for Radix-Independent Floating-Point Arithmetic (IEEE 854-1987).

The standard defines

* arithmetic formats: sets of binary and decimal floating-point data, which consist of finite numbers (including signed zeros and subnormal numbers), infinities, and special 'not a number' values (NaNs)
* interchange formats: encodings (bit strings) that may be used to exchange floating-point data in an efficient and compact form
* rounding algorithms: methods to be used for rounding numbers during arithmetic and conversions
* operations: arithmetic and other operations on arithmetic formats
* exception handling: indications of exceptional conditions (such as division by zero, overflow, etc.)

The standard also includes extensive recommendations for advanced exception handling, additional operations (such as trigonometric functions), expression evaluation, and for achieving reproducible results.

The standard is derived from and replaces IEEE 754-1985, the previous version, following a seven-year revision process, chaired by Dan Zuras and edited by Mike Cowlishaw. The binary formats in the original standard are included in the new standard along with three new basic formats (one binary and two decimal). To conform to the current standard, an implementation must implement at least one of the basic formats as both an arithmetic format and an interchange format.
Contents

* 1 Formats
o 1.1 Basic formats
o 1.2 Arithmetic formats
o 1.3 Interchange formats
* 2 Rounding algorithms
o 2.1 Roundings to nearest
o 2.2 Directed roundings
* 3 Operations
* 4 Exception handling
* 5 Recommendations
o 5.1 Alternate exception handling
o 5.2 Recommended operations
o 5.3 Expression evaluation
o 5.4 Reproducibility
* 6 Character representation
* 7 See also
* 8 Further reading
* 9 External links

Formats

Formats in IEEE 754 describe sets of floating-point data and encodings for interchanging them.

A given format comprises:

* Finite numbers, which may be either base 2 (binary) or base 10 (decimal). Each finite number is most simply described by three integers: s = a sign (zero or one), c = a significand (or 'coefficient'), and q = an exponent. The numerical value of a finite number is

(−1)^s × c × b^q

where b is the base (2 or 10). For example, if the sign is 1 (indicating negative), the significand is 12345, the exponent is −3, and the base is 10, then the value of the number is −12.345 (see the sketch after this list).

* Two infinities: +∞ and −∞.

* Two kinds of NaN (quiet and signaling). A NaN may also carry a payload, intended for diagnostic information indicating the source of the NaN. The sign of a NaN has no meaning, but it may be predictable in some circumstances.
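As a concrete check of the value formula above, here is a minimal Python sketch; the function name decoded_value and the use of fractions.Fraction are illustrative choices, not part of the standard.

from fractions import Fraction

def decoded_value(s, c, q, b=10):
    # Value of a finite number with sign s (0 or 1), integer significand c,
    # integer exponent q, and base b, computed exactly with Fraction.
    return (-1) ** s * c * Fraction(b) ** q

# The example from the text: sign 1, significand 12345, exponent -3, base 10.
print(decoded_value(1, 12345, -3, 10))         # -2469/200
print(float(decoded_value(1, 12345, -3, 10)))  # -12.345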

The possible finite values that can be represented in a given format are determined by the base (b), the number of digits in the significand (precision, p), and the exponent parameter emax:

* c must be an integer in the range zero through b^p − 1 (e.g., if b = 10 and p = 7 then c is 0 through 9999999)
* q must be an integer such that 1 − emax ≤ q + p − 1 ≤ emax (e.g., if p = 7 and emax = 96 then q is −101 through 90).

Hence (for the example parameters) the smallest non-zero positive number that can be represented is 1×10^−101 and the largest is 9999999×10^90 (9.999999×10^96), and the full range of numbers is −9.999999×10^96 through 9.999999×10^96. The numbers closest to the inverse of these bounds (−1×10^−95 and 1×10^−95) are considered to be the smallest (in magnitude) normal numbers; non-zero numbers between these smallest numbers are called subnormal numbers.
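A small Python sketch of how b, p, and emax determine the bounds quoted above, using the decimal example (b = 10, p = 7, emax = 96); the helper name format_bounds is illustrative.

def format_bounds(b, p, emax):
    emin = 1 - emax                                # minimum exponent of a normal number
    largest = (b ** p - 1) * b ** (emax - p + 1)   # 9999999 x 10**90
    smallest_normal = b ** emin                    # 1 x 10**-95
    smallest_subnormal = b ** (emin - p + 1)       # 1 x 10**-101
    return smallest_subnormal, smallest_normal, largest

print(format_bounds(10, 7, 96))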

Zero values are finite values with significand 0. These are signed zeros: the sign bit specifies whether a zero is +0 (positive zero) or −0 (negative zero).
Basic formats

The standard defines five basic formats, named using their base and the number of bits used to encode them. A conforming implementation must fully implement at least one of the basic formats. There are three binary floating-point basic formats (which can be encoded using 32, 64 or 128 bits) and two decimal floating-point basic formats (which can be encoded using 64 or 128 bits). The binary32 and binary64 formats are the single and double formats of IEEE 754-1985.

The precision of the binary formats is one greater than the width of their significand field, because there is an implied (hidden) leading 1 bit.
Name        Common name            Base   Digits   E min     E max     Notes
binary16    Half precision         2      10+1     −14       +15       storage, not basic
binary32    Single precision       2      23+1     −126      +127
binary64    Double precision       2      52+1     −1022     +1023
binary128   Quadruple precision    2      112+1    −16382    +16383
decimal32                          10     7        −95       +96       storage, not basic
decimal64                          10     16       −383      +384
decimal128                         10     34       −6143     +6144

All the basic formats are available in both hardware and software implementations.
Arithmetic formats

A format that is just to be used for arithmetic and other operations need not have an encoding associated with it (that is, an implementation can use whatever internal representation it chooses); all that needs to be defined are its parameters (b, p, and emax). These parameters uniquely describe the set of finite numbers (combinations of sign, significand, and exponent) that it can represent.
Interchange formats

Interchange formats are intended for the exchange of floating-point data using a fixed-length bit-string for a given format.

For the exchange of binary floating-point numbers, interchange formats of length 16 bits, 32 bits, 64 bits, and any multiple of 32 bits ≥128 are defined. The 16-bit format is intended for the exchange or storage of small numbers (e.g., for graphics).

The encoding scheme for these binary interchange formats is the same as that of IEEE 754-1985: a sign bit, followed by w exponent bits that describe the exponent offset by a bias, and p−1 bits that describe the significand. The width of the exponent field for a k-bit format is computed as w = floor(4 log2(k))−13. The existing 64- and 128-bit formats follow this rule, but the 16- and 32-bit formats have more exponent bits (5 and 8) than this formula would provide (3 and 7, respectively).

As with IEEE 754-1985, there is some flexibility in the encoding of signaling NaNs.

For the exchange of decimal floating-point numbers, interchange formats of any multiple of 32 bits are defined.

The encoding scheme for the decimal interchange formats similarly encodes the sign, exponent, and significand, but uses a more complex approach to allow the significand to be encoded as a compressed sequence of decimal digits (using Densely Packed Decimal) or as a binary integer. In either case the set of numbers (combinations of sign, significand, and exponent) that may be encoded is identical, and signaling NaNs have a unique encoding (and the same set of possible payloads).
Rounding algorithms

The standard defines five rounding algorithms. The first two round to a nearest value; the others are called directed roundings:
Roundings to nearest

* Round to nearest, ties to even – rounds to the nearest value; if the number falls midway it is rounded to the nearest value with an even (zero) least significant bit, which occurs 50% of the time; this is the default algorithm for binary floating-point and the recommended default for decimal
* Round to nearest, ties away from zero – rounds to the nearest value; if the number falls midway it is rounded to the nearest value above (for positive numbers) or below (for negative numbers)

Directed roundings

* Round toward 0 – directed rounding towards zero (also called truncation)
* Round toward +∞ – directed rounding towards positive infinity
* Round toward −∞ – directed rounding towards negative infinity.
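Python's decimal module, which follows the same decimal arithmetic model, exposes rounding modes corresponding to the five attributes above; a brief sketch rounding the halfway value 2.5 and its negation to an integer:

from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP, ROUND_DOWN, ROUND_CEILING, ROUND_FLOOR

x, y = Decimal("2.5"), Decimal("-2.5")   # exactly midway between two integers
modes = [("nearest, ties to even", ROUND_HALF_EVEN),
         ("nearest, ties away from zero", ROUND_HALF_UP),
         ("toward 0 (truncation)", ROUND_DOWN),
         ("toward +infinity", ROUND_CEILING),
         ("toward -infinity", ROUND_FLOOR)]
for name, mode in modes:
    print(name, x.quantize(Decimal("1"), rounding=mode), y.quantize(Decimal("1"), rounding=mode))
# nearest/even gives 2 and -2; ties away from zero gives 3 and -3;
# truncation gives 2 and -2; ceiling gives 3 and -2; floor gives 2 and -3.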

Operations

Required operations for a supported arithmetic format (including the basic formats) include:

* Arithmetic operations (add, subtract, multiply, divide, square root, fused-multiply-add, remainder, etc.)
* Conversions (between formats, to and from strings, etc.)
* Scaling and (for decimal) quantizing
* Copying and manipulating the sign (abs, negate, etc.)
* Comparisons and total ordering
* Classification and testing for NaNs, etc.
* Testing and setting flags
* Miscellaneous operations.

Exception handling

The standard defines five exceptions, each of which has a corresponding status flag that (except in certain cases of underflow) is raised when the exception occurs. No other action is required, but alternatives are recommended (see below).

The five possible exceptions are:

* Invalid operation (e.g., square root of a negative number)
* Division by zero
* Overflow (a result is too large to be represented correctly)
* Underflow (a result is very small (outside the normal range) and is inexact)
* Inexact.

These are the same five exceptions as were defined in IEEE 754-1985.
Recommendations
Alternate exception handling

The standard recommends optional exception handling in various forms, including traps (exceptions that change the flow of control in some way) and other exception handling models which interrupt the flow, such as try/catch. The traps and other exception mechanisms remain optional, as they were in IEEE 754-1985.
Recommended operations

A new clause in the standard recommends fifty operations, including log, power, and trigonometric functions, that language standards should define. These are all optional (none are required in order to conform to the standard). The operations include some on dynamic modes for attributes, and also a set of reduction operations (sum, scaled product, etc.). All are required to supply a correctly rounded result, but they do not have to detect or report inexactness.
Expression evaluation

The standard recommends how language standards should specify the semantics of sequences of operations, and points out the subtleties of literal meanings and optimizations that change the value of a result.
Reproducibility

IEEE 754-1985 allowed many variations between implementations (such as the encoding of some values and the detection of certain exceptions). IEEE 754-2008 has tightened up many of these, but a few variations still remain (especially for binary formats). The reproducibility clause recommends that language standards should provide a means to write reproducible programs (i.e., programs that will produce the same result in all implementations of a language), and describes what needs to be done to achieve reproducible results.
Character representation

The standard requires operations to convert between basic formats and external character sequence formats. Conversions to and from a decimal character format are required for all formats. Conversion to an external character sequence must be such that conversion back using round to even will recover the original number. There is no requirement to preserve the payload of a NaN or signaling NaN, and conversion from the external character sequence may turn a signaling NaN into a quiet NaN.

Correctly rounded results can be obtained when converting to decimal and back to the binary format using:

5 decimal digits for binary16
9 decimal digits for binary32
17 decimal digits for binary64
36 decimal digits for binary128

For other binary formats the required number of decimal digits is

1 + ceiling(p × log10(2))

where p is the number of significand bits in the binary format, e.g. 24 bits for binary32.

The decimal representation will be preserved using:

7 decimal digits for decimal32
16 decimal digits for decimal64
34 decimal digits for decimal128

Correct rounding is only guaranteed for these numbers of decimal digits plus 3. For instance a conversion from a decimal external sequence with 8 decimal digits is guaranteed to be correctly rounded when converted to binary16, but conversion of a sequence of 9 decimal digits is not.
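Since Python floats are binary64 on all common platforms, the 17-digit figure can be checked directly; a quick sketch:

import math

p = 53                                   # significand bits of binary64
print(1 + math.ceil(p * math.log10(2)))  # 17

x = 0.1
s = format(x, ".17g")                    # 17 significant decimal digits
print(s, float(s) == x)                  # 0.10000000000000001 True: the round trip is exact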
See also

* IEEE 754-1985
* Intel 8087, an early implementation of the then-draft IEEE 754-1985
* Minifloat, low-precision binary floating-point formats following IEEE 754 principles
* half precision – single precision – double precision – quadruple precision
* IBM System z9, the first CPU to implement IEEE 754-2008 (using hardware microcode)
* z10, a CPU that implements IEEE 754-2008 fully in hardware
* POWER6, a CPU that implements IEEE 754-2008 fully in hardware


Double precision floating-point format

In computing, double precision is a computer number format, usually binary floating point, that occupies 8 bytes (64 bits on modern computers) in computer memory.

In IEEE 754-2008 the 64-bit base 2 format is officially referred to as binary64. It was called double in IEEE 754-1985.

One of the first programming languages to provide single- and double-precision floating-point data types was Fortran. Before the widespread adoption of IEEE 754-1985, the representation and properties of the double float data type depended on the computer manufacturer and computer model.

Double precision floating point provides a relative precision of about 16 decimal digits and a magnitude range from about 10^−308 to about 10^+308. In computers that have 64-bit floating-point arithmetic units, most numerical computing is done in double-precision floating point, since the use of single precision provides little speed advantage.[1][2]

Double precision is known as double in C, C++, C# and Java.[3] In ECMAScript, it is the only Number type.
IEEE 754 floating point precisions

16-bit: Half (binary16)
32-bit: Single (binary32), decimal32
64-bit: Double (binary64), decimal64
128-bit: Quadruple (binary128), decimal128
Contents

* 1 IEEE 754 double precision binary floating-point format: binary64
o 1.1 Exponent encoding
o 1.2 Double precision examples
* 2 See also
* 3 References
* 4 External links

IEEE 754 double precision binary floating-point format: binary64

The IEEE 754 standard specifies a binary64 as having:

* Sign bit: 1 bit
* Exponent width: 11 bits
* Significand precision: 53 bits (52 explicitly stored)

The format is written with the significand having an implicit lead bit of value 1, unless the exponent is stored with all zeros. Thus only 52 bits of the significand appear in the memory format, but the total precision is 53 bits (approximately 16 decimal digits, since log10(2^53) ≈ 15.955). The bits are laid out as follows:

[Figure: IEEE 754 double-precision (binary64) bit layout: 1 sign bit, 11 exponent bits, 52 fraction bits]
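The layout can be inspected from Python (whose float is binary64 on all common platforms) with the standard struct module; the helper name binary64_fields is illustrative.

import struct

def binary64_fields(x):
    # Split a Python float into its sign bit, 11-bit biased exponent,
    # and 52 explicitly stored significand (fraction) bits.
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF
    fraction = bits & ((1 << 52) - 1)
    return sign, exponent, fraction

print(binary64_fields(1.0))    # (0, 1023, 0): biased exponent 1023, true exponent 0
print(binary64_fields(-2.0))   # (1, 1024, 0): true exponent 1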
Exponent encoding

The double precision binary floating-point exponent is encoded using an offset binary representation, with the zero offset being 1023; this offset is known as the exponent bias in the IEEE 754 standard.

* Emin = 001H−3FFH = −1022
* Emax = 7FEH−3FFH = 1023
* Exponent bias = 3FFH = 1023

Thus, as defined by the offset binary representation, in order to get the true exponent the offset of 1023 has to be subtracted from the stored exponent. The value of Emax is 1023 instead of 1024 because an exponent consisting of all 1's is considered a special case[4].

The stored exponents 000H and 7FFH are interpreted specially.
Exponent           Significand zero     Significand non-zero       Equation
000H               zero, −0             subnormal numbers          (−1)^signbit × 2^−1022 × 0.significandbits
001H, ..., 7FEH    normalized value     normalized value           (−1)^signbit × 2^(exponentbits − 1023) × 1.significandbits
7FFH               ±infinity            NaN (quiet, signalling)

The minimum positive (subnormal) value is 2^−1074 ≈ 5 × 10^−324. The minimum positive normal value is 2^−1022 ≈ 2.225 × 10^−308. The maximum representable value is ≈ 1.79769 × 10^308.
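These extreme values can be reproduced exactly in Python, again relying on float being binary64:

import sys

print(2.0 ** -1074)                    # 5e-324, smallest positive subnormal
print(2.0 ** -1022)                    # 2.2250738585072014e-308, smallest positive normal
print((2 - 2 ** -52) * 2.0 ** 1023)    # 1.7976931348623157e+308, largest finite value
print(sys.float_info.min, sys.float_info.max)   # the same normal minimum and maximum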
Double precision examples

These examples are given in bit representation, in hexadecimal, of the floating point value. This includes the sign, (biased) exponent, and significand.

3ff0 0000 0000 0000 = 1
3ff0 0000 0000 0001 = 1.0000000000000002, the next higher number > 1
3ff0 0000 0000 0002 = 1.0000000000000004
4000 0000 0000 0000 = 2
c000 0000 0000 0000 = −2

7fef ffff ffff ffff ≈ 1.7976931348623157 × 10^308 (max double precision)

0000 0000 0000 0000 = 0
8000 0000 0000 0000 = −0

7ff0 0000 0000 0000 = infinity
fff0 0000 0000 0000 = -infinity

3fd5 5555 5555 5555 ≈ 1/3

By default, 1/3 rounds down instead of up as in single precision, because of the odd number of bits in the significand: the bits beyond the rounding point are 0101..., which is less than 1/2 of a unit in the last place.


Each of the 52 stored significand bits, from bit 51 down to bit 0, contributes a value to the fraction, starting at 0.5 for bit 51 and halving for each subsequent bit (the implicit leading bit contributes the value 1), as follows:

bit 51 = 0.5
bit 50 = 0.25
bit 49 = 0.125
bit 48 = 0.0625
.
.
bit 0 = 2^−52 ≈ 0.000000000000000222 (≈ 2.220446e−16)

In more detail:

Given the hexadecimal representation 3fd5 5555 5555 5555,
Sign = 0
Exponent = 0x3fd = 1021
Exponent bias = 1023 (above)
Mantissa = 0x5 5555 5555 5555
Value = 2^(Exponent − Exponent bias) × 1.Mantissa   (note the mantissa is kept in binary here, not converted to decimal)
      = 2^−2 × (0x15 5555 5555 5555 × 2^−52)
      = 2^−54 × 0x15 5555 5555 5555
      = 0.333333333333333314829616256247390992939472198486328125
      ≈ 1/3
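The worked example can be verified from Python, assuming (as on all common platforms) that float is IEEE 754 binary64:

import struct
from fractions import Fraction

third = 1.0 / 3.0
print(struct.pack(">d", third).hex())   # 3fd5555555555555, the bit pattern shown above
print(Fraction(third))                  # 6004799503160661/18014398509481984, i.e. 0x15555555555555 / 2**54
print(format(third, ".54f"))            # the exact decimal expansion given in the last step above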

String (computer science)


Formal theory


Let Σ be an alphabet, a non-empty finite set. Elements of Σ are called symbols or characters. A string (or word) over Σ is any finite sequence of characters from Σ. For example, if Σ = {0, 1}, then 0101 is a string over Σ.

The length of a string is the number of characters in the string (the length of the sequence) and can be any non-negative integer. The empty string is the unique string over Σ of length 0, and is denoted ε or λ.

The set of all strings over Σ of length n is denoted Σ^n. For example, if Σ = {0, 1}, then Σ^2 = {00, 01, 10, 11}. Note that Σ^0 = {ε} for any alphabet Σ.

The set of all strings over Σ of any length is the Kleene closure of Σ and is denoted Σ*. In terms of Σ^n,

Σ* = ⋃_{n ∈ ℕ} Σ^n

For example, if Σ = {0, 1}, Σ* = {ε, 0, 1, 00, 01, 10, 11, 000, 001, 010, 011, …}. Although Σ* itself is countably infinite, all elements of Σ* have finite length.

A set of strings over Σ (i.e. any subset of Σ*) is called a formal language over Σ. For example, if Σ = {0, 1}, the set of strings with an even number of zeros ({ε, 1, 00, 11, 001, 010, 100, 111, 0000, 0011, 0101, 0110, 1001, 1010, 1100, 1111, …}) is a formal language over Σ.
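A minimal Python sketch of Σ^n and a finite prefix of Σ*, for Σ = {0, 1}, using itertools.product:

from itertools import product

sigma = "01"

def strings_of_length(n):
    # All strings over sigma of length n, i.e. the set written Σ^n above.
    return ["".join(p) for p in product(sigma, repeat=n)]

print(strings_of_length(2))   # ['00', '01', '10', '11']
print(strings_of_length(0))   # ['']: Σ^0 contains only the empty string ε

# Σ* is infinite; enumerate it by increasing length up to some bound.
print([s for n in range(3) for s in strings_of_length(n)])
# ['', '0', '1', '00', '01', '10', '11']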

Concatenation and substrings

Concatenation is an important binary operation on Σ*. For any two strings s and t in Σ*, their concatenation is defined as the sequence of characters in s followed by the sequence of characters in t, and is denoted st. For example, if Σ = {a, b, …, z}, s = bear, and t = hug, then st = bearhug and ts = hugbear.

String concatenation is an associative, but non-commutative operation. The empty string serves as the identity element; for any string s, εs = sε = s. Therefore, the set Σ* and the concatenation operation form a monoid, the free monoid generated by Σ. In addition, the length function defines a monoid homomorphism from Σ* to the non-negative integers.

A string s is said to be a substring or factor of t if there exist (possibly empty) strings u and v such that t = usv. The relation "is a substring of" defines a partial order on Σ*, the least element of which is the empty string.

Lexicographical ordering

It is often necessary to define an ordering on the set of strings. If the alphabet Σ has a total order (cf. alphabetical order) one can define a total order on Σ* called lexicographical order. Note that since Σ is finite, it is always possible to define a well ordering on Σ and thus on Σ*. For example, if Σ = {0, 1} and 0 < 1, then the lexicographical order on Σ* includes the relationships ε < 0 < 00 < 01 < 1 < 10.
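With Σ = {0, 1} and 0 < 1, Python's built-in string comparison is exactly this lexicographical order, so sorted() reproduces it:

words = ["1", "0", "00", "010", "01", "", "10"]
print(sorted(words))   # ['', '0', '00', '01', '010', '1', '10']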

String operations

A number of additional operations on strings commonly occur in the formal theory. These are given in the article on string operations.

Topology

Strings admit the following interpretation as nodes on a graph:

  • Fixed length strings can be viewed as nodes on a hypercube;
  • Variable length strings (of finite length) can be viewed as nodes on the k-ary tree, where k is the number of symbols in Σ;
  • Infinite strings can be viewed as infinite paths on the k-ary tree.

The natural topology on the set of fixed length strings or variable length strings is the discrete topology, but the natural topology on the set of infinite strings is the limit topology, viewing the set of infinite strings as the inverse limit of the sets of finite strings. This is the construction used for the p-adic numbers and some constructions of the Cantor set, and yields the same topology.

String datatypes

A string datatype is a datatype modeled on the idea of a formal string. Strings are such an important and useful datatype that they are implemented in nearly every programming language. In some languages they are available as primitive types and in others as composite types. The syntax of most high-level programming languages allows for a string, usually quoted in some way, to represent an instance of a string datatype; such a meta-string is called a literal or string literal.

String length

Although formal strings can have an arbitrary (but finite) length, the length of strings in real languages is often constrained to an artificial maximum. In general, there are two types of string datatypes: fixed length strings which have a fixed maximum length and which use the same amount of memory whether this maximum is reached or not, and variable length strings whose length is not arbitrarily fixed and which use varying amounts of memory depending on their actual size. Most strings in modern programming languages are variable length strings. Despite the name, even variable length strings are limited in length; although, generally, the limit depends only on the amount of memory available.

Character encoding

Historically, string datatypes allocated one byte per character, and although the exact character set varied by region, character encodings were similar enough that programmers could generally get away with ignoring this — groups of character sets used by the same system in different regions usually either had a character in the same place, or did not have it at all. These character sets were typically based on ASCII or EBCDIC.

Logographic languages such as Chinese, Japanese, and Korean (known collectively as CJK) need far more than 256 characters (the limit of a one 8-bit byte per-character encoding) for reasonable representation. The normal solutions involved keeping single-byte representations for ASCII and using two-byte representations for CJK ideographs. Use of these with existing code led to problems with matching and cutting of strings, the severity of which depended on how the character encoding was designed. Some encodings, such as the EUC family, guarantee that a byte value in the ASCII range will only represent that ASCII character, making the encoding safe for systems that use those characters as field separators. Other encodings, such as ISO-2022 and Shift-JIS, do not make such guarantees, making matching on byte codes unsafe. These encodings were also not "self-synchronizing", so locating character boundaries required backing up to the start of a string, and pasting two strings together could corrupt the second string (these problems were much less severe with EUC, since any ASCII character re-synchronized the encoding).

Unicode has simplified the picture somewhat. Most programming languages have a datatype for Unicode strings (usually UTF-16, since such support was generally added before Unicode supplemental planes were introduced). Unicode's preferred byte stream format, UTF-8, is designed not to have the problems described above for older multibyte encodings. Both UTF-8 and UTF-16 require the programmer to know that the fixed-size code units are different from the "characters"; the main difficulty currently is incorrectly designed APIs that attempt to hide this difference.
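A short Python sketch of the code-unit versus character distinction described above:

s = "naïve 水"                           # ASCII letters, one accented letter, one CJK ideograph
print(len(s))                            # 7 characters (code points)
print(len(s.encode("utf-8")))            # 10 bytes: 'ï' needs 2 bytes, '水' needs 3
print(len(s.encode("utf-16-le")) // 2)   # 7 UTF-16 code units (all characters here are in the BMP)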

Implementations

Some languages like C++ implement strings as templates that can be used with any datatype, but this is the exception, not the rule.

Some languages, such as C++ and Ruby, normally allow the contents of a string to be changed after it has been created; these are termed mutable strings. In other languages, such as Java and Python, the value is fixed and a new string must be created if any alteration is to be made; these are termed immutable strings.

Strings are typically implemented as arrays of characters, in order to allow fast access to individual characters. A few languages such as Haskell implement them as linked lists instead.

Some languages, such as Prolog and Erlang, avoid implementing a dedicated string datatype at all, instead adopting the convention of representing strings as lists of character codes.

Representations

Representations of strings depend heavily on the choice of character repertoire and the method of character encoding. Older string implementations were designed to work with repertoire and encoding defined by ASCII, or more recent extensions like the ISO 8859 series. Modern implementations often use the extensive repertoire defined by Unicode along with a variety of complex encodings such as UTF-8 and UTF-16.

Most string implementations are very similar to variable-length arrays with the entries storing the character codes of corresponding characters. The principal difference is that, with certain encodings, a single logical character may take up more than one entry in the array. This happens for example with UTF-8, where single characters can take anywhere from one to four bytes. In these cases, the logical length of the string (number of characters) differs from the physical length of the array (number of bytes in use).

The length of a string can be stored implicitly by using a special terminating character; often this is the null character having value zero, a convention used and perpetuated by the popular C programming language[1]. Hence, this representation is commonly referred to as C string. The length of a string can also be stored explicitly, for example by prefixing the string with the length as a byte value — a convention used in Pascal; consequently some people call it a P-string.

In terminated strings, the terminating code is not an allowable character in any string.

The term bytestring usually indicates a general-purpose string of bytes — rather than strings of only (readable) characters, strings of bits, or such. Byte strings often imply that bytes can take any value and any data can be stored as-is, meaning that there should be no value interpreted as a termination value.

Here is an example of a null-terminated string stored in a 10-byte buffer, along with its ASCII representation:

F  R  A  N  K  NUL k  e  f  w
46 52 41 4E 4B 00  6B 65 66 77

The length of a string in the above example is 5 characters, but it occupies 6 bytes. Characters after the terminator do not form part of the representation; they may be either part of another string or just garbage. (Strings of this form are sometimes called ASCIZ strings, after the original assembly language directive used to declare them.)

Here is the equivalent (old style) Pascal string stored in a 10-byte buffer, along with its ASCII representation:

length F  R  A  N  K  k  e  f  w
05     46 52 41 4E 4B 6B 65 66 77
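A small Python sketch that parses both example buffers above; the helper names read_c_string and read_pascal_string are illustrative.

buf_c = bytes.fromhex("4652414e4b006b656677")        # FRANK, NUL terminator, then leftover bytes
buf_pascal = bytes.fromhex("054652414e4b6b656677")   # length byte 05, FRANK, then leftover bytes

def read_c_string(buf):
    # Bytes up to, but not including, the first NUL terminator.
    return buf[:buf.index(0)]

def read_pascal_string(buf):
    # The first byte holds the length; the string bytes follow it.
    return buf[1:1 + buf[0]]

print(read_c_string(buf_c).decode("ascii"))            # FRANK
print(read_pascal_string(buf_pascal).decode("ascii"))  # FRANK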

Both character termination and length codes limit strings: for example, C character arrays that contain null (NUL) characters cannot be handled directly by C string library functions, and strings using a length code are limited to the maximum value of the length code.

Both of these limitations can be overcome by clever programming, of course, but such workarounds are by definition not standard.

Historically, rough equivalents of the C termination method appear in both hardware and software. For example, "data processing" machines like the IBM 1401 used a special word mark bit to delimit strings at the left, where the operation would start at the right. This meant that while the IBM 1401 had a seven-bit word in "reality", almost no one ever thought to use this as a feature and override the assignment of the seventh bit in order to, for example, handle ASCII codes.

It is possible to create data structures, and functions that manipulate them, that do not have the problems associated with character termination and can in principle overcome length code bounds. It is also possible to optimize the string represented using techniques from run length encoding (replacing repeated characters by the character value and a length) and Hamming encoding.

While these representations are common, others are possible. Using ropes makes certain string operations, such as insertions, deletions, and concatenations more efficient.

Vectors

While character strings are very common uses of strings, a string in computer science may refer generically to any vector of homogeneously typed data. A string of bits or bytes, for example, may be used to represent data retrieved from a communications medium. This data may or may not be represented by a string-specific datatype, depending on the needs of the application, the desire of the programmer, and the capabilities of the programming language being used.

String processing algorithms

There are many algorithms for processing strings, each with various trade-offs. Some categories of algorithms include:

  • String searching algorithms for finding a given substring or pattern
  • String manipulation algorithms
  • Sorting algorithms
  • Regular expression algorithms
  • Parsing a string

Advanced string algorithms often employ complex mechanisms and data structures, among them suffix trees and finite state machines.

Character string oriented languages and utilities

Character strings are such a useful datatype that several languages have been designed in order to make string processing applications easy to write. Examples include the following languages:

  • awk
  • Icon
  • MUMPS
  • Perl
  • Rexx
  • Ruby
  • sed
  • SNOBOL
  • Tcl

Many UNIX utilities perform simple string manipulations and can be used to easily program some powerful string processing algorithms. Files and finite streams may be viewed as strings.

Some APIs like Multimedia Control Interface, embedded SQL or printf use strings to hold commands that will be interpreted.

Recent scripting programming languages, including Perl, Python, Ruby, and Tcl employ regular expressions to facilitate text operations.

Some languages such as Perl and Ruby support string interpolation, which permits arbitrary expressions to be evaluated and included in string literals.

Character string functions

String functions are used to manipulate a string or change or edit the contents of a string. They also are used to query information about a string. They are usually used within the context of a computer programming language.

The most basic example of a string function is the length(string) function, which returns the length of a string (not counting any terminator characters or any of the string's internal structural information) and does not modify the string. For example, length("hello world") returns 11.

Many string functions exist in other languages with similar or exactly the same syntax or parameters; for example, in many languages the length function is represented as len(string). Even though string functions are very useful, a programmer using them should be mindful that a string function in one language may, in another language, behave differently or have a similar or completely different name, parameters, syntax, and results.



In computer science, the term integer is used to refer to a data type which represents some finite subset of the mathematical integers. These are also known as integral data types[1].

Value and representation

The value of a datum with an integral type is the mathematical integer that it corresponds to. The representation of this datum is the way the value is stored in the computer’s memory. Integral types may be unsigned (capable of representing only non-negative integers) or signed (capable of representing negative integers as well).

The most common representation of a positive integer is a string of bits, using the binary numeral system. The order of the memory bytes storing the bits varies; see endianness. The width or precision of an integral type is the number of bits in its representation. An integral type with n bits can encode 2^n numbers; for example an unsigned type typically represents the non-negative values 0 through 2^n − 1.

There are three different ways to represent negative numbers in a binary numeral system. The most common is two's complement, which allows a signed integral type with n bits to represent numbers from −2^(n−1) through 2^(n−1) − 1. Two's complement arithmetic is convenient because there is a perfect one-to-one correspondence between representations and values, and because addition, subtraction and multiplication do not need to distinguish between signed and unsigned types. The other possibilities are sign-magnitude and ones' complement. See Signed number representations for details.
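A small Python sketch of the two's-complement mapping for an n-bit word; the helper names are illustrative.

def to_twos_complement(value, n):
    # Encode a signed integer as its n-bit two's-complement bit pattern.
    assert -(1 << (n - 1)) <= value < (1 << (n - 1)), "value out of range"
    return value & ((1 << n) - 1)

def from_twos_complement(pattern, n):
    # Decode an n-bit pattern back into a signed integer.
    return pattern - (1 << n) if pattern & (1 << (n - 1)) else pattern

print(format(to_twos_complement(-1, 8), "08b"))   # 11111111
print(from_twos_complement(0b11111111, 8))        # -1
print(from_twos_complement(0b10000000, 8))        # -128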

Another, rather different, representation for integers is binary-coded decimal, which is still commonly used in mainframe financial applications and in databases.

Common integral data types

Bits, name(s), range (assuming two's complement for signed), decimal digits, and typical uses:

4      nibble, semioctet
       Unsigned: 0 to +15 (2 decimal digits)
       Uses: binary-coded decimal, single decimal digit representation

8      byte, octet
       Signed: −128 to +127 (3 digits); Unsigned: 0 to +255 (3 digits)
       Uses: ASCII characters; C/C++ char; C/C++ int8_t, uint8_t; Java byte; C# byte (unsigned); T-SQL tinyint; Delphi Byte, Shortint

16     halfword, word, short, int
       Signed: −32,768 to +32,767 (5 digits); Unsigned: 0 to +65,535 (5 digits)
       Uses: UCS-2 characters; C/C++ short and int (minimum); C/C++ int16_t, uint16_t; Java short and char; C# short; Delphi Word, Smallint

32     word, long, doubleword, longword
       Signed: −2,147,483,648 to +2,147,483,647 (10 digits); Unsigned: 0 to +4,294,967,295 (10 digits)
       Uses: UCS-4 characters; Truecolor with alpha; C/C++ int (with some compilers)[2]; C/C++ long (on Windows and 32-bit DOS and Unix); C/C++ int32_t, uint32_t; Java int; C# int; FourCC; Delphi Cardinal, Integer, LongWord, LongInt

64     doubleword, longword, long long, quad, quadword, int64
       Signed: −9,223,372,036,854,775,808 to +9,223,372,036,854,775,807 (19 digits); Unsigned: 0 to +18,446,744,073,709,551,615 (20 digits)
       Uses: C/C++ long (on 64-bit Unix); C/C++ long long; C/C++ int64_t, uint64_t; Java long; C# long, ulong; Delphi Int64

128    octaword, double quadword
       Signed: −170,141,183,460,469,231,731,687,303,715,884,105,728 to +170,141,183,460,469,231,731,687,303,715,884,105,727 (39 digits); Unsigned: 0 to +340,282,366,920,938,463,463,374,607,431,768,211,455 (39 digits)
       Uses: in C, only available as a non-standard compiler-specific extension

n      n-bit integer (general case)
       Signed: −2^(n−1) to 2^(n−1) − 1 (⌈(n−1) log10(2)⌉ digits); Unsigned: 0 to 2^n − 1 (⌈n log10(2)⌉ digits)
       Uses: Ada range -2**(n-1)..2**(n-1)-1 (signed); Ada range 0..2**n-1 and Ada mod 2**n (unsigned)

Different CPUs support different integral data types. Typically, hardware will support both signed and unsigned types but only a small, fixed set of widths.

The table above lists integral type widths that are supported in hardware by common processors. High level programming languages provide more possibilities. It is common to have a ‘double width’ integral type that has twice as many bits as the biggest hardware-supported type. Many languages also have bit-field types (a specified number of bits, usually constrained to be less than the maximum hardware-supported width) and range types (which can represent only the integers in a specified range).

Some languages, such as Lisp, REXX and Haskell, support arbitrary precision integers (also known as infinite precision integers or bignums). Other languages which do not support this concept as a top-level construct may have libraries available to represent very large numbers using arrays of smaller variables, such as Java's BigInteger class or Perl's "bigint" package. These use as much of the computer’s memory as is necessary to store the numbers; however, a computer has only a finite amount of storage, so they too can only represent a finite subset of the mathematical integers. These schemes support very large numbers, for example one kilobyte of memory could be used to store numbers up to 2466 digits long.
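Python's built-in int is itself an arbitrary-precision ("bignum") type, so the behaviour described above can be observed directly:

n = 2 ** 1000                        # far larger than any fixed-width hardware integer
print(n.bit_length())                # 1001 bits
print(len(str(n)))                   # 302 decimal digits
print((10 ** 100 + 1) - 10 ** 100)   # 1: exact arithmetic, no overflow or rounding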

A Boolean or Flag type is a type which can represent only two values: 0 and 1, usually identified with false and true respectively. This type can be stored in memory using a single bit, but is often given a full byte for convenience of addressing and speed of access.

A four-bit quantity is known as a nibble (when eating, being smaller than a bite) or nybble (being a pun on the form of the word byte). One nibble corresponds to one digit in hexadecimal and holds one digit or a sign code in binary-coded decimal.

Bytes and octets

The term byte initially meant ‘the smallest addressable unit of memory’. In the past, 5-, 6-, 7-, 8-, and 9-bit bytes have all been used. There have also been computers that could address individual bits (‘bit-addressed machine’), or that could only address 16- or 32-bit quantities (‘word-addressed machine’). The term byte was usually not used at all in connection with bit- and word-addressed machines.

The term octet always refers to an 8-bit quantity. It is mostly used in the field of computer networking, where computers with different byte widths might have to communicate.

In modern usage byte almost invariably means eight bits, since all other sizes have fallen into disuse; thus byte has come to be synonymous with octet.

Words

The term 'word' is used for a small group of bits which are handled simultaneously by processors of a particular architecture. The size of a word is thus CPU-specific. Many different word sizes have been used, including 6-, 8-, 12-, 16-, 18-, 24-, 32-, 36-, 39-, 48-, 60-, and 64-bit. Since it is architectural, the size of a word is usually set by the first CPU in a family, rather than the characteristics of a later compatible CPU. The meanings of terms derived from word, such as longword, doubleword, quadword, and halfword, also vary with the CPU and OS.

As of 2008, practically all new desktop processors are of the x86-64 family and are capable of using 64-bit words, though they are often run in 32-bit mode. Embedded processors with 8- and 16-bit word sizes are still common. The 36-bit word length was common in the early days of computers.

One important cause of non-portability of software is the incorrect assumption that all computers have the same word size as the computer used by the programmer. For example, if a programmer using the C language incorrectly declares as int a variable that will be used to store values greater than 2^16 − 1, the program will fail on computers with 16-bit integers. That variable should have been declared as long, which has at least 32 bits on any computer. Programmers may also incorrectly assume that a pointer can be converted to an integer without loss of information, which may work on (some) 32-bit computers, but fail on 64-bit computers with 64-bit pointers and 32-bit integers.
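The platform dependence of these widths can be observed from Python through ctypes, which mirrors the C types of the underlying platform (the printed sizes vary by platform, which is exactly the portability point made above):

import ctypes

for name, ctype in [("short", ctypes.c_short), ("int", ctypes.c_int),
                    ("long", ctypes.c_long), ("long long", ctypes.c_longlong),
                    ("pointer", ctypes.c_void_p)]:
    print(name, ctypes.sizeof(ctype), "bytes")   # e.g. long is 4 bytes on 64-bit Windows, 8 on 64-bit Unix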

Character (computing)


In computer and machine-based telecommunications terminology, a character is a unit of information that roughly corresponds to a grapheme, grapheme-like unit, or symbol, such as in an alphabet or syllabary in the written form of a natural language.

Examples of characters include letters, numerical digits, and common punctuation marks (such as '.' or '-'). The concept also includes control characters, which do not correspond to symbols in a particular natural language, but rather to other bits of information used to process text in one or more languages. Examples of control characters include carriage return or tab, as well as instructions to printers or other devices that display or otherwise process text.

Characters are typically combined into strings.

Character encoding

Computers and communication equipment represent characters using a character encoding that assigns each character to something — an integer quantity represented by a sequence of bits, typically — that can be stored or transmitted through a network. Two examples of popular encodings are ASCII and the UTF-8 encoding for Unicode. While most character encodings map characters to numbers and/or bit sequences, Morse code instead represents characters using a series of electrical impulses of varying length.

Terminology

Historically, the term character has been widely used by industry professionals to refer to an encoded character, often as defined by the programming language or API. Likewise, character set has been widely used to refer to a specific repertoire of characters that have been mapped to specific bit sequences or numerical codes. The term glyph is used to describe a particular physical appearance of a character. Many computer fonts consist of glyphs that are indexed by the numerical code of the corresponding character.

With the advent and widespread acceptance of Unicode[1] and bit-agnostic encoding forms, a character is increasingly being seen as a unit of information, independent of any particular visual manifestation. The ISO/IEC 10646 (Unicode) International Standard defines character, or abstract character, as "a member of a set of elements used for the organisation, control, or representation of data". Unicode's definition supplements this with explanatory notes that encourage the reader to differentiate between characters, graphemes, and glyphs, among other things.

For example, the Hebrew letter aleph ("א") is often used by mathematicians to denote certain kinds of infinity, but it is also used in ordinary Hebrew text. In Unicode, these two uses are considered different characters, and have two different Unicode numerical identifiers ("code points"), though they may be rendered identically. Conversely, the Chinese logogram for water ("水") may have a slightly different appearance in Japanese texts than it does in Chinese texts, and local typefaces may reflect this. But nonetheless in Unicode they are considered the same character, and share the same code point.

The Unicode standard also differentiates between these abstract characters and coded characters or encoded characters that have been paired with numeric codes that facilitate their representation in computers.

Boolean data type

In computer science, the Boolean or logical data type is a primitive data type having one of two values: true or false, intended to represent the truth values of logic and Boolean algebra.

In programming languages that have a built-in Boolean data type, such as Pascal and Java, the comparison operators such as '>' and '≠' are usually defined to return a Boolean value. Also, conditional and iterative commands may be defined to test Boolean-valued expressions.

Languages without an explicit Boolean data type, like C and Lisp, may still represent truth values by some other data type. Lisp uses the empty list for false and any other value for true. C uses an integer type, with false represented as the zero value and true as any non-zero value (such as 1 or -1). Indeed, a Boolean variable may be regarded (and implemented) as a numerical variable with a single binary digit (bit), which can store only two values.

Most programming languages, even those that do not have an explicit Boolean type, have support for Boolean algebra operations such as conjunction (AND, &, *), disjunction (OR, |, +), equivalence (EQV, =, ==), exclusive or/non-equivalence (XOR, NEQV, ^, !=), and not (NOT, ~, !).
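
In Python, for instance, these operations are available both as keywords and as operators; a minimal sketch:

>>> a, b = True, False
>>> a and b, a or b, not a        # conjunction, disjunction, negation
(False, True, False)
>>> a & b, a | b, a ^ b           # bitwise forms of AND, OR, and XOR also accept Booleans
(False, True, True)
>>> a == b, a != b                # equivalence and non-equivalence
(False, True)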

In some languages, the Boolean data type is defined to include more than two truth values. For instance, the ISO SQL:1999 standard defines a Boolean value as being either true, false, or unknown (SQL null). Although this convention defies the law of excluded middle, it is often useful in programming.

In the lambda calculus model of computing, Boolean values can be represented as Church booleans.
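
A Church Boolean is simply a function that selects one of its two arguments; the sketch below uses Python lambdas, with if_then_else as a helper name chosen here for illustration:

>>> true  = lambda x: lambda y: x               # Church "true" selects its first argument
>>> false = lambda x: lambda y: y               # Church "false" selects its second argument
>>> if_then_else = lambda b: lambda x: lambda y: b(x)(y)
>>> if_then_else(true)('yes')('no')
'yes'
>>> if_then_else(false)('yes')('no')
'no'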

History

One of the earliest languages to provide an explicit Boolean data type was Algol 60 (1960), with values true and false and logical operators denoted by the symbols '∧' (and), '∨' (or), '⊃' (implies), '≡' (equivalence), and '¬' (not). Due to input device limitations of the time, however, most compilers used alternative representations for these operators, such as AND or 'AND'. This approach ("Boolean is a separate built-in primitive data type") was adopted by many later languages, such as ALGOL 68 (1970) [1], Java, and C#.

The first version of FORTRAN (1957) and its successor FORTRAN II (1958) did not have logical values or operations; even the conditional IF statement took an arithmetic expression and branched to one of three locations according to its sign. FORTRAN IV (1962), however, followed the Algol 60 example by providing a Boolean data type (LOGICAL), truth literals (.TRUE. and .FALSE.), Boolean-valued numeric comparison operators (.EQ., .GT., etc.), and logical operators (.NOT., .AND., .OR.). In FORMAT statements, a specific control character ('L') was provided for the parsing or formatting of logical values.[2]

The Lisp programming language (1958) never had a built-in Boolean data type. Instead, conditional constructs like cond assume that the logical value "false" is represented by the empty list (), which is defined to be the same as the special atom nil or NIL, whereas any other s-expression is interpreted as "true". For convenience, most modern dialects of Lisp predefine the atom t to have value t, so that t can be used as a mnemonic notation for "true". This approach ("any value can be used as a Boolean value") was retained in most Lisp dialects (Common Lisp, Scheme, Emacs Lisp), and similar models were adopted by many scripting languages, although which values are interpreted as "false" and which as "true" varies from language to language. In Scheme, for example, the "false" value is a distinct object rather than the empty list, so the empty list is interpreted as "true". In Python, a numeric value of zero (integer or fractional), the null value (None), and empty containers (strings, lists, sets, etc.) are considered Boolean false; all other values are considered Boolean true by default. In the Ruby programming language, on the other hand, only the null object (nil) and a special false object are "false"; everything else (including the integer 0) is "true".
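
The Python behaviour described above can be checked directly in the interpreter (assuming Python 3):

>>> bool(0), bool(0.0), bool(None), bool(''), bool([])
(False, False, False, False, False)
>>> bool(-1), bool('false'), bool([0])    # non-zero numbers and non-empty containers are true
(True, True, True)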

Some languages use a variant of this approach: they do have a distinct built-in Boolean data type (or at least built-in distinctive values for "false" and "true"), but any value is automatically converted to a Boolean value when used in a context that requires one. This is the approach used by PHP, JavaScript, and Python since version 2.3 (whose bool type is a subclass of int, so Boolean values behave as 1 and 0 in arithmetic contexts).
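
In Python, both the arithmetic behaviour and the automatic conversion are easy to observe; a minimal sketch, assuming Python 3:

>>> isinstance(True, int)        # bool is a subclass of int
True
>>> True + True                  # Booleans behave as 1 and 0 in arithmetic
2
>>> if 'non-empty string':       # any value is converted where a Boolean is required
...     print('treated as true')
...
treated as true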

The initial standards for the C programming language (1972) provided no Boolean type; to this day, Boolean values are commonly represented by integers (ints) in C programs. The comparison operators ('>', '==', etc.) are defined to return a signed integer (int) result, either 0 (for false) or 1 (for true). The same convention is assumed by the logical operators ('&&', '||', '!', etc.) and condition-testing statements ('if', 'while'). Thus logical values can be stored in integer variables and used anywhere integers would be valid, including in indexing, arithmetic, parsing, and formatting. This approach ("Boolean values are just integers") was retained in all later versions of C. Later dialects added more explicit support: C99 defines a dedicated Boolean type (_Bool, with bool, true, and false macros in <stdbool.h>), and Objective-C provides a BOOL type with YES and NO constants, both still backed by integer representations. Visual Basic uses a similar approach. C++ has a separate Boolean data type ('bool'), but with automatic conversions from scalar and pointer values that are very similar to those of C. This approach was also adopted by many later languages, especially by some scripting languages such as Awk and Perl. One problem with this approach is that the tests if(t==TRUE){...} and if(t) are not equivalent: if t holds any non-zero value other than 1, the first test fails while the second succeeds.

The Pascal programming language (1970) introduced the concept of programmer-defined enumerated types. A built-in Boolean data type was then provided as a predefined enumerated type with values FALSE and TRUE. By definition, all comparisons, logical operations, and conditional statements applied to and/or yielded Boolean values. Otherwise, the Boolean type had all the facilities available for enumerated types in general, such as ordering and use as indices. On the other hand, the conversion between Booleans and integers (or any other types) still required explicit tests or function calls, as in Algol 60. This approach ("Boolean is an enumerated type") was adopted by most later languages which had enumerated types, such as Modula, Ada and Haskell.

After enumerated types ('enum's) were added to the ANSI version of C (1989), many C programmers got used to defining their own Boolean types that way, for readability. However, enumerated types are equivalent to integers according to the language standards, so the effective identity between Booleans and integers still holds for C programs.

Special approaches

In recent versions of Python, user-defined objects may specify their own truth value by providing a __bool__ method (called __nonzero__ in Python 2).[3]
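
A minimal sketch (the class name Basket is made up for illustration; assuming Python 3):

>>> class Basket:
...     def __init__(self, items):
...         self.items = items
...     def __bool__(self):              # __nonzero__ in Python 2
...         return len(self.items) > 0
...
>>> bool(Basket([]))
False
>>> bool(Basket(['apple']))
True
>>> if Basket(['apple']):                # used directly in a condition
...     print('non-empty')
...
non-empty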

JavaScript has primitive Boolean values (true and false) as well as a Boolean object class that can be used as a wrapper for storing and handling them. However, perhaps surprisingly, such a Boolean object is automatically interpreted as "true" in Boolean contexts, even if its stored value is false, because every object counts as a true value.

As of the 1999 standard, SQL specifies a Boolean data type with the truth values true, false, and unknown; a Boolean column may also be null, and vendors may treat unknown and null interchangeably.[4] Because of this inconsistency, most SQL implementations (with the notable exception of PostgreSQL[5]) use other data types (such as bit, byte, or character) to simulate Boolean values.