A few CPU manufacturers have assigned bit numbers the opposite way (which is not the same as different endianness). By contrast, the three most significant bits (MSBs) stay unchanged (000 to 000).
Because of this volatility, the least significant bits are frequently employed in pseudorandom number generators, steganographic tools, hash functions and checksums. In digital steganography, sensitive messages may be concealed by manipulating and storing information in the least significant bits of an image or a sound file.
This allows the storage or transfer of digital information to remain concealed. The meaning is parallel to the above: it is the byte (or octet) in that position of a multi-byte number which has the least potential value.
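The least-significant-bit embedding described above can be sketched in a few lines. This is a minimal illustration, not any particular tool's method; the function names and the 64-byte cover buffer are invented for the example.

```python
def hide(cover: bytearray, message: bytes) -> bytearray:
    """Overwrite the LSB of each cover byte with one message bit (MSB first)."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for message")
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit   # clear the LSB, then set it to the message bit
    return stego

def reveal(stego: bytearray, n_bytes: int) -> bytes:
    """Reassemble n_bytes of message from the LSBs of the stego data."""
    out = bytearray()
    for i in range(n_bytes):
        byte = 0
        for bit_index in range(8):
            byte = (byte << 1) | (stego[i * 8 + bit_index] & 1)
        out.append(byte)
    return bytes(out)

cover = bytearray(range(64))              # 64 "sample" bytes of cover data
stego = hide(cover, b"Hi")
assert reveal(stego, 2) == b"Hi"
# Each cover byte changes by at most 1, which is why the change is hard to notice:
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
```

Because only the lowest-valued bit of each sample is altered, the payload is invisible to casual inspection of the image or sound data.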
To avoid this ambiguity, the less abbreviated terms “ls bit” or “ls byte” may be used. The unsigned binary representation of decimal 149, with the MSB highlighted.
To avoid this ambiguity, the less abbreviated terms “ms bit” or “ms byte” are often used. Most significant bit first means that the most significant bit will arrive first: hence e.g. the hexadecimal number 0x12, 00010010 in binary representation, will arrive as the sequence 0 0 0 1 0 0 1 0.
Least significant bit first means that the least significant bit will arrive first: hence e.g. the same hexadecimal number 0x12, again 00010010 in binary representation, will arrive as the (reversed) sequence 0 1 0 0 1 0 0 0. The Fortran BTEST function uses LSB 0 numbering.
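The two transmission orders above can be reproduced directly. A small sketch, using 0x12 (binary 00010010) as in the text; the helper names are illustrative:

```python
def msb_first(byte: int) -> list:
    """Bits of a byte in most-significant-bit-first order."""
    return [(byte >> i) & 1 for i in range(7, -1, -1)]

def lsb_first(byte: int) -> list:
    """Bits of a byte in least-significant-bit-first order."""
    return [(byte >> i) & 1 for i in range(8)]

assert msb_first(0x12) == [0, 0, 0, 1, 0, 0, 1, 0]   # as transmitted MSB first
assert lsb_first(0x12) == [0, 1, 0, 0, 1, 0, 0, 0]   # same byte, reversed order
```

The second sequence is simply the first read back to front, which is why serial protocols must agree on bit order before any byte can be interpreted.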
In this era, bit groupings in the instruction stream were often referred to as syllables or slabs, before the term byte became common. The international standard IEC 80000-13 codified this common meaning.
Modern architectures typically use 32- or 64-bit words, built of four or eight bytes. The term byte was coined by Werner Buchholz in June 1956, during the early design phase for the IBM Stretch computer, which had addressing to the bit and variable field length (VFL) instructions with a byte size encoded in the instruction.
It is a deliberate respelling of bite to avoid accidental mutation to bit. Another origin of byte for bit groups smaller than a computer's word size, and in particular groups of four bits, is on record by Louis G. Dooley, who claimed he coined the term while working with Jules Schwartz and Dick Beeler on an air defense system called SAGE at MIT Lincoln Laboratory in 1956 or 1957, which was jointly developed by Rand, MIT, and IBM.
Later on, Schwartz's language JOVIAL actually used the term, but the author recalled vaguely that it was derived from AN/FSQ-31. Early computers used a variety of four-bit binary-coded decimal (BCD) representations and the six-bit codes for printable graphic patterns common in the U.S. Army (FIELDATA) and Navy.
These representations included alphanumeric characters and special graphical symbols. These sets were expanded in 1963 to seven bits of coding, called the American Standard Code for Information Interchange (ASCII) as the Federal Information Processing Standard, which replaced the incompatible teleprinter codes in use by different branches of the U.S. government and universities during the 1960s.
ASCII included the distinction of upper- and lowercase alphabets and a set of control characters to facilitate the transmission of written language as well as printing device functions, such as page advance and line feed, and the physical or logical control of data flow over the transmission media. During the early 1960s, while also active in ASCII standardization, IBM simultaneously introduced in its product line of System/360 the eight-bit Extended Binary Coded Decimal Interchange Code (EBCDIC), an expansion of their six-bit binary-coded decimal (BCDIC) representations used in earlier card punches.
The prominence of the System/360 led to the ubiquitous adoption of the eight-bit storage size, while in detail the EBCDIC and ASCII encoding schemes are different. The development of eight-bit microprocessors in the 1970s popularized this storage size.
Microprocessors such as the Intel 8008, the direct predecessor of the 8080 and the 8086, used in early personal computers, could also perform a few operations on the four-bit pairs in a byte, such as the decimal-add-adjust (DAA) instruction. A four-bit quantity is often called a nibble, also nybble, which is conveniently represented by a single hexadecimal digit.
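The correspondence between nibbles and hexadecimal digits can be shown in a couple of lines. A minimal sketch; the function name is invented for the example:

```python
def nibbles(byte: int) -> tuple:
    """Split a byte into its high and low four-bit nibbles."""
    return byte >> 4, byte & 0x0F

high, low = nibbles(0xB7)
assert (high, low) == (0xB, 0x7)        # one nibble per hexadecimal digit
assert f"{high:X}{low:X}" == "B7"       # the two nibbles spell the hex byte
```

Instructions like DAA work on exactly these two four-bit halves, treating each as one packed decimal digit.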
The term octet is used to unambiguously specify a size of eight bits. The exact origin of the term is unclear, but it can be found in British, Dutch, and German sources of the 1960s and 1970s, and throughout the documentation of Philips mainframe computers.
The unit symbol for the byte is specified in IEC 80000-13, IEEE 1541 and the Metric Interchange Format as the upper-case character B. In the International System of Quantities (ISQ), B is the symbol of the bel, a unit of logarithmic power ratio named after Alexander Graham Bell, creating a conflict with the IEC specification.
However, little danger of confusion exists, because the bel is a rarely used unit. It is used primarily in its decadic fraction, the decibel (dB), for signal strength and sound pressure level measurements, while a unit for one tenth of a byte, the decibyte, and other fractions, are only used in derived units, such as transmission rates.
The lowercase letter o for octet is defined as the symbol for octet in IEC 80000-13 and is commonly used in languages such as French and Romanian, and is also combined with metric prefixes for multiples, for example ko and Mo. The usage of the term octad(e) for eight bits is no longer common.
More than one system exists to define larger units based on the byte. Systems based on powers of 10 use standard SI prefixes (kilo, mega, giga, ...) and their corresponding symbols (k, M, G, ...).
The IEC standard defines eight such multiples, up to 1 yottabyte (YB), equal to 1000^8 bytes. A system of units based on powers of 2 in which 1 kibibyte (KiB) is equal to 1024 (i.e., 2^10) bytes is defined by international standard IEC 80000-13 and supported by national and international standards bodies (BIPM, IEC, NIST).
The IEC standard defines eight such multiples, up to 1 yobibyte (YiB), equal to 1024^8 bytes. An alternate system of nomenclature for the same units, in which 1 kilobyte (KB) is equal to 1024 bytes, 1 megabyte (MB) is equal to 1024^2 bytes and 1 gigabyte (GB) is equal to 1024^3 bytes (but not higher multiples) is defined by a 1990s JEDEC standard.
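The gap between the two systems, and how it widens with each prefix, is easy to verify numerically. A small sketch under the definitions above:

```python
# Decimal (SI) and binary (IEC) multiples of the byte.
decimal_prefixes = {"kB": 1000**1, "MB": 1000**2, "GB": 1000**3, "YB": 1000**8}
binary_prefixes = {"KiB": 1024**1, "MiB": 1024**2, "GiB": 1024**3, "YiB": 1024**8}

assert binary_prefixes["KiB"] - decimal_prefixes["kB"] == 24   # only 24 bytes apart

# The relative difference grows with each power of the prefix:
for n, name in [(1, "kilo/kibi"), (2, "mega/mebi"), (3, "giga/gibi"), (8, "yotta/yobi")]:
    gap = (1024**n - 1000**n) / 1000**n
    print(f"{name}: binary exceeds decimal by {gap:.1%}")
# 2.4% at kilo/kibi, rising to about 20.9% at yotta/yobi
```

This widening discrepancy is why 1024 could pass as "approximately 1000" at the kilobyte scale but becomes a large error at terabyte scale and beyond.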
The JEDEC convention is prominently used by the Microsoft Windows operating system and for random-access memory capacity, such as main memory and CPU cache size, and in marketing and billing by telecommunication companies, such as Vodafone, AT&T, Orange and Telstra. The percentage difference between decimal and binary interpretations of the unit prefixes grows with increasing storage size. Computer memory has a binary architecture, making a definition of memory units based on powers of 2 most practical.
The use of the metric prefix kilo for binary multiples arose as a convenience, because 1024 is approximately 1000. This definition was popular in early decades of personal computing, with products like the Tandon 5¼-inch DD floppy format (holding 368,640 bytes) being advertised as “360 KB”, following the 1024-byte convention.
The Shugart SA-400 5¼-inch floppy disk held 109,375 bytes unformatted, and was advertised as “110 Kbyte”, using the 1000 convention. Likewise, the 8-inch DEC RX01 floppy (1975) held 256,256 bytes formatted, and was advertised as “256k”.
Other disks were advertised using a mixture of the two definitions: notably, 3½-inch HD disks advertised as “1.44 MB” in fact have a capacity of 1,440 KiB, the equivalent of 1.47 MB or 1.41 MiB. In December 1998, the IEC addressed such multiple usages and definitions by creating prefixes such as kibi, mebi, gibi, etc., to unambiguously denote powers of 1024.
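The arithmetic behind the mixed “1.44 MB” designation can be checked directly: the capacity is 1,440 KiB, which matches neither a decimal nor a binary megabyte.

```python
capacity = 1440 * 1024                         # 1,440 KiB = 1,474,560 bytes
assert capacity == 1_474_560
assert round(capacity / 1000**2, 2) == 1.47    # decimal megabytes (MB)
assert round(capacity / 1024**2, 2) == 1.41    # binary mebibytes (MiB)
```

The advertised “1.44” only appears if the kibi factor (1024) is applied once and the kilo factor (1000) once, mixing the two conventions in a single number.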
These prefixes are now part of the International System of Quantities. The IEC further specified that the kilobyte should only be used to refer to 1000 bytes.
The IEC adopted the proposal and published the standard in January 1999. In 1999, Donald Knuth suggested calling the kibibyte a “large kilobyte” (KKB).
Western Digital settled the challenge and added explicit disclaimers to products that the usable capacity may differ from the advertised capacity. Seagate was sued on similar grounds and also settled.
The C and C++ programming languages define byte as an addressable unit of data storage large enough to hold any member of the basic character set of the execution environment (clause 3.6 of the C standard). The C standard requires that the integral data type unsigned char must hold at least 256 different values, and is represented by at least eight bits (clause 5.2.4.2.1).
Various implementations of C and C++ reserve 8, 9, 16, 32, or 36 bits for the storage of a byte. In addition, the C and C++ standards require that there are no gaps between two bytes. This means every bit in memory is part of a byte.
Java's primitive byte data type is always defined as consisting of 8 bits and being a signed data type, holding values from −128 to 127. .NET programming languages, such as C#, define both an unsigned byte and a signed sbyte, holding values from 0 to 255 and from −128 to 127, respectively.
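The signed and unsigned ranges above follow from reading the same 8 bits either plainly or as two's complement. A small sketch (the helper name is invented for the example):

```python
def as_signed(bit_pattern: int) -> int:
    """Interpret an 8-bit pattern (0..255) as a two's-complement signed byte."""
    return bit_pattern - 256 if bit_pattern >= 128 else bit_pattern

assert as_signed(0b01111111) == 127    # largest signed byte value
assert as_signed(0b10000000) == -128   # smallest signed byte value
assert as_signed(0b11111111) == -1
assert 0b11111111 == 255               # same bits read as unsigned
```

Java's byte and C#'s sbyte use the signed reading; C#'s byte uses the unsigned one, so the same stored bit pattern yields different numeric values in each type.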
^ Blaauw, Gerrit Anne; Brooks, Jr., Frederick Phillips; Buchholz, Werner (1962), “4: Natural Data Units” (PDF), in Buchholz, Werner (ed.), Planning a Computer System – Project Stretch, McGraw-Hill, pp. 39–40, LCCN 61-10466, archived from the original (PDF) on 2017-04-03, retrieved 2017-04-03, Terms used here to describe the structure imposed by the machine design, in addition to bit, are listed below.
Block refers to the number of words transmitted to or from an input-output unit in response to a single input-output instruction. Block size is a structural property of an input-output unit; it may have been fixed by the design or left to be varied by the program.
^ Bemer, Robert William (1959), “A proposal for a generalized card code of 256 characters”, Communications of the ACM, 2 (9): 19–23, doi:10.1145/368424.368435 ^ Postel, J. Retrieved 28 August 2021. Octet: An eight bit byte.
Most important, from the point of view of editing, will be the ability to handle any characters or digits, from 1 to 6 bits long. Figure 2 shows the Shift Matrix to be used to convert a 60-bit word, coming from Memory in parallel, into characters, or 'bytes' as we have called them, to be sent to the Adder serially.
The 60 bits are dumped into magnetic cores on six different levels. Thus, if a 1 comes out of position 9, it appears in all six cores underneath.
Assume that it is desired to operate on 4 bit decimal digits, starting at the right. An analogous matrix arrangement is used to change from serial to parallel operation at the output of the adder.
The first reference found in the files was contained in an internal memo written in June 1956 during the early days of developing Stretch. A byte was described as consisting of any number of parallel bits from one to six.
The possibility of going to 8-bit bytes was considered in August 1956 and incorporated in the design of Stretch shortly thereafter. The first published reference to the term occurred in 1959 in a paper “Processing Data in Bits and Pieces” by G A Blaauw, F P Brooks Jr and W Buchholz in the IRE Transactions on Electronic Computers, June 1959, page 121.
The notions of that paper were elaborated in Chapter 4 of Planning a Computer System (Project Stretch), edited by W Buchholz, McGraw-Hill Book Company (1962). The rationale for coining the term was explained there on page 40 as follows: Byte denotes a group of bits used to encode a character, or the number of bits transmitted in parallel to and from input-output units.
System/360 took over many of the Stretch concepts, including the basic byte and word sizes, which are powers of 2. For economy, however, the byte size was fixed at the 8 bit maximum, and addressing at the bit level was replaced by byte addressing.
1956 Summer: Gerrit Blaauw, Fred Brooks, Werner Buchholz, John Cocke and Jim Pomerene join the Stretch team. 1956 July: In a report Werner Buchholz lists the advantages of a 64-bit word length for Stretch.
With present applications, 1, 4, and 6 bits are the really important cases. However, the LINK Computer can be equipped to edit out these gaps and to permit handling of bytes which are split between words.
The resultant gaps can be edited out later by programming. ^ Raymond, Eric Steven (2017). Especially when we started to think about word processing, which would require both upper and lower case.
But long before that, when I headed software operations for Cie. Bull in France in 1965–66, I insisted that 'byte' be deprecated in favor of 'octet'. It is justified by new communications methods that can carry 16, 32, 64, and even 128 bits in parallel.
But some foolish people now refer to a “16-bit byte” because of this parallel transfer, which is visible in the UNICODE set. ^ Blaauw, Gerrit Anne; Brooks, Jr., Frederick Phillips; Buchholz, Werner (June 1959).
The word byte was coined around 1956 to 1957 at MIT Lincoln Laboratories within a project called SAGE (the North American Air Defense System), which was jointly developed by Rand, Lincoln Labs, and IBM. In that era, computer memory structure was already defined in terms of word size.
After having spent many years in Asia, I returned to the U.S. and was bemused to find out that the word byte was being used in the new microcomputer technology to refer to the basic addressable memory unit. According to his son, Dooley wrote to him: “On good days, we would have the XD-1 up and running and all the programs doing the right thing, and we then had some time to just sit and talk idly, as we waited for the computer to finish doing its thing.
On one such occasion, I coined the word “byte”, they (Jules Schwartz and Dick Beeler) liked it, and we began using it amongst ourselves.” ^ “Erklärung des Wortes ‘Byte’ im Rahmen der binären Codes” [Explanation of the word “byte” in the context of binary codes] (in German).
Origin of the term “byte”, 1956, archived from the original on 2017-04-10, retrieved 2017-04-10, A question-and-answer session at an ACM conference on the history of programming languages included this exchange: JOHN GOODENOUGH: You mentioned that the term “byte” is used in JOVIAL. JULES SCHWARTZ (inventor of JOVIAL): As I recall, the AN/FSQ-31, a totally different computer than the 709, was byte oriented.
I don't recall for sure, but I'm reasonably certain the description of that computer included the word “byte,” and we used it. Werner Buchholz coined the word as part of the definition of STRETCH, and the AN/FSQ-31 picked it up from STRETCH, but Werner is very definitely the author of that word.
^ Definition of kilobyte from Oxford Dictionaries Online Archived 2006-06-25 at the Wayback Machine. ^ “Internet Mobile Access”.
“Defendant Western Digital Corporation's Brief in Support of Plaintiff's Motion for Preliminary Approval”. Orin Safier v. Western Digital Corporation.
^ Kilobytes Megabytes Gigabytes Terabytes (Stanford University) ^ Pearson, Melissa J. ^ Klein, Jack (2008), Integer Types in C and C++, archived from the original on 2010-03-27, retrieved 2015-06-18 ^ Cline, Marshall.