Why Are Binary Coding Schemes Necessary?

    Ambiguity

    • One of the main reasons for using the binary number system in logic circuits is that every condition is either completely true or completely false; it cannot be partially true or partially false. This makes it easy to design simple, stable electronic circuits that switch between true and false without ambiguity. Furthermore, it is possible to design a circuit that remains in one state indefinitely, or at least until it is deliberately switched to the other state, which makes it possible for computers to remember sequences of events.

    Binary Coding

    • All but the very earliest computers have used a binary coding scheme to deal with whole numbers, or integers. A binary coding scheme represents an integer in terms of powers of 2, rather than powers of 10. Each binary digit, or bit, has a value of “1” or “0”, and bits can be strung together, end to end, to represent larger values. Three binary digits, for example, can represent any value from “000” (“0” in decimal) to “111” (“7” in decimal), eight possible combinations in total, as the short sketch below illustrates. Binary coding schemes allow the circuitry that performs binary arithmetic to be faster and simpler than the circuitry a decimal coding scheme would require.
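
    To make the powers-of-2 weighting concrete, here is a minimal Python sketch (purely illustrative; the helper name bits_to_int is made up for this example) that adds up the bit weights and lists all eight 3-bit patterns:

        def bits_to_int(bits):
            """Interpret a string of '0' and '1' characters as an unsigned binary integer."""
            value = 0
            for bit in bits:
                value = value * 2 + int(bit)  # each step doubles the weight of every earlier bit
            return value

        # Every 3-bit pattern maps to exactly one of the eight values 0 through 7.
        for n in range(8):
            pattern = format(n, "03b")        # e.g. 5 -> '101'
            print(pattern, "=", bits_to_int(pattern))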

    Conversion

    • In order to be useful, computers must allow users to enter integers in human-readable, decimal form and display output in the same form. Computers have no built-in capability to convert integers from decimal to binary and back again, but the conversion can be performed quite easily using binary arithmetic. Computer programs that translate high-level programming languages into machine code, known as compilers, insert the code required to perform the conversion at the appropriate points, along the lines of the sketch below.
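
    As a rough illustration of what that inserted conversion code does, the Python sketch below (function names invented for this example, not taken from any particular compiler or library) turns a decimal string into an integer and an integer back into a binary string using nothing more than repeated multiplication and division:

        def decimal_text_to_int(text):
            """Convert a decimal string such as '123' into an integer value."""
            value = 0
            for ch in text:
                value = value * 10 + (ord(ch) - ord("0"))  # shift left one decimal place, add the new digit
            return value

        def int_to_binary_text(value):
            """Convert a non-negative integer into its binary representation as a string."""
            if value == 0:
                return "0"
            bits = []
            while value > 0:
                bits.append(str(value % 2))  # the remainder is the next (lowest) bit
                value //= 2                  # repeated division by 2 peels off one bit at a time
            return "".join(reversed(bits))

        print(int_to_binary_text(decimal_text_to_int("19")))  # prints 10011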

    Common Sizes

    • The size of the largest integer that can be represented by a binary coding scheme depends on the total number of binary digits used to represent it. To keep the arithmetic circuitry as simple as possible, most modern computers use a small number of fixed-size binary coding schemes. Common sizes include 32-bit coding schemes, which can represent any decimal integer of up to 9 digits (the largest signed 32-bit value is 2,147,483,647), and 64-bit coding schemes, which can represent any decimal integer of up to 18 digits (the largest signed 64-bit value is 9,223,372,036,854,775,807); the short check below makes the arithmetic concrete.
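
    The digit counts follow directly from the powers of 2 involved. This small Python check (an assumed example, not part of the original article) prints the largest signed and unsigned values for each common size and counts their decimal digits:

        for bits in (32, 64):
            signed_max = 2 ** (bits - 1) - 1   # largest signed value (one bit reserved for the sign)
            unsigned_max = 2 ** bits - 1       # largest unsigned value
            print(f"{bits}-bit: signed max {signed_max:,} ({len(str(signed_max))} digits), "
                  f"unsigned max {unsigned_max:,} ({len(str(unsigned_max))} digits)")

    The 32-bit signed maximum is a 10-digit number, but not every 10-digit integer falls below it, so only integers of up to 9 digits are guaranteed to fit; the same reasoning gives 18 digits for the 64-bit case.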
