
Digital Codes

Data processed and stored by computers can be numerical (e.g., accounting records, spreadsheets, and stock quotations) or text (e.g., letters, memos, reports, and books). As previously discussed, the signals used to represent computerized data are digital, rather than analog. Even before the advent of computers, digital codes were used to represent data.

Early Digital Codes

The first digital code was created by the inventor of the telegraph, Samuel Morse. The Morse code was originally designed for wired telegraph communication but was later adapted for radio communication. It consists of a series of “dots” and “dashes” that represent letters of the alphabet, numbers, and punctuation marks. This on/off code is shown in Fig. 11-1. A dot is a short burst of RF energy, and a dash is a burst of RF that is three times longer than a dot. The dot and dash on periods are separated by dot-length spaces or off periods. With special training, people can easily send and receive messages at speeds ranging from 15 to 20 words per minute up to 70 to 80 words per minute.

The earliest radio communication was also carried out by using the Morse code of dots and dashes to send messages. A hand-operated telegraph key turned the carrier of a transmitter off and on to produce the dots and dashes. These were detected at the receiver and mentally converted by an operator back to the letters and numbers making up the message. This type of radio communication is known as continuous-wave (CW) transmission.

Another early binary data code was the Baudot (pronounced baw-dough) code used in the early teletype machine, a device for sending and receiving coded signals over a communication link. With teletype machines, it was no longer necessary for operators to learn Morse code. Whenever a key on the typewriter keyboard is pressed, a unique code is generated and transmitted to the receiving machine, which recognizes and then prints the corresponding letter, number, or symbol. The Baudot code used 5 bits to represent letters, numbers, and symbols.

The Baudot code is not used today, having been supplanted by codes that can represent more characters and symbols.

Modern Binary Codes

For modern data communication, information is transmitted by using a system in which the numbers and letters to be represented are coded, usually by way of a keyboard, and the binary word representing each character is stored in a computer memory. The message can also be stored on magnetic tape or disk. The following sections describe some widely used codes for transmission of digitized data.


American Standard Code for Information Interchange

The most widely used data communication code is the 7-bit binary code known as the American Standard Code for Information Interchange (abbreviated ASCII and pronounced ass-key), which can represent 128 numbers, letters, punctuation marks, and other symbols (see Fig. 11-2). With ASCII, a sufficient number of code combinations are available to represent both uppercase and lowercase letters of the alphabet.

The first ASCII codes listed in Fig. 11-2 have two- and three-letter designations. These codes initiate operations or provide responses for inquiries. For example, BEL or 0000111 will ring a bell or a buzzer; CR is carriage return; SP is a space, such as that between words in a sentence; ACK means “acknowledge that a transmission was received”; STX and ETX are start and end of text, respectively; and SYN is a synchronization word that provides a way to get transmission and reception in step with each other. The meanings of all the letter codes are given at the bottom of the table.


Hexadecimal Values

Binary codes are often expressed by using their hexadecimal, rather than decimal, values. To convert a binary code to its hexadecimal equivalent, first divide the code into 4-bit groups, starting at the least significant bit on the right and working to the left. (Assume a leading 0 on each of the codes.) The hexadecimal equivalents for the binary codes for the decimal numbers 0 through 9 and the letters A through F are given in Fig. 11-3. Two examples of ASCII-to-hexadecimal conversion are as follows:

  1. The ASCII code for the number 4 is 0110100. Add a leading 0 to make 8 bits and then divide into 4-bit groups: 00110100 = 0011 0100 = hex 34.
  2. The letter w in ASCII is 1110111. Add a leading 0 to get 01110111; 01110111 = 0111 0111 = hex 77.
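
These conversions are easy to check programmatically. Below is a minimal Python sketch (illustrative code, not from the text; the helper name ascii_to_hex is my own):

```python
# Convert an ASCII character to its 7-bit binary code and hex equivalent,
# reproducing the two hand conversions above.

def ascii_to_hex(ch):
    code = ord(ch)                 # 7-bit ASCII value as an integer
    binary = format(code, '07b')   # the 7-bit binary code
    padded = '0' + binary          # add a leading 0 to make 8 bits
    return binary, padded, format(code, '02X')

for ch in ['4', 'w']:
    binary, padded, hexval = ascii_to_hex(ch)
    print(f"'{ch}': {binary} -> {padded} -> hex {hexval}")

# '4': 0110100 -> 00110100 -> hex 34
# 'w': 1110111 -> 01110111 -> hex 77
```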

Extended Binary Coded Decimal Interchange Code

The Extended Binary Coded Decimal Interchange Code (EBCDIC, pronounced ebb-see-dick), developed by IBM, is an 8-bit code similar to ASCII, allowing a maximum of 256 characters to be represented. Its primary use is in IBM and IBM-compatible computing systems and equipment. It is not as widely used as ASCII.

Principles of Digital Transmission

As indicated earlier, data can be transmitted in two ways: parallel and serial.

Serial Transmission

Parallel data transmission is not practical for long-distance communication. Data transfers in long-distance communication systems are made serially; each bit of a word is transmitted one after another (see Fig. 11-3). The figure shows the ASCII form for the letter M (1001101) being transmitted 1 bit at a time. The LSB is transmitted first, and the MSB last. The MSB is on the right, indicating that it was transmitted later in time. Each bit is transmitted for a fixed interval of time t. The voltage levels representing each bit appear on a single data line one after another until the entire word has been transmitted. For example, the bit interval may be 10 μs, which means that the voltage level for each bit in the word appears for 10 μs. It would therefore take 70 μs to transmit a 7-bit ASCII word.
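
The LSB-first ordering and the timing arithmetic can be sketched in a few lines of Python (an illustration of the idea above; the variable names are my own assumptions):

```python
# Serialize the 7-bit ASCII code for 'M' (1001101) LSB first and
# compute the total transmission time for a 10-us bit interval.

BIT_TIME = 10e-6                       # 10 us per bit, as in the text

code = format(ord('M'), '07b')         # '1001101', MSB on the left
lsb_first = code[::-1]                 # reverse: the LSB leaves first

print("Bit order on the line:", ' '.join(lsb_first))         # 1 0 1 1 0 0 1
print("Total time: %.0f us" % (len(code) * BIT_TIME * 1e6))  # 70 us
```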

Expressing the Serial Data Rate

The speed of data transfer is usually indicated as the number of bits per second (bps or b/s). Some data transfers take place at relatively slow speeds, usually a few hundred or several thousand bits per second. However, in some data communication systems, such as the Internet and local-area networks, bit rates as high as hundreds of billions of bits per second are used.


The speed of serial transmission is, of course, related to the bit time of the serial data. The speed in bits per second, denoted by bps, is the reciprocal of the bit time t, or bps = 1/t. For example, assume a bit time of 104.17 μs. The speed is bps = 1/(104.17 × 10⁻⁶) = 9600 bps.

If the speed in bits per second is known, the bit time can be found by rearranging the formula: t = 1/bps. For example, the bit time at 230.4 kbps (230,400 bps) is t = 1/230,400 = 4.34 × 10⁻⁶ s = 4.34 μs.

Another term used to express the data speed in digital communication systems is baud rate. Baud rate is the number of signaling elements or symbols that occur in a given unit of time, such as 1 s. A signaling element is simply some change in the binary signal transmitted. In many cases, it is a binary logic voltage level change (0 or 1), in which case the baud rate is equal to the data rate in bits per second. In summary, the baud rate is the reciprocal of the smallest signaling interval.

Bit rate = baud rate × bits per symbol
Bit rate = baud rate × log₂ S

where S = number of states per symbol
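
These relationships can be captured directly in code. A small sketch (my own helper, not from the text):

```python
import math

def bit_rate(baud, states_per_symbol):
    """Bit rate = baud rate x log2(S), where S is states per symbol."""
    return baud * math.log2(states_per_symbol)

print(bit_rate(9600, 2))    # 2 states (binary): 9600.0 bps, same as the baud rate
print(bit_rate(9600, 4))    # 4 states: 2 bits per symbol, 19200.0 bps
print(bit_rate(6200, 32))   # 32 states: 5 bits per symbol, 31000.0 bps
```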

The symbol or signaling element can also be one of several discrete signal amplitudes, frequencies, or phase shifts, each of which represents 2 data bits or more. Several unique modulation schemes have been developed so that each symbol or baud can represent multiple bits. The number of symbol changes per unit of time is no higher than the straight binary bit rate, but more bits per unit time are transmitted. Multiple symbol changes can be combined to further increase transmission speed. For example, a popular form of modulation known as quadrature amplitude modulation (QAM) combines multiple amplitude levels with multiple phase shifts to produce many bits per baud. (QAM is discussed later.) Each symbol is a unique amplitude level combined with a unique phase shift that corresponds to a group of bits. Multiple amplitude levels may also be combined with different frequencies in FSK to produce higher bit rates. As a result, higher bit rates can be transmitted over telephone lines or other severely bandwidth-limited communication channels that would not ordinarily be able to handle them. Several of these modulation methods are discussed later.

Assume, e.g., a system that represents 2 bits of data as different voltage levels. With 2 bits, there are 2² = 4 possible levels, and a discrete voltage is assigned to each.

00 → 0 V
01 → 1 V
10 → 2 V
11 → 3 V

In this system, sometimes called pulse-amplitude modulation (PAM), each of the four symbols is one of four different voltage levels. Each level is a different symbol representing 2 bits. Assume, e.g., that it is desired to transmit the decimal number 201, which is binary 11001001. The number can be transmitted serially as a sequence of equally spaced pulses that are either on or off [see Fig. 11-4(a)]. If each bit interval is 1 μs, the bit rate is 1/(1 × 10⁻⁶) = 1,000,000 bps (1 Mbps). The baud rate is also 1 million symbols per second, since each symbol carries a single bit.

Using the four-level system, we could also divide the word to be transmitted into 2-bit groups and transmit the appropriate voltage level representing each. The number 11001001 would be divided into these groups: 11 00 10 01. Thus the transmitted signal would be voltage levels of 3, 0, 2, and 1 V, each occurring for a fixed interval of, say, 1 μs [see Fig. 11-4(b)]. The baud rate is still 1 million because there is only one symbol or level per time interval (1 μs). However, the bit rate is 2 million bps, double the baud rate, because each symbol represents 2 bits. We are transmitting 2 bits per baud. You will sometimes see this referred to as bits per hertz or bits/Hz. The total transmission time is also shorter. It would take 8 μs to transmit the 8-bit binary word but only 4 μs to transmit the four-level signal. The bit rate is greater than the baud rate because multiple levels are used.
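
The grouping and level mapping just described can be sketched as follows (illustrative code; the LEVELS table matches the voltages listed above):

```python
# Map 2-bit groups of a binary word to the four PAM voltage levels.
LEVELS = {'00': 0, '01': 1, '10': 2, '11': 3}   # volts

word = '11001001'                                # decimal 201
groups = [word[i:i+2] for i in range(0, len(word), 2)]
volts = [LEVELS[g] for g in groups]

print(groups)   # ['11', '00', '10', '01']
print(volts)    # [3, 0, 2, 1] -> sent as 3, 0, 2, and 1 V
# 8 bits are sent in 4 symbol intervals, so the bit rate is twice the baud rate.
```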


The PAM signal in Fig. 11-4(b) can be transmitted over a cable. In wireless applications, this signal would first modulate a carrier before being transmitted. Any type of modulation can be used, but PSK is the most common.

A good example of modern PAM is the system used in U.S. digital high-definition television (HDTV). The video and audio to be transmitted are put into serial digital format, then converted to 8-level PAM. Each level can represent one of eight 3-bit combinations from 000 to 111. See Fig. 11-5. The symbols occur at a 10.8-Msymbol/s rate, producing a net bit rate of 10.8 × 3 = 32.4 Mbps. The PAM signal is then used to amplitude-modulate the transmitter carrier. Part of the lower sideband is suppressed to save spectrum.


The modulation is called 8VSB, for 8-level vestigial sideband. The signal is transmitted in a standard 6-MHz-wide TV channel.

Because of the sequential nature of serial data transmission, naturally it takes longer to send data in this way than it does to transmit it by parallel means. However, with a high-speed logic circuit, even serial data transfers can take place at very high speeds. Currently that data rate is as high as 10 billion bits per second (10 Gbps) on copper wire cable and up to 100 billion bits per second (100 Gbps) on fiber-optic cable. So although serial data transfers are slower than parallel transfers, they are fast enough for most communication applications and require only a single cable pair or fiber.

Asynchronous Transmission

In asynchronous transmission, each data word is accompanied by start and stop bits that indicate the beginning and ending of the word. Asynchronous transmission of an ASCII character is illustrated in Fig. 11-6. When no information is being transmitted, the communication line is usually high, or binary 1. In data communication terminology, this is referred to as a mark. To signal the beginning of a word, a start bit, a binary 0 or space, as shown in the figure, is transmitted. The start bit has the same duration as all other bits in the data word. The transition from mark to space indicates the beginning of the word and allows the receiving circuits to prepare themselves for the reception of the remainder of the bits.

After the start bit, the individual bits of the word are transmitted. In this case, the 7-bit ASCII code for the letter U, 1010101, is transmitted. Once the last code bit has been transmitted, a stop bit is included. The stop bit may be the same duration as all other bits and again is a binary 1 or mark. In some systems, 2 stop bits are transmitted, one after the other, to signal the end of the word.

Most low-speed digital transmission (the 1200- to 56,000-bps range) is asynchronous. This technique is extremely reliable, and the start and stop bits ensure that the sending and receiving circuits remain in step with each other. The minimum separation between character words is 1 stop bit plus 1 start bit, as Fig. 11-7 shows. There can also be time gaps between characters or groups of characters, as the illustration shows, and thus the stop “bit” may be of some indefinite length.

The primary disadvantage of asynchronous communication is that the extra start and stop bits effectively slow down data transmission. This is not a problem in low-speed applications such as those involving certain printers and plotters. But when huge volumes of information must be transmitted, the start and stop bits represent a significant percentage of the bits transmitted. We call that percentage the overhead of transmission. A 7-bit ASCII character plus start and stop bits is 9 bits. Of the 9 bits, 2 bits are not data. This represents 2/9 = 0.222, or 22.2 percent, inefficiency or overhead. Removing the start and stop bits and stringing the ASCII characters end to end allow many more data words to be transmitted per second.
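
Framing a character and computing the overhead can be illustrated with a short sketch (assumed example code; frame is my own helper):

```python
# Frame a 7-bit ASCII character with 1 start bit (0) and 1 stop bit (1),
# then compute the framing overhead.

def frame(ch):
    data = format(ord(ch), '07b')[::-1]   # data bits go out LSB first
    return '0' + data + '1'               # start bit + data + stop bit

framed = frame('U')
print(framed)                             # 010101011 (9 bits total)
print("Overhead: %.1f%%" % (2 / len(framed) * 100))   # 22.2%
```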

Synchronous Transmission

The technique of transmitting each data word one after another without start and stop bits, usually in multiword blocks, is referred to as synchronous data transmission. To maintain synchronization between transmitter and receiver, a group of synchronization bits is placed at the beginning of the block and the end of the block. Fig. 11-8 shows one arrangement. Each block of data can represent hundreds or even thousands of 1-byte characters. At the beginning of each block is a unique series of bits that identifies the beginning of the block. In Fig. 11-8, two 8-bit synchronization (SYN) codes signal the start of a transmission. Once the receiving equipment finds these characters, it begins to receive the continuous data, the block of sequential 8-bit words or bytes. At the end of the block, another special ASCII code character, ETX, signals the end of transmission. The receiving equipment looks for the ETX code; detection of this code is how the receiving circuit recognizes the end of the transmission. An error detection code usually appears at the very end of the transmission.

Example 11.1 A block of 256 sequential 12-bit data words is transmitted serially in 0.016 s. Calculate (a) the time duration of 1 word, (b) the time duration of 1 bit, and (c) the speed of transmission in bits per second.
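
A worked solution, sketched in Python (my arithmetic, using the relationships developed above):

```python
words, bits_per_word, total_time = 256, 12, 0.016

t_word = total_time / words            # (a) time duration of 1 word
t_bit = t_word / bits_per_word         # (b) time duration of 1 bit
speed = 1 / t_bit                      # (c) speed of transmission

print(f"{t_word*1e6:.1f} us  {t_bit*1e6:.2f} us  {speed:.0f} bps")
# 62.5 us  5.21 us  192000 bps
```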

Encoding Methods

Whether digital signals are being transmitted by baseband methods or broadband methods (see Sec. 1-4), before the data is put on the medium, it is usually encoded in some way to make it compatible with the medium or to facilitate some desired operation connected with the transmission. The primary encoding methods used in data communication are summarized below.

Nonreturn to Zero

In the nonreturn to zero (NRZ) method of encoding, the signal remains at the binary level assigned to it for the entire bit time. Fig. 11-9(a), which shows unipolar NRZ, is a slightly different version of Fig. 11-4. The logic levels are 0 and +5 V. When a binary 1 is to be transmitted, the signal stays at +5 V for the entire bit interval. When a binary 0 is to be sent, the signal stays at 0 V for the total bit time. In other words, the voltage does not return to zero during the binary 1 interval. In unipolar NRZ, the signal has only a positive polarity. A bipolar NRZ signal has two polarities, positive and negative, as shown in Fig. 11-9(b). The voltage levels are +12 and -12 V. The popular RS-232 serial computer interface uses bipolar NRZ, where a binary 1 is a negative voltage between -3 and -25 V and a binary 0 is a voltage between +3 and +25 V.

The NRZ method is normally generated inside computers, at low speeds, when asynchronous transmission is being used. It is not popular for synchronous transmission because there is no voltage or level change when there are long strings of sequential binary 1s or 0s. If there is no signal change, it is difficult for the receiver to determine just where one bit ends and the next one begins. If the clock is to be recovered from the transmitted data in a synchronous system, there must be more frequent changes, preferably one per bit. NRZ is usually converted to another format, such as RZ or Manchester, for synchronous transmissions.

Return to Zero

In return to zero (RZ) encoding [see Fig. 11-9(c) and (d)], the voltage level assigned to a binary 1 returns to zero during the bit period. Unipolar RZ is illustrated in Fig. 11-9(c). The binary 1 level occurs for 50 percent of the bit interval, and the remaining bit interval is zero. Only one polarity level is used. Pulses occur only when a binary 1 is transmitted; no pulse is transmitted for a binary 0.

Bipolar RZ is illustrated in Fig. 11-9(d). A 50 percent bit interval +3-V pulse is transmitted during a binary 1, and a -3-V pulse is transmitted for a binary 0. Because there is one clearly discernible pulse per bit, it is extremely easy to derive the clock from the transmitted data. For that reason, bipolar RZ is preferred over unipolar RZ. A popular variation of the bipolar RZ format is called alternate mark inversion (AMI) [see Fig. 11-9(e)]. During the bit interval, binary 0s are transmitted as no pulse. Binary 1s, also called marks, are transmitted as alternating positive and negative pulses. One binary 1 is sent as a positive pulse, the next binary 1 as a negative pulse, the next as a positive pulse, and so on.

Manchester

Manchester encoding, also referred to as biphase encoding, can be unipolar or bipolar. It is widely used in LANs. In the Manchester system, a binary 1 is transmitted first as a positive pulse, for one-half of the bit interval, and then as a negative pulse, for the remaining part of the bit interval. A binary 0 is transmitted as a negative pulse for the first half of the bit interval and a positive pulse for the second half [see Fig. 11-9(f)]. The fact that there is a transition at the center of each 0 or 1 bit makes clock recovery very easy. However, because of the transition in the center of each bit, the frequency of a Manchester-encoded signal is twice that of an NRZ signal, doubling the bandwidth requirement.
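
The differences among these formats show up clearly in code. The sketch below (my own, following the waveform descriptions above and Fig. 11-9) produces two samples per bit interval, with levels given symbolically as +1, -1, and 0:

```python
# Encode a bit list in several line codes, two half-bit samples per bit.

def nrz(bits):          # unipolar NRZ: level held for the entire bit time
    return [(b, b) for b in bits]

def rz(bits):           # unipolar RZ: a binary 1 returns to zero at mid-bit
    return [(b, 0) for b in bits]

def ami(bits):          # bipolar RZ-AMI: 1s alternate polarity, 0s send no pulse
    out, polarity = [], 1
    for b in bits:
        out.append((polarity, 0) if b else (0, 0))
        if b:
            polarity = -polarity
    return out

def manchester(bits):   # 1 = positive then negative; 0 = negative then positive
    return [(1, -1) if b else (-1, 1) for b in bits]

bits = [1, 1, 0, 1, 0]
for name, encoder in [('NRZ', nrz), ('RZ', rz), ('AMI', ami), ('Manchester', manchester)]:
    print(f"{name:10s}", encoder(bits))
```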

Choosing a Coding Method

The choice of an encoding method depends on the application. For synchronous transmission, RZ and Manchester are preferred because the clock is easier to recover. Another consideration is average dc voltage buildup on the transmission line. When unipolar modes are used, a potentially undesirable average dc voltage builds up on the line because of the charging of the line capacitance. To eliminate this problem, bipolar methods are used, where the positive pulses cancel the negative pulses and the dc voltage is averaged to zero. Bipolar RZ or Manchester is preferred if dc buildup is a problem.

DC buildup is not always a problem. In some applications, the average dc value is used for signaling purposes. An example is an Ethernet LAN, which uses the direct current to detect when two or more stations are trying to transmit at the same time. Other encoding methods are also used, such as the schemes used to encode the serial data recorded on hard disks, CDs, and solid-state drives. Other schemes used in networking will be discussed later.

Transmission Efficiency

Transmission efficiency—i.e., the accuracy and speed with which information, whether it is voice or video, analog or digital, is sent and received over communication media— is the basic subject matter of a field known as information theory. Information theorists seek to determine mathematically the likelihood that a given amount of data being transmitted under a given set of circumstances (e.g., medium, bandwidth, speed of transmission, noise, and distortion) will be transmitted accurately.

Hartley’s Law

The amount of information that can be sent in a given transmission is dependent on the bandwidth of the communication channel and the duration of the transmission.

A bandwidth of only about 3 kHz is required to transmit voice so that it is intelligible and recognizable. However, because of the high frequencies and harmonics produced by musical instruments, a bandwidth of 15 to 20 kHz is required to transmit music with full fidelity. Music inherently contains more information than voice, and so requires greater bandwidth. A picture signal contains more information than a voice or music signal. Therefore, greater bandwidth is required to transmit it. A typical TV signal contains both voice and picture; therefore, it is allocated 6 MHz of spectrum space.


Stated mathematically, Hartley’s law is

C = 2B

Here C is the channel capacity expressed in bits per second and B is the channel bandwidth. It is also assumed that there is a total absence of noise in the channel. When noise becomes an issue, Hartley’s law is expressed as


C = B log₂(1 + S/N)


where S/N is the signal-to-noise ratio in power. These concepts are expanded upon in the coming sections.

The greater the bandwidth of a channel, the greater the amount of information that can be transmitted in a given time. It is possible to transmit the same amount of information over a narrower channel, but it must be done over a longer time. This general concept is known as Hartley’s law, and the principles of Hartley’s law also apply to the transmission of binary data. The greater the number of bits transmitted in a given time, the greater the amount of information that is conveyed. But the higher the bit rate, the wider the bandwidth needed to pass the signal with minimum distortion. Narrowing the bandwidth of a channel causes the harmonics in the binary pulses to be filtered out, degrading the quality of the transmitted signal and making error-free transmission more difficult.

Transmission Media and Bandwidth

The two most common types of media used in data communication are wire cable and radio. Two types of wire cable are used, coaxial and twisted-pair (see Fig. 11-10). The coaxial cable shown in Fig. 11-10(a) has a center conductor surrounded by an insulator over which is a braided shield. The entire cable is covered with a plastic insulation.

A twisted-pair cable is two insulated wires twisted together. The one shown in Fig. 11-10(b) is an unshielded twisted-pair (UTP) cable, but a shielded version is also available. Coaxial cable and shielded twisted-pair cables are usually preferred, as they provide some protection from noise and cross talk. Cross talk is the undesired transfer of signals from one unshielded cable to another adjacent one caused by inductive or capacitive coupling.

The bandwidth of any cable is determined by its physical characteristics. All wire cables act as low-pass filters because they are made up of wire that has inductance, capacitance, and resistance. The upper cutoff frequency of a cable depends upon the cable type, its inductance and capacitance per foot, its length, the size of the conductors, and the type of insulation.

Coaxial cables have a wide usable bandwidth, ranging from 200 to 300 MHz for smaller cables to 500 MHz to 50 GHz for larger cables. The bandwidth decreases drastically with length. Twisted-pair cable has a narrower bandwidth, from a few kilohertz to over 800 MHz. Again, the actual bandwidth depends on the length and other physical characteristics. Special processing techniques have extended that speed to 100 Gbps over short distances (<100 m).

The bandwidth of a radio channel is determined by how much spectrum space is allotted to the application by the FCC. At the lower frequencies, limited bandwidth is available, usually several kilohertz. At higher frequencies, wider bandwidths are available, from hundreds of kilohertz to many megahertz.

As discussed, binary pulses are rectangular waves made up of a fundamental sine wave plus many harmonics. The channel bandwidth must be wide enough to pass all the harmonics and preserve the waveshape. Most communication channels or media act as low-pass filters. Voice-grade telephone lines, e.g., act as a low-pass filter with an upper cutoff frequency of about 3000 Hz. Harmonics higher in frequency than the cutoff are filtered out, resulting in signal distortion. Eliminating the harmonics rounds the signal off (see Fig. 11-11).

If the filtering is particularly severe, the binary signal is essentially converted to its fundamental sine wave. If the cutoff frequency of the cable or channel is equal to or less than the fundamental sine wave frequency of the binary signal, the signal at the receiving end of the cable or radio channel will be a greatly attenuated sine wave at the signal's fundamental frequency. However, the data is not lost, assuming that the S/N ratio is high enough. The information is still transmitted reliably, but in the minimum possible bandwidth. The sine wave signal shape can easily be restored to a rectangular wave at the receiver by amplifying it to offset the attenuation of the transmission medium and then squaring it with a Schmitt-trigger comparator or other wave-shaping circuit.


The upper cutoff frequency of any communication medium is approximately equal to the channel bandwidth. It is this bandwidth that determines the information capacity of the channel according to Hartley’s law. The channel capacity C, expressed in bits per second, is twice the channel bandwidth B, in hertz:

C = 2B

The bandwidth B is usually the same as the upper cutoff (3-dB down) frequency of the channel. This is the maximum theoretical limit, and it assumes that no noise is present. For example, the maximum theoretical bit capacity for a 10-kHz bandwidth channel is C = 2B = 2(10,000) = 20,000 bps.

You can see why this is so by considering the bit time in comparison to the period of the fundamental sine wave. A 20,000-bps (20-kbps) binary signal has a bit period of t = 1/20,000 = 50 × 10⁻⁶ s = 50 μs.

It takes 2 bit intervals to represent a full sine wave with alternating binary 0s and 1s, one for the positive alternation and one for the negative alternation (see Fig. 11-11). The 2 bit intervals make a period of 50 + 50 = 100 μs. This period translates to a sine wave frequency of f = 1/t = 1/(100 × 10⁻⁶) = 10,000 Hz (10 kHz), which is exactly the cutoff frequency or bandwidth of the channel.
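
The same arithmetic in code form (a sketch under the text's assumption of alternating 1s and 0s):

```python
C = 20_000          # channel capacity, bits per second
t = 1 / C           # bit time: 50 us
period = 2 * t      # one full sine wave spans 2 bit intervals: 100 us
f = 1 / period      # fundamental frequency: 10,000 Hz

print(t, period, f) # 5e-05  0.0001  10000.0, confirming B = C/2
```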

Ideally, the shape of the binary data should be preserved as much as possible. Although the data can usually be recovered if it is degraded to a sine wave, recovery is far more reliable if the rectangular waveshape is maintained. This means that the channel must be able to pass at least some of the lower harmonics contained in the signal. As a general rule of thumb, if the bandwidth is roughly 5 to 10 times the data rate, the binary signal is passed with little distortion. For example, to transmit a 230.4-kbps serial data signal, the bandwidth should be at least 5 x 230.4 kHz (1.152 MHz) to 10 x 230.4 kHz (2.304 MHz).

The encoding method used also affects the required bandwidth for a given signal. For NRZ encoding, the bandwidth required is as described above. However, the bandwidth requirement for RZ is twice that for NRZ. This is so because the fundamental frequency contained in a rectangular waveform is the reciprocal of the duration of one cycle of the highest-frequency pulse, regardless of the duty cycle. The dashed lines in Fig. 11-9 show the highest fundamental frequencies for NRZ, RZ, RZ-AMI, and Manchester. AMI has a lower fundamental frequency than does RZ. The rate for Manchester encoding is twice that for NRZ and AMI.

As an example, assume an NRZ bit interval of 100 ns, which results in a data rate of 1/t = 1/(100 × 10⁻⁹) = 10 Mbps.

Alternating binary 1s and 0s produces a fundamental sine wave period of twice the bit time, or 200 ns, for a bandwidth of 1/(200 × 10⁻⁹) = 5 MHz. This is the same as that computed with the formula

B = C/2 = 10 Mbps/2 = 5 MHz

Looking at Fig. 11-9, you can see that the RZ and Manchester pulses occur at a faster rate, the cycle time being 100 ns. The RZ and Manchester pulses are one-half the bit time of 100 ns, or 50 ns. The bit rate or channel capacity associated with this time is 1/(50 × 10⁻⁹) = 20 Mbps. The bandwidth for this bit rate is C/2 = 20 Mbps/2 = 10 MHz.

Thus the RZ and Manchester encoding schemes require twice the bandwidth. This tradeoff of bandwidth for some benefit, such as ease of clock recovery, may be desirable in certain applications.

Multiple Coding Levels

Channel capacity can be modified by using multiple-level encoding schemes that permit more bits per symbol to be transmitted. Remember that it is possible to transmit data using symbols that represent more than just 1 bit. Multiple voltage levels can be used, as illustrated earlier in Fig. 11-4(b). Other schemes, such as using different phase shifts for each symbol, are used. Consider the equation

C = 2B log₂ N

where N is the number of different encoding levels per time interval. The implication is that for a given bandwidth, the channel capacity, in bits per second, will be greater if more than two encoding levels are used per time interval.

Refer to Fig. 11-4, where two levels or symbols (0 or 1) were used in transmitting the binary signal. The bit or symbol time is 1 μs. The bandwidth needed to transmit this 1,000,000-bps signal can be computed from C = 2B, or B = C/2. Thus a minimum bandwidth of 1,000,000 bps/2 = 500,000 Hz (500 kHz) is needed.

The same result is obtained with the new expression. If C = 2B log₂ N, then B = C/(2 log₂ N).

The logarithm of a number to the base 2 can be computed with the expression

log₂ N = log₁₀ N / log₁₀ 2 = log₁₀ N / 0.301
log₂ N = 3.32 log₁₀ N

where N is the number whose logarithm is to be calculated. The base-10 or common logarithm can be computed on any scientific calculator. With two coding levels (binary 0 and 1 voltage levels), the bandwidth is B = C/(2 log₂ N) = 1,000,000 bps/(2 × 1) = 500,000 Hz. Note that log₂ 2 for a binary signal is simply 1.

Now we continue, using C = 2B log₂ N. Since log₂ N = log₂ 2 = 1,

C = 2B(1) = 2B

Now let’s see what a multilevel coding scheme does. Again, we start with B = C/(2 log₂ N). The channel capacity is 2,000,000 bps, as shown in Fig. 11-4(b), because each symbol (level) interval is 1 μs long. But here the number of levels N = 4. Therefore, 2 bits are transmitted per symbol. The bandwidth is then (2,000,000 bps)/(2 log₂ 4). Since log₂ 4 = 3.32 log₁₀ 4 = 3.32 × 0.602 = 2,

B = 2,000,000/(2 × 2) = 2,000,000/4 = 500,000 Hz = 500 kHz

By using a multilevel (four-level) coding scheme, we can transmit at twice the speed in the same bandwidth. The data rate is 2 Mbps with four levels of coding in a 500-kHz bandwidth, compared to 1 Mbps with only two symbols (binary). To transmit even higher rates within a given bandwidth, more voltage levels can be used, where each level represents 3, 4, or even more bits per symbol. The multilevel approach need not be limited to voltage changes; frequency changes and phase changes can also be used. Even greater increases in speed can be achieved if changes in voltage levels are combined with changes in phase or frequency.
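
The effect of adding levels is easy to explore numerically with C = 2B log₂ N (illustrative code; the function name is mine):

```python
import math

def capacity(bandwidth_hz, levels):
    """Maximum noiseless channel capacity, C = 2B log2(N)."""
    return 2 * bandwidth_hz * math.log2(levels)

B = 500_000   # 500 kHz, as in the example above
for n in (2, 4, 8, 16):
    print(f"N = {n:2d}: C = {capacity(B, n)/1e6:.0f} Mbps")
# N = 2 -> 1 Mbps, N = 4 -> 2 Mbps, N = 8 -> 3 Mbps, N = 16 -> 4 Mbps
```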

Impact of Noise in the Channel

Another important aspect of information theory is the impact of noise on a signal. As discussed in earlier chapters, increasing bandwidth increases the rate of transmission but also allows more noise to pass, and so the choice of bandwidth is a tradeoff. The relationship between channel capacity, bandwidth, and noise is summarized in what is known as the Shannon-Hartley theorem:

C = B log₂(1 + S/N)


where C = channel capacity, bps
B = bandwidth, Hz
S/N = signal-to-noise ratio

Assume, e.g., that the maximum channel capacity of a voice-grade telephone line with a bandwidth of 3100 Hz and an S/N of 30 dB is to be calculated. First, 30 dB is converted to a power ratio. If dB = 10 log P, where P is the power ratio, then P = antilog (dB/10). Antilogs are easily computed on a scientific calculator.

A 30-dB S/N ratio translates to a power ratio of

P = log⁻¹(30/10) = log⁻¹ 3 = 10³ = 1000

The channel capacity is then

C = B log₂(1 + S/N) = 3100 log₂(1 + 1000) = 3100 log₂ 1001

The base-2 logarithm of 1001 is

log₂ 1001 = 3.32 log₁₀ 1001 = 3.32(3) = 9.97, or about 10

Therefore, the channel capacity is

C = 3100(10) = 31,000 bps

A bit rate of 31,000 bps is surprisingly high for such a narrow bandwidth. In fact, it appears to conflict with what we learned earlier, i.e., that maximum channel capacity is twice the channel bandwidth. If the bandwidth of the voice-grade line is 3100 Hz, then the channel capacity is C = 2B = 2(3100) = 6200 bps. That rate is for a binary (two-level) system only, and it assumes no noise. How, then, can the Shannon-Hartley theorem predict a channel capacity of 31,000 bps when noise is present?

The Shannon-Hartley expression says that it is theoretically possible to achieve a 31,000-bps channel capacity on a 3100-Hz bandwidth line. What it doesn’t say is that multilevel encoding is needed to do so. Going back to the basic channel capacity expression C = 2B log₂ N, we have a C of 31,000 bps and a B of 3100 Hz. The number of coding or symbol levels has not been specified. Rearranging the formula, we have

log₂ N = C/(2B) = 31,000/(2 × 3100) = 31,000/6200 = 5

Therefore,

N = antilog₂ 5

The antilog of a number is simply the value of the base raised to that number, in this case 2⁵, or 32.

Thus a channel capacity of 31,000 bps can be achieved by using a multilevel encoding scheme, one that uses 32 different levels or symbols per interval, instead of a two-level (binary) system. The baud rate of the channel is still C = 2B = 2(3100) = 6200 Bd. But because a 32-level encoding scheme has been used, the bit rate is 31,000 bps. As it turns out, the maximum channel capacity is very difficult to achieve in practice. Typical systems limit the channel capacity to one-third to one-half the maximum to ensure more reliable transmission in the presence of noise.
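
The whole telephone-line example can be verified in a few lines (a sketch; the computed capacity falls slightly below the rounded 31,000 bps used in the text):

```python
import math

B = 3100                     # bandwidth, Hz
snr = 10 ** (30 / 10)        # 30 dB -> power ratio of 1000

C = B * math.log2(1 + snr)   # Shannon-Hartley capacity
print(round(C))              # 30898, which the text rounds to about 31,000 bps

N = 2 ** (C / (2 * B))       # levels needed to reach that capacity
print(round(N))              # 32
```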

Example 11.2

The bandwidth of a communication channel is 12.5 kHz. The S/N ratio is 25 dB. Calculate (a) the maximum theoretical data rate in bits per second, (b) the maximum theoretical channel capacity, and (c) the number of coding levels N needed to achieve the maximum speed. [For part (c), use the y^x key on a scientific calculator.]
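
A worked solution sketch (my arithmetic, using the formulas of this section):

```python
import math

B = 12_500                    # bandwidth, Hz
snr = 10 ** (25 / 10)         # 25 dB -> power ratio of about 316

rate_binary = 2 * B           # (a) two-level maximum: C = 2B
C = B * math.log2(1 + snr)    # (b) Shannon-Hartley capacity
N = 2 ** (C / (2 * B))        # (c) coding levels needed

print(rate_binary, round(C), round(N))   # 25000  103867  18
```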


Reference: Electronic Communication by Louis Frenzel
