Protocols | OSI Model | Error Detection | Redundancy | Convolutional

Protocols

Protocols are rules and procedures used to ensure compatibility between the sender and receiver of serial digital data regardless of the hardware and software being used. They are used to identify the start and end of a message, identify the sender and the receiver, state the number of bytes to be transmitted, specify the method of error detection, and perform other functions. Various protocols, and various levels of protocols, are used in data communication.

The simplest protocol is the asynchronous transmission of data using a start bit and a stop bit (refer to Fig. 11-6) framing a single character, with a parity bit between the character bits and the stop bit. The parity bit is part of the protocol but may or may not be used. In data communication, however, a message is more than one character. As discussed previously, it is composed of blocks: groups of letters of the alphabet, numbers, punctuation marks, and other symbols in the desired sequence. In synchronous data communication applications, the block is the basic transmission unit.

To identify a block, one or more special characters are transmitted prior to the block and after the block. These additional characters, which are usually represented by 7- or 8-bit codes, perform a number of functions. Like the start and stop bits on a character, they signal the beginning and end of the transmission. But they are also used to identify a specific block of data, and they provide a means for error checking and detection.

Some of the characters at the beginning and end of each block are used for handshaking purposes. These characters give the transmitter and receiver status information. Fig. 11-58 illustrates the basic handshaking process. For example, a transmitter may send a character indicating that it is ready to send data to a receiver. Once the receiver has identified that character, it responds by indicating its status, e.g., by sending a character representing “busy’’ back to the transmitter. The transmitter will continue to send its ready signal until the receiver signals back that it is not busy, or is ready to receive.

At that point, data transmission takes place. Once the transmission is complete, some additional handshaking takes place. The receiver acknowledges that it has received the information. The transmitter then sends a character indicating that the transmission is complete, which is usually acknowledged by the receiver.

A common example of the use of such control characters is the XON/XOFF protocol used between a computer and some other device. XON is usually the ASCII character DC1, and XOFF is the ASCII character DC3. A device that is ready and able to receive data will send XON to the computer. If the device is not able to receive data, it sends XOFF. When the computer detects XOFF, it immediately stops sending data until XON is again received.
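As a rough sketch of this handshake, the loop below simulates a sender that stops immediately on XOFF. The function and the dictionary used to model incoming control characters are hypothetical; only the DC1/DC3 values are the actual ASCII codes.

```python
XON, XOFF = 0x11, 0x13   # real ASCII codes: DC1 (XON) and DC3 (XOFF)

def xonxoff_send(data: bytes, incoming: dict) -> tuple:
    """Simulate XON/XOFF flow control.  `incoming` maps a byte index
    to a control character received just before that byte would go
    out.  Returns the bytes actually sent and the bytes still pending
    when the sender paused."""
    sent = []
    for i, b in enumerate(data):
        if incoming.get(i) == XOFF:
            return bytes(sent), data[i:]   # stop immediately on XOFF
        sent.append(b)
    return bytes(sent), b""                # whole message went out
```

For example, an XOFF arriving after three bytes of `b"HELLO"` leaves `b"LO"` pending until the device sends XON again.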


Asynchronous Protocols

Three popular protocols once used for asynchronous ASCII-coded data transmission between personal computers, via modem, were Xmodem, Kermit, and MNP. These protocols are no longer widely used, but an example with Xmodem will illustrate the process.

In Xmodem, the data transmission procedure begins with the receiving computer transmitting a negative acknowledge (NAK) character to the transmitter. NAK is a 7-bit ASCII character that is transmitted serially back to the transmitter every 10 s until the transmitter recognizes it. Once the transmitter recognizes the NAK character, it begins sending a 128-byte block of data, known as a frame (packet) of information (see Fig. 11-59). The frame begins with a start-of-header (SOH) character, which is another ASCII character meaning that the transmission is beginning. This is followed by a header, which usually consists of two or more characters preceding the actual data block that give auxiliary information. In Xmodem, the header consists of 2 bytes designating the block number. In most messages, several blocks of data are transmitted and each is numbered sequentially. The first byte is the block number in binary code. The second byte is the complement of the block number; i.e., all bits have been inverted. Then the 128-byte block is transmitted. At the end of the block, the transmitting computer sends a checksum byte, which is the BCC, or binary sum of all the binary information sent in the block. (Keep in mind that each character is sent along with its start and stop bits, since Xmodem is an asynchronous protocol.)

The receiving computer looks at the block data and also computes the checksum. If the checksum of the received block is the same as that transmitted, it is assumed that the block was received correctly. If the block was received correctly, the receiving computer sends an acknowledge (ACK) character—another ASCII code—back to the transmitter. Once ACK is received by the transmitter, the next block of data is sent. When a block has been received incorrectly because of interference or equipment problems, the checksums will not match and the receiving computer will send a NAK code back to the transmitter. A transmitter that has received NAK automatically responds by sending the block again. This process is repeated until each block, and thus the entire message, has been sent without errors.
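The frame layout and the checksum test above can be sketched in a few lines. The function names are illustrative; the SOH/ACK/NAK values are the actual ASCII control codes.

```python
SOH, ACK, NAK = 0x01, 0x06, 0x15   # ASCII control codes used by Xmodem

def build_frame(block_num: int, payload: bytes) -> bytes:
    """One Xmodem frame: SOH, block number, its one's complement,
    the 128 data bytes, then a 1-byte checksum."""
    assert len(payload) == 128
    header = bytes([SOH, block_num & 0xFF, ~block_num & 0xFF])
    checksum = sum(payload) & 0xFF     # binary sum of the block, mod 256
    return header + payload + bytes([checksum])

def frame_ok(frame: bytes) -> bool:
    """Receiver check: the block number and its complement must agree,
    and the recomputed checksum must match the one received."""
    num_ok = frame[1] == (~frame[2] & 0xFF)
    sum_ok = (sum(frame[3:-1]) & 0xFF) == frame[-1]
    return num_ok and sum_ok
```

A frame that fails `frame_ok` would be answered with NAK, triggering retransmission of that block.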

When the entire message has been sent, the transmitting computer sends an end-of-transmission (EOT) character. The receiving computer replies with an ACK character, terminating the communication.

Synchronous Protocols

Protocols used for synchronous data communication are more complex than asynchronous protocols. However, like the asynchronous Xmodem and Kermit systems, they use various control characters for signaling purposes at the beginning and ending of the block of data to be transmitted.

Bisync Protocol

An early synchronous protocol is IBM’s Bisync protocol. It usually begins with the transmission of two or more ASCII sync (SYN) characters (see Fig. 11-60). These characters signal the beginning of the transmission and are also used to initialize the clock timing circuits in the receiving modem. This ensures proper synchronization of the data transmitted a bit at a time.


After the SYN characters, a start-of-header (SOH) character is transmitted. The header is a group of characters that typically identifies the type of message to be sent, the number of characters in a block (usually up to 256), and a priority code or some specific routing destination. The end of the header is signaled by a start-of-text (STX) character. At this point, the desired message is transmitted, 1 byte at a time. No start and stop bits are sent. The 7- or 8-bit words are simply strung together one after another, and the receiver must sort them into individual binary words that are handled on a parallel basis farther along in the receiving circuitry of the computer.

At the end of a block, an end-of-transmission-block (ETB) character is transmitted. If the block is the last one in a complete message, an end-of-text (ETX) character is transmitted. An end-of-transmission (EOT) character, which signals the end of the transmission, is followed by an error detection code, usually a 1- or 2-byte BCC.

SDLC Protocol

One of the most flexible and widely used synchronous protocols is the synchronous data link control (SDLC) protocol (see Fig. 11-61). SDLC is used in networks that are interconnections of multiple computers. All frames begin and end with a flag byte with the code 01111110, or hex 7E, which is recognized by the receiving computer. A sequence of binary 1s starts the clock synchronization process. Next comes an address byte that specifies a specific receiving station. Each station on the network is assigned an address number. The address hex FF indicates that the message to follow is to be sent to all stations on the network. A control byte following the address allows the programmer or user to specify how the data will be sent and how it will be dealt with at the receiving end. It allows the user to specify the number of frames, how the data will be received, and so on.

The data block (all codes are EBCDIC, not ASCII) comes next. It can be any length, but 256 bytes is typical. The data is followed by a frame-check sequence (FCS), a 16-bit CRC. A flag ends the frame.
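The field order of an SDLC frame can be sketched as below. The builder function is hypothetical, and a real SDLC link layer also applies zero-bit stuffing (so the flag pattern never appears inside the frame) and computes the 16-bit CRC itself; here the FCS is simply passed in.

```python
FLAG = 0x7E   # 01111110, the flag byte that opens and closes each frame

def build_sdlc_frame(address: int, control: int,
                     data: bytes, fcs: int) -> bytes:
    """Lay out the SDLC field sequence (bit stuffing omitted):
    flag | address | control | data | 16-bit FCS | flag."""
    return (bytes([FLAG, address & 0xFF, control & 0xFF])
            + data
            + fcs.to_bytes(2, "big")
            + bytes([FLAG]))
```

For a broadcast, the address field would be 0xFF, as described above.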

A variation of the SDLC system, which permits interface between a larger number of different software and hardware configurations, is called high-level data link control (HDLC). Its format is similar to that shown in Fig. 11-61. It may also use ASCII data and often has a 32-bit CRC/FCS.

The Open Systems Interconnection Model (OSI Model)

As you have seen, there are many types and variations of protocols. If there is to be widespread compatibility between different systems, then some industrywide standardization is necessary. The ability of one hardware-software configuration to communicate with another, different system is known as interoperability. Only if all manufacturers and users adopt the same standards can true interoperability be achieved. One organization that has attempted to standardize data communication procedures is the International Organization for Standardization. It has established a framework, or hierarchy, that defines how data can be communicated. This hierarchy, known as the open systems interconnection (OSI) model, is designed to establish general interoperability guidelines for developers of communication systems and protocols. Even though the OSI model is not implemented by all manufacturers, the functions of each level in the OSI model outline must be accomplished by each protocol. In addition, the OSI model serves as a common reference for all protocols.


The OSI hierarchy is made up of seven levels, or layers (see Fig. 11-62). Each layer is defined by software (or, in one case, hardware) and is clearly distinct from the other layers. These layers are not really protocols themselves, but they provide a standardized way to define and partition the work a protocol does. Each layer is designed to handle messages that it receives from a lower layer or an upper layer, and each layer sends messages to the layer above or below it according to specific guidelines. Different protocols implement each layer, and the tasks they perform are described by reference to the OSI model as layer 1 tasks, layer 2 tasks, and so on.


As shown in the figure, the highest level is the application layer, which interfaces with the user’s application program. The lowest level is the physical layer, where the electronic hardware, interfaces, and transmission media are defined.

Layer 1: Physical

The physical connections and electrical standards for the communication system are defined here. This layer specifies interface characteristics, such as binary voltage levels, encoding methods, data transfer rates, and the like.

Layer 2: Data link

This layer defines the framing information for the block of data. It identifies any error detection and correction methods as well as any synchronizing and control codes relevant to communication. The data link layer includes basic protocols, such as HDLC and SDLC.

Layer 3: Network

This layer determines network configuration and the route the transmission can take. In some systems, there may be several paths for the data to traverse. The network layer determines the specific data routing and switching methods that can occur in the system, e.g., selection of a dial-up line, a private leased line, or some other dedicated path.

Layer 4: Transport

Included in this layer are multiplexing, if any; error recovery; partitioning of data into smaller units so that it can be handled more efficiently; and addressing and flow control operations.

Layer 5: Session

This layer handles such things as management and synchronization of the data transmission. It typically includes network log-on and log-off procedures, as well as user authorization, and determines the availability of the network for processing and storing the data to be transmitted.

Layer 6: Presentation

This layer deals with the form and syntax of the message. It defines data formatting, encoding and decoding, encryption and decryption, synchronization, and other characteristics. It defines any code translations required and sets the parameters for any graphics operations.

Layer 7: Application

This layer is the overall general manager of the network or the communication process. Its primary function is to format and transfer files between the communication system and the user’s application software.

The basic process is that information is added or removed as data is transmitted from one layer to another (see Fig. 11-63). Assume, e.g., that the applications program you are using contains some data that you wish to send to another computer. That data will be transmitted in the form of some kind of serial packet or frame, or a sequence of packets.

The applications layer attaches the packet to some kind of header or preamble before sending it to the next level. At the presentation level, more headers and other information are added. At each of the lower levels, headers and related information are appended until the data message is almost completely encased in a much larger packet. Finally, at the physical level, the data is transmitted to the other system. As Fig. 11-63 shows, the message that is actually sent may contain more header information than actual data.

At the receiving end, the header information gets stripped off at various levels as the data is transferred to successive levels. The headers tell the data where to go and what to do next. The data comes in at the physical level and goes up through the various layers until it gets to the applications layer, where it is finally used.

Note that it is not necessary to use all seven layers. Many modern data communication applications need only the first two or three layers to fully define a given data exchange. The primary benefit of the OSI standard is that if it is incorporated into data communication equipment and software, compatibility between systems and equipment is more likely to be achieved. As more and more computers and networks are interconnected, e.g., on the Internet, true interoperability becomes more and more important.


Error Detection and Correction

When high-speed binary data is transmitted over a communication link, whether it is a cable or radio, errors will occur. These errors are changes in the bit pattern caused by interference, noise, or equipment malfunctions. Such errors will cause incorrect data to be received. To ensure reliable communication, schemes have been developed to detect and correct bit errors.

The number of bit errors that occur for a given number of bits transmitted is referred to as the bit error rate (BER). The bit error rate is similar to a probability in that it is the ratio of the number of bit errors to the total number of bits transmitted. If there is 1 error for 100,000 bits transmitted, the BER is 1/100,000 = 10^-5. The bit error rate depends on the equipment, the environment, and other considerations. The BER is an average over a very large number of bits. The BER for a given transmission depends on specific conditions. When high speeds of data transmission are used in a noisy environment, bit errors are inevitable. However, if the S/N ratio is favorable, the number of errors will be extremely small. The main objective in error detection and correction is to maximize the probability of 100 percent accuracy.

Example 11.3

Data is transmitted in 512-byte blocks or packets. Eight sequential packets are transmitted. The system BER is 2/10,000, or 2 × 10^-4. On average, how many errors can be expected in this transmission?
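The expected error count is simply the total number of bits transmitted multiplied by the BER. A quick check of the arithmetic:

```python
# Example 11.3: expected errors = total bits × BER
bytes_per_packet, packets = 512, 8
ber = 2e-4                                     # 2 errors per 10,000 bits

total_bits = bytes_per_packet * 8 * packets    # 32,768 bits in all
expected_errors = total_bits * ber             # about 6.55 errors
print(total_bits, round(expected_errors, 2))   # 32768 6.55
```

So on average about 6 or 7 bit errors can be expected over the eight packets.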


The process of error detection and correction involves adding extra bits to the data characters to be transmitted. This process is generally referred to as channel encoding. The data to be transmitted is processed in a way that creates the extra bits and adds them to the original data. At the receiver, these extra bits help in identifying any errors that occur in transmission caused by noise or other channel effects.

A key point about channel encoding is that it takes more time to transmit the data because of the extra bits. For example, to transmit 1 byte of data, the encoding process may add 3 extra bits for a total of 11 bits. These extra bits are called overhead in that they extend the time of transmission. If the bit time is 100 ns, then it takes 800 ns to send the data bits but 1100 ns to send the encoded data. The extra overhead bits add 37.5 percent more time to the transmission. As a result, to maintain the desired data rate, the overall clock speed must be increased or the lower data rate must be accepted. While the extra bits decrease the overall efficiency of transmission, remember that the benefit is more reliable data transmission because errors can be detected and/or corrected. Speed is traded off for higher-quality data transmission.

Channel encoding methods fall into two categories: error detection codes and error correction codes. Error detection codes just detect the errors but do not take any corrective action. They simply let the system know that an error has occurred. Typically, these codes simply initiate retransmission until the data is received correctly. The other form of channel coding is error correction or forward error correction (FEC). These coding schemes eliminate the time-wasting retransmission and actually initiate self-correcting action.

Error Detection

Many different methods have been used to ensure reliable error detection, including redundancy, special encoding schemes, parity checks, block checks, and cyclical redundancy checks.


Redundancy

The simplest way to ensure error-free transmission is to send each character or each message multiple times until it is properly received. This is known as redundancy. For example, a system may specify that each character is transmitted twice in succession. Entire blocks or messages can be treated in the same way. These retransmission techniques are referred to as automatic repeat request (ARQ).


Encoding Methods

Another approach is to use an encoding scheme like the RZ-AMI described earlier, whereby successive binary 1 bits in the bitstream are transmitted with alternating polarity. If an error occurs somewhere in the bitstream, then 2 or more successive binary 1 bits are likely to be transmitted with the same polarity. If the receiving circuits are set to recognize this characteristic, single-bit errors can be detected.


Turbo codes and trellis codes are other examples of the use of special coding to detect errors. These codes develop unique bit patterns from the data. Since many bit patterns are invalid in trellis and turbo codes, a bit error will produce one of the invalid codes, signaling an error that can then be corrected. These codes are covered later in this section.

Parity

One of the most widely used systems of error detection is known as parity, in which each character transmitted contains one additional bit, known as a parity bit. The bit may be a binary 0 or binary 1, depending upon the number of 1s and 0s in the character itself. Two systems of parity are normally used, odd and even. Odd parity means that the total number of binary 1 bits in the character, including the parity bit, is odd. Even parity means that the number of binary 1 bits in the character, including the parity bit, is even. Examples of odd and even parity are indicated below. The seven left-hand bits are the ASCII character, and the right-hand bit is the parity bit.

Odd parity: 10110011
00101001
Even parity: 10110010
00101000

The parity of each character to be transmitted is generated by a parity generator circuit. The parity generator is made up of several levels of exclusive OR (X-OR) circuits, as shown in Fig. 11-50. Normally the parity generator circuit monitors the shift register in a UART in the computer or modem. Just before transmitting the data in the register by shifting it out, the parity generator circuit generates the correct parity value, inserting it as the last bit in the character. In an asynchronous system, the start bit comes first, followed by the character bits, the parity bit, and finally one or more stop bits (see Fig. 11-51).

At the receiving modem or computer, the serial data word is transferred into a shift register in a UART. A parity generator in the receiving UART produces the parity on the received character. It is then compared to the received parity bit in an XOR circuit, as shown in Fig. 11-52. If the internally generated bit matches the transmitted and received parity bit, it is assumed that the character was transmitted correctly. The output of the XOR will be 0, indicating no error. If the received bit does not match the parity bit generated from the received data word, the XOR output will be 1, indicating an error. The system signals the detection of a parity error to the computer. The action taken will depend on the result desired: The character may be retransmitted, an entire block of data may be retransmitted, or the error may simply be ignored.
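The generator and the receiver's comparison can be sketched in a few lines. The function names are illustrative; real UARTs do this with XOR gates in hardware.

```python
def parity_bit(char: int, odd: bool = True) -> int:
    """Return the parity bit for a 7-bit character, as the XOR tree
    in a parity generator would produce it."""
    ones = bin(char & 0x7F).count("1")
    if odd:
        return 1 - ones % 2    # make the total count of 1s odd
    return ones % 2            # make the total count of 1s even

def parity_error(word: int, odd: bool = True) -> bool:
    """Receiver check on an 8-bit word (7 character bits followed by
    the parity bit): regenerate parity from the data bits and XOR it
    with the received parity bit.  A result of 1 flags an error."""
    return bool(parity_bit(word >> 1, odd) ^ (word & 1))
```

Applied to the odd-parity example above, `parity_bit(0b1011001)` gives 1, reproducing the transmitted word 10110011; flipping any single bit of that word makes `parity_error` return True.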

The individual-character parity method of error detection is sometimes referred to as the vertical redundancy check (VRC). To display characters transmitted in a data communication system, the bits are written vertically (see Fig. 11-53). The bit at the bottom is the parity, or VRC, bit for each vertical word. Horizontal redundancy checks are discussed later.

Parity checking is useful only for detecting single-bit errors. If two or more bit errors occur, the parity circuit may not detect them; in particular, if an even number of bits changes, the parity circuit will not give a correct indication.

Cyclical Redundancy Check

The cyclical redundancy check (CRC) is a mathematical technique used in synchronous data transmission that effectively catches 99.9 percent or more of transmission errors. The mathematical process implemented by CRC is essentially a division. The entire string of bits in a block of data is considered to be one giant binary number that is divided by some preselected constant. CRC is expressed by the equation

M(x)/G(x) = Q(x) + R(x)

where M(x) is the binary block of data, called the message function, and G(x) is the generating function. The generating function is a special code that is divided into the binary message string. The outcome of the division is a quotient function Q(x) and a remainder function R(x). The quotient resulting from the division is ignored; the remainder is known as the CRC character and is transmitted along with the data.

For the convenience of calculation, the message and generating functions are usually expressed as an algebraic polynomial. For example, assume an 8-bit generating function of 10000101. The bits are numbered such that the LSB is 0 and the MSB is 7.

7 6 5 4 3 2 1 0
1 0 0 0 0 1 0 1

The polynomial is derived by expressing each bit position as a power of x, where the power is the number of the bit position. Only those terms in which binary 1s appear in the generating function are included in the polynomial. The polynomial resulting from the above number is

G(x) = x^7 + x^2 + x^0 or G(x) = x^7 + x^2 + 1

The CRC mathematical process can be programmed by using a computer’s instruction set. It can also be computed by a special CRC hardware circuit consisting of several shift registers into which XOR gates have been inserted at specific points (see Fig. 11-54). The data to be checked is fed into the registers serially, 1 bit at a time; no output is taken during the process. When all the data has been shifted in, the register contents are the remainder of the division, i.e., the desired CRC character. Since a total of 16 flip-flops are used in the shift register, the CRC is 16 bits long and can be transmitted as two sequential 8-bit bytes. The CRC is computed as the data is transmitted, and the resulting CRC is appended to the end of the block. Because CRC is used in synchronous data transmission, no start and stop bits are involved.
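A bit-serial software equivalent of such a circuit can be sketched as follows. The polynomial 0x8005 (x^16 + x^15 + x^2 + 1) is one common CRC-16 generator; actual protocols differ in polynomial, initial value, and bit ordering, so treat this as an illustration of the division process rather than any one standard.

```python
def crc16(data: bytes, poly: int = 0x8005) -> int:
    """Bit-serial CRC-16, mirroring a shift register with XOR taps:
    each message bit is XORed with the bit falling out of the high
    end of the register, and a 1 feeds the generator polynomial's
    taps back in.  The final register contents are the remainder."""
    crc = 0
    for byte in data:
        for i in range(7, -1, -1):              # MSB first, as sent serially
            feedback = ((crc >> 15) & 1) ^ ((byte >> i) & 1)
            crc = (crc << 1) & 0xFFFF
            if feedback:
                crc ^= poly
    return crc
```

A useful property of this form of the division: if the transmitter appends the 16-bit remainder to the message, the receiver's CRC over the whole stream comes out zero, which is one common way the check is applied.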


At the receiving end, the CRC is computed by the receiving computer and compared to the received CRC characters. If the two are alike, the message has been correctly received. Any difference indicates an error, which triggers retransmission or some other form of corrective action. CRC is probably the most widely used error detection scheme in synchronous systems. Both 16- and 32-bit CRCs are used. Parity methods are used primarily in asynchronous systems.

Error Correction

As stated previously, the easiest way to correct transmission errors—to retransmit any character or block of data that has an error in it—is time-consuming and wasteful. A number of efficient error correction schemes have been devised to complement the parity and BCC methods described above. The process of detecting and correcting errors at the receiver so that retransmission is not necessary is called forward error correction (FEC). There are two basic types of FEC: block codes and convolutional codes.

Block-Check Character

The block-check character (BCC) is also known as a horizontal or longitudinal redundancy check (LRC). It is the process of logically adding, by exclusive-ORing, all the characters in a specific block of transmitted data. Refer to Fig. 11-53. To add the characters, the top bit of the first vertical word is exclusive-ORed with the top bit of the second word. The result of this operation is exclusive-ORed with the top bit of the third word, and so on until all the bits in a particular horizontal row have been added. There are no carries to the next bit position. The final bit value for each horizontal row then becomes one bit in a character known as the block-check character (BCC), or the block-check sequence (BCS). Each row of bits is done in the same way to produce the BCC. All the characters transmitted in the text, as well as any control or other characters, are included as part of the BCC. Exclusive-ORing all bits of all characters is the same as binary addition without a carry of the codes.
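The XOR accumulation described above amounts to a one-line fold over the block. The function name is illustrative:

```python
def block_check(chars) -> int:
    """XOR every character in the block together — binary addition
    with no carries — to form the block-check character (BCC)."""
    bcc = 0
    for c in chars:
        bcc ^= c
    return bcc
```

Because XOR is its own inverse, sending the BCC along with the block makes the receiver's XOR over everything come out zero when no errors occurred.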

The BCC is computed by circuits in the computer or modem as the data is transmitted, and its length is usually limited to 8 bits, so that carries from one-bit position to the next are ignored. It is appended to the end of a series of bytes that make up the message to be transmitted. At the receiving end, the computer computes its own version of the BCC on the received data and compares it to the received BCC. Again, the two should be the same.

When both the parity on each character and the BCC are known, the exact location of a faulty bit can be determined. The individual character parity bits and the BCC bits provide a form of coordinate system that allows a particular bit error in one character to be identified. Once it is identified, the bit is simply complemented to correct it. The VRC identifies the character containing the bit error, and the LRC identifies the bit that contains the error.

Assume, e.g., that a bit error occurs in the fourth vertical character from the left in Fig. 11-53. The fourth bit down from the top should be 0, but because of noise, it is received as a 1. This causes a parity error. With odd parity and a 1 in the fourth bit, the parity bit should be 0, but it is a 1.

Next, the logical sum of the bits in the fourth horizontal row from the top will be incorrect because of the bit error. Instead of 0, it will be 1. All the other bits in the BCC will be correct. It is now possible to pinpoint the location of the error because both the vertical column where the parity error occurred and the horizontal row where the BCC error occurred are known. The error can be corrected by simply complementing (inverting) the bit from 1 to 0. This operation can be programmed in software or implemented in hardware. One important characteristic of BCC is that multiple errors may not be detected. Therefore, more sophisticated techniques are needed.

Hamming Code

A popular FEC scheme is the Hamming code. Hamming was a researcher at Bell Labs who discovered that if extra bits were added to a transmitted word, these extra bits could be processed in such a way that bit errors could be identified and corrected. These extra bits, like several types of parity bits, are known as Hamming bits, and together they form a Hamming code. To determine exactly where the error is, a sufficient number of bits must be added. The minimum number of Hamming bits is computed with the expression

2^n ≥ m + n + 1

where m = number of bits in data word
n = number of bits in Hamming code

Assume, e.g., an 8-bit character word and some smaller number of Hamming bits (say, 2). Then


2^n ≥ m + n + 1
2^2 ≥ 8 + 2 + 1
4 ≥ 11 (false)

Two Hamming bits are insufficient, and so are three. When n = 4,

2^4 ≥ 8 + 4 + 1
16 ≥ 13 (true)

Thus 4 Hamming bits must be transmitted along with the 8-bit character. Each character requires 8 + 4 = 12 bits. These Hamming bits can be placed anywhere within the data string. Assume the placement shown below, where the data bits are shown as a 0 or 1 and the Hamming bits are designated with an H. The data word is 01101010.
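The search for the minimum n can be written directly from the inequality. The function name is illustrative:

```python
def min_hamming_bits(m: int) -> int:
    """Smallest n satisfying 2^n >= m + n + 1 for m data bits."""
    n = 1
    while 2 ** n < m + n + 1:
        n += 1
    return n
```

For the 8-bit character above this returns 4; for a 4-bit word it returns 3, which is the familiar Hamming (7,4) arrangement.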

One way to look at Hamming codes is simply as a more sophisticated parity system, where the Hamming bits are parity bits derived from some of but not all the data bits. Each Hamming bit is derived from different groups of the data bits. (Recall that parity bits are derived from the data by XOR circuits.) One technique used to determine the Hamming bits is discussed below.

At the transmitter, a circuit is used to determine the Hamming bits. This is done by first expressing the bit positions in the data word containing binary 1s as a 4-bit binary number (n = 4 is the number of Hamming bits). For example, the first binary 1 data bit appears at position 2, so its position code is just the binary code for 2, or 0010. The other data bit positions with a binary 1 are 5 = 0101, 8 = 1000, and 10 = 1010. Next, the transmitter circuitry logically adds (XORs) these codes.

Position code 2 0010
Position code 5 0101
XOR sum 0111
Position code 8 1000
XOR sum 1111
Position code 10 1010
XOR sum 0101

This final sum gives the Hamming bits from left to right: the bit at position 12 is 0, the bit at position 9 is 1, the bit at position 6 is 0, and the bit at position 3 is 1. These bits are inserted into their proper positions. The complete 12-bit transmitted word is

The Hamming bits are shown in boldface type. Now assume that an error occurs in bit position 10. The binary 1 is received as a binary 0. The received word is

The receiver recognizes the Hamming bits and treats them as a code word, in this case 0101. The circuitry then adds (XORs) this code with the binary number of each bit position in the received word that contains a binary 1: positions 2, 5, and 8.

Hamming code 0101
Position code 2 0010
XOR sum 0111
Position code 5 0101
XOR sum 0010
Position code 8 1000
XOR sum 1010

This final sum is a code that identifies the bit position of the error, in this case bit 10 (1010). To correct the bit, it is simply complemented from 0 to 1. If there are no bits in error, then the XOR sum at the receiver will be zero.
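The XOR bookkeeping in these tables can be reproduced with a short helper. The function is illustrative; the positions follow the 12-bit example above.

```python
from functools import reduce

def hamming_xor(one_positions, start=0):
    """XOR together the position numbers of every data bit that is 1,
    starting from `start`: 0 at the transmitter (yielding the Hamming
    code), or the received Hamming code at the receiver (yielding the
    error position, or 0 if no error)."""
    return reduce(lambda a, b: a ^ b, one_positions, start)
```

At the transmitter, positions 2, 5, 8, and 10 XOR to 0101; at the receiver, with bit 10 corrupted to 0, XORing the surviving positions with 0101 points straight at position 10.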

Note that the Hamming code method does not work if an error occurs in one of the Hamming bits itself.

For the Hamming code method of error detection and correction to work when 2 or more bit errors occur, more Hamming bits must be added. This increases overall transmission time, storage requirements at the transmitter and receiver, and the complexity of the circuitry. The benefit, of course, is that errors are reliably detected and corrected at the receiving end. The transmitter never has to resend the data, which may in fact be impossible in some applications. Not all applications require such rigid data correction practices.

Reed-Solomon Code

One of the most widely used forward error correction codes is the Reed-Solomon (RS) code. Like Hamming codes, it adds extra parity bits to the block of data being transmitted. It uses a complex mathematical algorithm, beyond the scope of this book, to determine the codes. The beauty of the RS code is that it permits multiple errors to be detected and corrected. For example, a popular form of the RS code is designated RS(255,223). A block of binary data contains a total of 255 bytes; 223 bytes are the actual data and 32 bytes are parity bits computed by the RS algorithm. With this arrangement, the RS code can detect and correct errors in up to 16 corrupted bytes. An RS encoder is applied to the data to be transmitted. At the receiver, the recovered data is sent to an RS decoder that corrects any errors. The encoders and decoders can be implemented with software, but hardware ICs are also available. Some common applications of RS FEC are in music and data compact disks (CDs), cell phones, digital TV, satellite communication, and xDSL and cable TV modems.
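The RS(255,223) numbers quoted above follow a simple relationship: an RS(n, k) code adds n - k parity symbols per block and can correct up to (n - k)/2 corrupted symbols. A quick check in Python:

```python
# For an RS(n, k) code, n - k parity symbols are added per block, and
# up to (n - k) // 2 corrupted symbols per block can be corrected.
n, k = 255, 223          # RS(255,223): 255 total bytes, 223 data bytes
parity = n - k           # 32 parity bytes
t = parity // 2          # up to 16 correctable byte errors
overhead = parity / n    # fraction of each block spent on parity
print(parity, t, round(overhead * 100, 1))  # -> 32 16 12.5
```

So about 12.5 percent of each transmitted block is redundancy, the price paid for correcting up to 16 corrupted bytes without retransmission.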

Interleaving

Interleaving is a method used in wireless systems to reduce the effects of burst errors. Most errors in wireless transmission are caused by bursts of noise that destroy a single bit or multiple sequential bits. By interleaving the bits before transmission, a burst is spread across several code words, so each word suffers at most a few bit errors that the error-correcting code can fix.

One common way of doing this is to first use an error-correcting scheme such as the Hamming code to encode the data. The data and the Hamming bits are then stored in consecutive memory locations. For example, assume four 8-bit words consisting of the data and the Hamming bits. If transmitted sequentially, the data words would look like this.

12345678 12345678 12345678 12345678

Then instead of transmitting the encoded words one at a time, all the first bits of each word are transmitted, followed by all the second bits, followed by all the third bits, and so on. The result would look like this.

1111 2222 3333 4444 5555 6666 7777 8888

Now if a burst error occurs, the result may look like this.

1111 2222 3333 4444 5555 4218 7777 8888

At the receiver, the de-interleaving circuits would attempt to recreate the original data, producing


12345478 12345278 12345178 12345878


Now with only 1 bit in each word in error, a Hamming decoder would detect and correct the bit.
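The interleave/de-interleave process above can be sketched as follows. The data words are illustrative placeholders, and the burst is simulated by inverting one 4-bit group:

```python
# Sketch of the block interleaver described above: four 8-bit words are
# transmitted column by column, so a noise burst that wipes out one
# 4-bit group lands in four different words after de-interleaving.
words = ["10110010", "01101100", "11100101", "00011011"]  # illustrative data

# Interleave: send all the first bits, then all the second bits, etc.
interleaved = ["".join(w[i] for w in words) for i in range(8)]

# Burst error destroys one 4-bit group (the 6th bit of every word).
corrupted = interleaved[:]
corrupted[5] = "".join("1" if b == "0" else "0" for b in corrupted[5])

# De-interleave back into four 8-bit words.
recovered = ["".join(corrupted[i][w] for i in range(8)) for w in range(4)]

# Each recovered word now differs from the original in exactly one bit,
# which a single-error-correcting Hamming decoder can fix.
errors = [sum(a != b for a, b in zip(w, r)) for w, r in zip(words, recovered)]
print(errors)  # -> [1, 1, 1, 1]
```

Without interleaving, the same burst would have put four consecutive errors into a single word, which a single-error-correcting code could not repair.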

Convolutional Codes

Convolutional encoding creates additional bits from the data as do Hamming and Reed Solomon codes, but the encoded output is a function of not only the current data bits but also previously occurring data bits. Like other forms of FEC, the encoding process adds extra bits that are derived from the data itself. This is a form of redundancy that leads to greater reliability in the transmission of the data. Even if errors occur, the redundant bits allow the errors to be corrected.

Convolutional codes are beyond the scope of this text, but essentially what they do is pass the data to be transmitted through a special shift register like that shown in Fig. 11-55. As the serial data is shifted through the shift register flip-flops, some of the flip-flop outputs are XORed together to form two outputs. These two outputs are the convolutional code, and this is what is transmitted. There are numerous variations of this scheme, but note that in every case the original data itself is not transmitted. Instead, two separate streams of continuously encoded data are sent. Since each output code is different, the original data can more likely be recovered at the receiver by an inverse process. One of the more popular convolutional codes is the trellis code, which is widely used in dial-up computer modems. The Viterbi code is another that is widely used in high-speed data access via satellites.
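As a rough sketch of the idea, the following rate-1/2 encoder shifts each data bit into a register and XORs selected flip-flop outputs to form the two output streams. The generator taps used here (7 and 5 octal, a common textbook choice) are an assumption, since Fig. 11-55 is not reproduced and its tap connections may differ:

```python
# Sketch of a rate-1/2 convolutional encoder. The taps g1 = 111 and
# g2 = 101 (7 and 5 octal) are an assumed, commonly used choice; the
# actual connections in Fig. 11-55 may be different.
def convolve(bits, g1=0b111, g2=0b101, constraint=3):
    """Shift each data bit in; emit two XOR-tap output bits per input bit."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << constraint) - 1)
        # Each output stream is the XOR (parity) of selected flip-flops.
        out.append((bin(state & g1).count("1") % 2,
                    bin(state & g2).count("1") % 2))
    return out

# Two encoded bits are sent per data bit; the data itself is never sent.
encoded = convolve([1, 0, 1, 1])
print(encoded)  # -> [(1, 1), (1, 0), (0, 0), (0, 1)]
```

Because each output bit depends on the current input and the two previous inputs held in the register, the encoder output is a function of past data as well as present data, which is the defining property of a convolutional code.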

Another type of convolutional code uses feedback. These are called recursive codes because the output of the shift register is combined with the input code to produce the output streams. Fig. 11-56 is an example. Recursion means taking the output from a process and applying it back to the input. This procedure has been further developed to create a new class of convolutional codes called turbo codes. The turbo code is a combination of two concurrent recursive coding processes where one is derived directly from the data and the other is derived from the data that has been interleaved first. See Fig. 11-57. The result is a far more robust FEC that catches virtually all errors. Most forms of wireless data transmission today use some form of convolutional coding to ensure the robustness of the transmission path.

Low-Density Parity Check

Low-density parity check (LDPC) is an error-correcting code scheme that was invented by R. G. Gallager at MIT in the 1960s but was not widely implemented because of its complex parallel computational process; it was impractical to implement except on large-scale computers. However, LDPC was later rediscovered and today is readily implemented with logic in an FPGA or a fast processor. With LDPC it is possible to create codes that make the probability of error arbitrarily small while staying within the bounds of Shannon's limit.

As with many other FEC codes, the encoding and decoding processes are beyond the scope of this book. Just be aware that LDPC is replacing other popular FEC codes in a variety of communications systems. These include Wi-Fi in the IEEE 802.11n and 802.11ac standards, 10-gigabit Ethernet over twisted-pair cable, G.hn power line communications standard, the European digital TV standard DVB-T2, WiMAX 802.16e, and several satellite systems.

Coding Gain

Forward error-correction codes were developed to improve the BER in noisy communications channels. By passing a block of data to be transmitted through a coding process, additional bits are added that permit errors to occur but provide a means of identifying and correcting them, thereby greatly improving the probability of an error-free transmission. Such coding has the same effect as improving the signal-to-noise ratio (SNR) of the transmission channel, as if the transmit power were increased. This effect is called coding gain and is usually expressed in dB. One formal definition of coding gain is the power increase needed to maintain the same BER as that achieved without coding. Alternatively, coding gain is the gain in SNR for a given BER using a specific coding method before modulation. Coding gains of several dB are possible with FEC.
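As a simple illustration of that definition, coding gain is the difference between the SNR (or Eb/N0) required for a given BER without coding and the SNR required for the same BER with coding. The dB values below are assumed, illustrative figures, not measurements of any particular code:

```python
# Coding gain = SNR required without coding minus SNR required with
# coding, at the same BER. These dB figures are illustrative assumptions.
snr_uncoded_db = 9.6   # assumed Eb/N0 needed for the target BER, uncoded
snr_coded_db = 6.0     # assumed Eb/N0 needed for the same BER with FEC
coding_gain_db = snr_uncoded_db - snr_coded_db
print(f"Coding gain: {coding_gain_db:.1f} dB")  # -> Coding gain: 3.6 dB
```

In other words, the code lets the link operate at the same BER with 3.6 dB less signal power, exactly as if the transmit power had been raised by that amount on an uncoded link.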


Reference: Electronic Communication by Louis Frenzel
