Development of an integrated access network based on Ethernet and Wi-Fi technologies. Ethernet technology: networking using 1000BASE-T Ethernet

Ethernet technology is standardized in the IEEE 802.3 document, which describes a single MAC-layer frame format. In practice, however, several link-layer frame formats are used in Ethernet networks; each frame header is a combination of MAC and LLC sublayer headers. The formats are:

  • Ethernet DIX / Ethernet II appeared in 1980 as the result of joint work by three companies, Xerox, Intel and Digital, which submitted it to the IEEE as the basis for the 802.3 international standard;
  • The 802.3 committee adopted this format with slight revisions; the result is known as 802.3 / LLC, 802.3 / 802.2 or Novell 802.2;
  • Raw 802.3, or Novell 802.3, was devised by Novell to speed up its protocol stack in Ethernet networks;
  • Ethernet SNAP is the result of further standardization by the 802.2 committee; it brings the earlier formats to a common scheme and leaves room for future additions of fields.

Today, network hardware and software can handle all frame formats, and frame recognition is automatic, which removes compatibility problems. Frame formats are shown in Figure 1.

Figure 1

802.3 / LLC frame

The header of this frame combines the header fields of the IEEE 802.3 and 802.2 frames. The 802.3 standard defines the following fields:

  • Preamble - a field of seven synchronization bytes, 10101010. In Manchester coding this pattern becomes a periodic 5 MHz signal in the physical medium.
  • Start frame delimiter - one byte, 10101011. This field indicates that the next byte is the first byte of the frame header.
  • Destination address - this field can be 6 or 2 bytes long. Typically a 6-byte MAC address is used.
  • Source address - a field containing the 6- or 2-byte MAC address of the sending node. The first bit of the address is always 0.
  • Length - a 2-byte field containing the length of the data field in the frame.
  • Data field - can hold from 0 to 1500 bytes. If the data occupies fewer than 46 bytes, the padding field extends it to 46 bytes.
  • Padding field - fills out the data field when its length is less than 46 bytes. It is required for the collision detection mechanism to work correctly.
  • Frame check sequence field - contains a 4-byte checksum computed with the CRC-32 algorithm.

This frame is a MAC-sublayer frame; its data field carries an LLC-sublayer frame with the opening and closing flags removed.

Raw 802.3 / Novell 802.3 frame

This frame was long used as the carrier of the network-layer protocol in the NetWare OS, where the only network-layer protocol was IPX. There was therefore no need to identify an upper-layer protocol, and the IPX packet is placed directly in the data field of the MAC frame, without an LLC header.

Ethernet DIX / Ethernet II frame

This frame has the same structure as the Raw 802.3 frame, but its 2-byte Length field is used as a protocol Type field, indicating the upper-layer protocol whose packet is nested in the data field. The two formats are distinguished by the value of this field: 1500 or less means a Length field, a larger value means a Type field.

Ethernet SNAP frame

This frame appeared as a result of eliminating inconsistencies in the encoding of protocol types. The SNAP header is also used when encapsulating IP packets in Token Ring, FDDI and 100VG-AnyLAN networks. When transmitting IP packets over Ethernet, however, the IP protocol uses Ethernet DIX frames.

IPX protocol

This protocol can use all four Ethernet frame types. It determines the type by checking for the presence or absence of the LLC header and by examining the DSAP/SSAP fields: if their value is 0xAA, the frame is Ethernet SNAP; otherwise it is 802.3/LLC.
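The recognition rules above can be summarized in a short sketch. This is illustrative code, not a production parser: it assumes 6-byte MAC addresses and no 802.1Q VLAN tag.

```python
def classify_ethernet_frame(frame: bytes) -> str:
    """Classify a raw Ethernet frame into one of the four formats.

    Sketch of the recognition logic described in the text; assumes
    6-byte MAC addresses and no 802.1Q VLAN tag.
    """
    type_or_len = int.from_bytes(frame[12:14], "big")
    if type_or_len >= 0x0600:            # 1536 and above: a Type field, so DIX
        return "Ethernet DIX / Ethernet II"
    payload = frame[14:]
    if payload[:2] == b"\xff\xff":       # IPX places 0xFFFF first in Raw 802.3
        return "Raw 802.3 / Novell 802.3"
    if payload[:1] == b"\xaa":           # DSAP 0xAA marks a SNAP header
        return "Ethernet SNAP"
    return "802.3 / LLC"

# A frame whose type/length field holds 0x0800 (IPv4) is a DIX frame:
dix = bytes(12) + b"\x08\x00" + bytes(46)
print(classify_ethernet_frame(dix))      # Ethernet DIX / Ethernet II
```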

In Ethernet networks, frames of 4 different formats are used at the link layer. This is due to the long history of the development of Ethernet technology, which existed before the adoption of the IEEE 802 standards, when the LLC sublayer was not separated from the general protocol and, accordingly, the LLC header was not used.

Differences in frame formats can lead to incompatibility between hardware and network software that is designed to work with only one Ethernet frame standard. However, today almost all network adapters, network adapter drivers, bridges / switches and routers can work with all Ethernet technology frame formats used in practice, and the frame type is recognized automatically.

Below is a description of all four types of Ethernet frames (here, a frame means the entire set of fields related to the link layer, that is, the fields of the MAC and LLC sublayers). The same frame type can have different names, so several of the most common names are given for each type:

    802.3 / LLC frame (802.3 / 802.2 frame or Novell 802.2 frame);

    Raw 802.3 frame (or Novell 802.3 frame);

    Ethernet DIX frame (or Ethernet II frame);

    Ethernet SNAP frame.

The formats for all of these four types of Ethernet frames are shown in Fig. 10.3.

802.3 / LLC frame

The 802.3 / LLC frame header is the result of the concatenation of the frame header fields defined in the IEEE 802.3 and 802.2 standards.

The 802.3 standard defines eight header fields (Figure 10.3; the preamble field and the start frame delimiter are not shown in the figure).

    Preamble field consists of seven sync bytes 10101010. In Manchester coding, this combination is represented in the physical medium by a periodic waveform with a frequency of 5 MHz.

    Start-of-frame-delimiter (SFD) consists of one byte 10101011. The occurrence of this bit pattern is an indication that the next byte is the first byte of the frame header.

    Destination Address (DA) can be 2 or 6 bytes long. In practice, 6 byte addresses are always used.

    Source Address (SA) is a 2- or 6-byte field containing the address of the node that sent the frame. The first bit of the address is always 0.

    Length (Length, L) - 2-byte field that defines the length of the data field in the frame.

    Data field can contain from 0 to 1500 bytes. But if the length of the field is less than 46 bytes, then the next field - the padding field - is used to pad the frame to the minimum allowable value of 46 bytes.

    Padding consists of as many padding bytes as necessary to provide a minimum data field length of 46 bytes. This ensures that the collision detection mechanism works correctly. If the length of the data field is sufficient, the padding field does not appear in the frame.

    Frame Check Sequence (FCS) consists of 4 bytes containing the checksum. This value is calculated using the CRC-32 algorithm.

The 802.3 frame is a MAC sublayer frame, therefore, in accordance with the 802.2 standard, an LLC sublayer frame is inserted into its data field with the start and end flags removed. The LLC frame format has been described above. Since the LLC frame has a header length of 3 (in LLC1 mode) or 4 bytes (in LLC2 mode), the maximum data field size is reduced to 1497 or 1496 bytes.
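The padding and checksum rules can be sketched in a few lines. This is an illustrative assembly of the MAC frame body only (no preamble/SFD); real hardware details such as the bit ordering of the transmitted FCS are glossed over.

```python
import zlib

def build_8023_frame(dst: bytes, src: bytes, llc_data: bytes) -> bytes:
    """Assemble an 802.3 MAC frame body (without preamble and SFD).

    The payload is padded to the 46-byte minimum and a CRC-32 frame
    check sequence is appended. Illustrative sketch only.
    """
    length = len(llc_data).to_bytes(2, "big")      # Length field (pre-padding)
    if len(llc_data) < 46:
        llc_data += bytes(46 - len(llc_data))      # padding field
    frame = dst + src + length + llc_data
    fcs = zlib.crc32(frame).to_bytes(4, "little")  # CRC-32 checksum
    return frame + fcs

# A 3-byte LLC payload still yields the 64-byte minimum frame:
f = build_8023_frame(bytes(6), bytes(6), b"\xaa\xaa\x03")
print(len(f))  # 64
```

The 64-byte minimum is the sum 6 (DA) + 6 (SA) + 2 (Length) + 46 (padded data) + 4 (FCS).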

Figure 10.3. Ethernet frame formats

An 802.3 Raw frame, also referred to as a Novell 802.3 frame, is shown in Fig. 10.3. As you can see from the figure, this is an 802.3 MAC sublayer frame, but without the nested LLC sublayer frame. For a long time, Novell did not use the LLC frame service fields in its NetWare operating system because there was no need to identify the type of information embedded in the data field: it always carried an IPX protocol packet, which was long the only network layer protocol in NetWare.

Ethernet DIX / Ethernet II frame

The Ethernet DIX frame, also called the Ethernet II frame, has a structure (see Figure 10.3) that is the same as that of the 802.3 Raw frame. However, the 2-byte Length (L) field of the 802.3 Raw frame is used in the Ethernet DIX frame as a protocol type field. This field, now called Type (T) or EtherType, serves the same purpose as the DSAP and SSAP fields of the LLC frame: to indicate the type of upper layer protocol that nested its packet in the data field of the frame.

Ethernet SNAP frame

To eliminate the inconsistency in the encoding of the types of protocols whose messages are embedded in the data field of Ethernet frames, the 802.2 committee carried out work to further standardize Ethernet frames. The result is the Ethernet SNAP frame (SNAP - Subnetwork Access Protocol). The Ethernet SNAP frame (see Figure 10.3) extends the 802.3 / LLC frame with an additional SNAP header consisting of two fields: OUI and Type. The Type field consists of 2 bytes and repeats the format and purpose of the Type field of the Ethernet II frame (that is, it uses the same protocol code values). The Organizationally Unique Identifier (OUI) field identifies the organization that controls the protocol codes in the Type field. The SNAP header thus remains compatible with the protocol codes of Ethernet II frames while providing a universal protocol coding scheme. Protocol codes for 802 technologies are controlled by the IEEE, whose OUI is 00-00-00. If protocol codes for a new technology are required in the future, it is sufficient to specify a different identifier for the organization assigning those codes, and the old code values will remain in effect (in combination with another OUI).
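The SNAP header layout described here can be parsed with a few lines. A minimal sketch, assuming the frame has already been identified as SNAP (DSAP = SSAP = 0xAA, control = 0x03):

```python
def parse_snap_header(llc: bytes):
    """Parse an LLC/SNAP header: AA AA 03, then 3-byte OUI, 2-byte Type.

    With OUI 00-00-00 the Type field uses the same protocol codes
    as the Ethernet II Type field.
    """
    if llc[:3] != b"\xaa\xaa\x03":
        raise ValueError("not a SNAP frame")
    oui = llc[3:6].hex("-")
    ethertype = int.from_bytes(llc[6:8], "big")
    return oui, ethertype

# IP encapsulated via SNAP: OUI 00-00-00, Type 0x0800 (decimal 2048)
print(parse_snap_header(b"\xaa\xaa\x03\x00\x00\x00\x08\x00"))
# ('00-00-00', 2048)
```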

Ethernet technology, in its rapid development, has long outgrown the level of local networks. It has shed collisions and gained full duplex operation and gigabit speeds. A wide range of cost-effective solutions makes it safe to deploy Ethernet on backbones. According to experts, the global market for carrier-grade Ethernet (the once-modest office networking technology, now used in mainstream telecommunications networks) is booming. As widespread as Ethernet already is, analysts say its biggest growth is still ahead.



A Metro Ethernet network is built according to a three-level hierarchical scheme and includes a core, an aggregation level and an access level. The core of the network is built on high-performance switches and provides high-speed traffic transmission. The aggregation layer is also created on switches and provides aggregation of access-layer connections, service implementation and statistics collection. Depending on the scale of the network, the core and the aggregation level can be combined. The links between switches can be built on various high-speed technologies, most often Gigabit Ethernet and 10 Gigabit Ethernet. In doing so, it is necessary to take into account the requirements for network recovery in case of failure and the structure of the core. In the core and at the aggregation level, redundancy of switch components as well as topological redundancy are provided, which allows services to continue in the event of single link and node failures. A significant reduction in recovery time can be achieved through link-layer mechanisms such as EAPS, Extreme Networks' proprietary protocol designed to maintain a loop-free topology and rebuild the network when an Ethernet ring is disrupted. Networks using EAPS have the benefits of SONET/SDH and Resilient Packet Ring (RPR) networks, including 50 ms topology recovery time.

The access layer is built in a ring or star-shaped scheme on Metro Ethernet switches for connecting corporate clients, office buildings, as well as home and SOHO clients. At the access level, a full range of security measures is implemented to ensure the identification and isolation of customers, and the protection of the operator's infrastructure.

Ethernet technology overview

Ethernet (from the Latin aether, "ether") is a packet-based computer networking technology.

Ethernet standards define wiring and electrical signals at the physical layer, packet format and media access control protocols at the link layer of the OSI model. Ethernet is mainly described by the IEEE 802.3 standards. Ethernet became the most widespread LAN technology in the mid-90s of the last century, supplanting technologies such as Arcnet, FDDI and Token ring.

The standard of the first versions (Ethernet v1.0 and Ethernet v2.0) specifies a coaxial cable as the transmission medium; later it became possible to use twisted-pair and optical cable. The access control method is carrier sense multiple access with collision detection (CSMA/CD, Carrier Sense Multiple Access with Collision Detection), the data rate is 10 Mbit/s, the packet size is from 72 to 1526 bytes, and data encoding methods are described. The number of nodes in one shared network segment is limited to 1024 workstations (physical layer specifications can set stricter limits; for example, no more than 30 workstations can be connected to a thin coaxial segment, and no more than 100 to a thick coaxial segment). However, a network built on a single shared segment becomes ineffective long before the maximum number of nodes is reached.

In 1995, the IEEE 802.3u Fast Ethernet standard was adopted at 100 Mbps, and later the IEEE 802.3z Gigabit Ethernet standard was adopted at 1000 Mbps. Now you can work in full duplex mode.

Frame format

There are several Ethernet frame formats.

  • The initial Version I (no longer used).
  • Ethernet Version 2, or Ethernet II frame, also called DIX (an abbreviation of the first letters of its developers DEC, Intel and Xerox) - the most common format, used to this day and often carried directly by the Internet Protocol.
  • Novell - an internal modification of IEEE 802.3 without LLC (Logical Link Control).
  • IEEE 802.2 LLC frame.
  • IEEE 802.2 LLC/SNAP frame.

Optionally, an Ethernet frame can contain an IEEE 802.1Q tag identifying the VLAN to which it is addressed and an IEEE 802.1p tag indicating priority. Some Hewlett-Packard Ethernet cards used an IEEE 802.12 frame conforming to the 100VG-AnyLAN standard. Different frame types have different formats and MTU values.

Ethernet varieties

There are several technology options depending on the data transfer rate and transmission medium. Regardless of the method of transmission, the network protocol stack and programs work the same in almost all of the following options.

This section briefly describes all officially existing varieties. In addition to the base standards, many manufacturers offer proprietary media variants; for example, fiber-optic cable is used to increase the distance between network points. Most Ethernet cards and other devices support multiple data rates, using autonegotiation of speed and duplex to achieve the best possible connection between two devices. If autonegotiation fails, the speed is matched to the partner and half-duplex transmission mode is activated. For example, a 10/100 Ethernet port means the device can work with 10BASE-T and 100BASE-TX, and a 10/100/1000 Ethernet port supports the 10BASE-T, 100BASE-TX and 1000BASE-T standards.

Early Ethernet modifications

Xerox Ethernet - the original technology, 3 Mbit/s, existed in two versions, Version 1 and Version 2; the frame format of Version 2 is still widely used.

10BROAD36 - not widespread. One of the first standards to allow long-distance operation. It used broadband modulation technology similar to that used in cable modems, with coaxial cable as the transmission medium.

1BASE5- also known as StarLAN, was the first modification of Ethernet technology to use twisted pair. It worked at a speed of 1 Mbit / s, but did not find commercial use.

10 Mbps Ethernet

10BASE5, IEEE 802.3 (also called “Thick Ethernet”) was the original development of a 10 Mbps technology. Following an early IEEE standard, it uses a 50 ohm coaxial cable (RG-8) with a maximum segment length of 500 meters.

10BASE2, IEEE 802.3a (called "Thin Ethernet") - uses RG-58 cable, with a maximum segment length of 185 meters; computers were connected one after another. A T-connector is needed to attach the cable to a network card, and the cable must have BNC connectors; terminators are required at each end. For many years this was the main standard for Ethernet technology.

StarLAN 10 - The first design to use twisted pair cable for data transmission at 10 Mbps. Later, it evolved into the 10BASE-T standard.

10BASE-T, IEEE 802.3i - two pairs of a Category 3 or Category 5 twisted-pair cable are used for data transmission. The maximum segment length is 100 meters.

FOIRL - (acronym for Fiber-optic inter-repeater link). Basic standard for Ethernet technology using optical cable for data transmission. The maximum data transmission distance without repeater is 1 km.

10BASE-F, IEEE 802.3j - An umbrella term for a family of 10 Mbit/s Ethernet standards using fiber-optic cable at distances up to 2 kilometers: 10BASE-FL, 10BASE-FB and 10BASE-FP. Of these, only 10BASE-FL became widely used.

10BASE-FL (Fiber Link) - An improved version of the FOIRL standard. The improvement concerned an increase in the segment length up to 2 km.

10BASE-FB (Fiber Backbone) - Now an unused standard, it was intended for combining repeaters into a backbone.

10BASE-FP (Fiber Passive) - Passive star topology that does not require repeaters - has never been used.

Fast Ethernet (100 Mbps) (Fast Ethernet)

100BASE-T - General term for the three 100 Mbit/s Ethernet standards using twisted-pair cable as the transmission medium. The maximum network diameter is about 200-250 meters (an individual segment is up to 100 meters). Includes 100BASE-TX, 100BASE-T4 and 100BASE-T2.

100BASE-TX, IEEE 802.3u - The development of 10BASE-T technology, a star topology is used, a twisted pair cable of category-5 is used, in which 2 pairs of conductors are actually used, the maximum data transfer rate is 100 Mbps.

100BASE-T4 - 100 Mbps ethernet over Cat-3 cable. All 4 pairs are involved. Now it is practically not used. Data transmission is in half duplex mode.

100BASE-T2 - Not used. 100 Mbps ethernet over Category-3 cable. Only 2 pairs are used. Full duplex transmission mode is supported, when signals propagate in opposite directions on each pair. The transfer rate in one direction is 50 Mbit / s.

100BASE-FX - 100 Mbps ethernet over fiber optic cable. The maximum segment length is 400 meters in half duplex mode (for guaranteed collision detection) or 2 kilometers in full duplex mode over multimode fiber and up to 32 kilometers over single mode.

Gigabit Ethernet

1000BASE-T, IEEE 802.3ab - 1 Gbps Ethernet standard. A twisted pair of category 5e or category 6 is used. All 4 pairs are involved in data transmission. The data transfer rate is 250 Mbps over one pair.

1000BASE-TX - 1 Gbit/s Ethernet standard using only Category 6 twisted-pair cable. Practically not used.

1000Base-X is a generic term for Gigabit Ethernet technology that uses fiber optic cable as the transmission medium, and includes 1000BASE-SX, 1000BASE-LX, and 1000BASE-CX.

1000BASE-SX, IEEE 802.3z - 1 Gbps Ethernet technology, uses multimode fiber with a signal transmission range without repeater up to 550 meters.

1000BASE-LX, IEEE 802.3z - 1 Gbps Ethernet technology, uses multimode fiber with a signal transmission range without repeater up to 550 meters. Optimized for long distance using single mode fiber (up to 10 kilometers).

1000BASE-CX - Gigabit Ethernet technology for short distances (up to 25 meters), uses a special copper cable (Shielded Twisted Pair (STP)) with a characteristic impedance of 150 ohms. Replaced by 1000BASE-T standard, and is not used now.

1000BASE-LH (Long Haul) - 1 Gbps Ethernet technology, uses a single-mode optical cable, the signal transmission range without a repeater is up to 100 kilometers.

10 Gigabit Ethernet

The new 10 Gigabit Ethernet standard includes seven physical media standards for LAN, MAN and WAN. It is currently covered by the IEEE 802.3ae amendment and should be included in the next revision of the IEEE 802.3 standard.

10GBASE-CX4 - 10 Gigabit Ethernet technology for short distances (up to 15 meters) using CX4 copper cable and InfiniBand connectors.

10GBASE-SR - 10 Gigabit Ethernet technology for short distances (up to 26 or 82 meters, depending on the cable type) using multimode fiber. It also supports distances up to 300 meters over new multimode fiber with a 2000 MHz·km bandwidth-distance product.

10GBASE-LX4 - Uses wavelength division multiplexing to support distances from 240 to 300 meters over multimode fiber. Also supports distances up to 10 kilometers when using single mode fiber.

10GBASE-LR and 10GBASE-ER - these standards support distances up to 10 and 40 kilometers, respectively.

10GBASE-SW, 10GBASE-LW, and 10GBASE-EW - These standards use a physical interface that is speed and data format compatible with the OC-192 / STM-64 SONET / SDH interface. They are similar to the 10GBASE-SR, 10GBASE-LR and 10GBASE-ER standards respectively, as they use the same cable types and transmission distances.

10GBASE-T, IEEE 802.3an-2006 - adopted in June 2006 after 4 years of development. Uses Category 6a twisted-pair cable (Category 6 over shorter runs). Distances - up to 100 meters.


The development of multimedia technologies led to the need to increase the capacity of communication lines. Gigabit Ethernet technology was therefore developed, providing data transmission at 1 Gbit/s. As in Fast Ethernet, continuity with classic Ethernet was preserved: the frame formats are practically unchanged, and the CSMA/CD access method survives in half-duplex mode. At the logical level, 8B/10B coding is used. Since the transmission speed increased tenfold compared with Fast Ethernet, it was necessary either to reduce the network diameter to 20-25 m or to increase the minimum frame length. Gigabit Ethernet chose the second path, increasing the minimum frame length to 512 bytes instead of the 64 bytes of Ethernet and Fast Ethernet; the network diameter remains 200 m, as in Fast Ethernet. The frame length can be extended in two ways. The first fills the data field of a short frame with symbols of forbidden code combinations (carrier extension), at the cost of network overhead. The second allows several short frames to be transmitted in a row, with a total length of up to 8192 bytes (frame bursting).
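The trade-off between the minimum frame length and the network diameter can be checked with a few lines of arithmetic. This is a rough sketch: it assumes a signal propagation speed of about two-thirds of the speed of light and ignores repeater delays, which shrink the real limit further.

```python
C = 3e8            # speed of light in vacuum, m/s
V = 2 * C / 3      # assumed signal speed in the medium, m/s

def slot_time_s(min_frame_bytes: int, rate_bps: float) -> float:
    """Time to transmit the minimum frame: the collision window."""
    return min_frame_bytes * 8 / rate_bps

fast = slot_time_s(64, 100e6)   # Fast Ethernet: 5.12 microseconds
giga = slot_time_s(512, 1e9)    # Gigabit, extended frame: 4.096 microseconds

# A collision must be detectable within one slot, so the round trip
# (twice the diameter) must fit into slot_time * V:
print(round(giga * V / 2))      # ~410 m upper bound before repeater delays
```

With the original 64-byte minimum at 1 Gbit/s the slot would shrink to 0.512 us, giving the 20-25 m diameter mentioned above.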

Today's Gigabit Ethernet networks are typically switch-based and operate in full duplex mode. In this case, one speaks not about the diameter of the network, but about the length of the segment, which is determined by the technical means of the physical layer, first of all, by the physical medium of data transmission. Gigabit Ethernet provides for the use of:

    single-mode fiber-optic cable (802.3z);

    multimode fiber-optic cable (802.3z);

    balanced UTP cable, Category 5 (802.3ab);

    coaxial cable.

When transmitting data over fiber-optic cable, the emitters are either LEDs operating at a wavelength of 830 nm or lasers at 1300 nm. For these, the 802.3z standard defines two specifications: 1000BASE-SX and 1000BASE-LX. The maximum segment length on 62.5/125 multimode cable under the 1000BASE-SX specification is 220 m, and on 50/125 cable no more than 500 m. The maximum segment length on single-mode cable under the 1000BASE-LX specification is 5000 m. The segment length on coaxial cable does not exceed 25 m.

To use existing Category 5 balanced UTP cabling, the 802.3ab standard was developed. Since Gigabit Ethernet must transmit data at 1000 Mbit/s while Category 5 twisted pair has a bandwidth of 100 MHz, it was decided to transmit data in parallel over 4 twisted pairs, using UTP Category 5 or 5e with a bandwidth of 125 MHz. Each twisted pair must then carry data at 250 Mbit/s, which is 2 times higher than the nominal capability of UTP Category 5e. To resolve this contradiction, the 4D-PAM5 code with five potential levels (-2, -1, 0, +1, +2) is used. Each pair of wires simultaneously transmits and receives data at 250 Mbit/s in each direction, so the superposed signals form complex five-level waveforms. The input and output streams are separated by hybrid decoupling circuits (Figure 5.4), implemented with signal processors: to isolate the received signal, the receiver subtracts its own transmitted signal from the total (transmitted plus received) signal.
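The rate arithmetic behind 4D-PAM5 is worth making explicit. A one-line check of how four 125 MBd pairs reach 1 Gbit/s:

```python
# How 1000BASE-T reaches 1 Gbit/s on 125 MHz cable, per the text:
symbol_rate = 125e6        # symbols per second on each pair
bits_per_symbol = 2        # PAM5 carries 2 data bits per symbol
                           # (the fifth level supports the coding scheme)
pairs = 4                  # all four pairs transmit simultaneously

per_pair = symbol_rate * bits_per_symbol   # 250 Mbit/s per pair, each direction
total = per_pair * pairs                   # 1000 Mbit/s
print(per_pair / 1e6, total / 1e6)         # 250.0 1000.0
```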

Thus, Gigabit Ethernet technology provides high-speed data exchange and is mainly used for data transfer between subnets, as well as for the exchange of multimedia information.

Fig. 5.4. Data transmission over 4 pairs of UTP Category 5

The IEEE 802.3 standard recommends fiber for Gigabit Ethernet backbones. Time slots, frame format and transmission rules are common to all 1000 Mbit/s versions. The physical layer is defined by two signal-coding schemes (Figure 5.5). The 8B/10B scheme is used for optical fiber and shielded copper cables; for balanced UTP cables, pulse-amplitude modulation (the PAM5 code) is used. 1000BASE-X technology uses 8B/10B block coding with NRZ line coding.
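The overhead of the 8B/10B scheme follows directly from its name: every data byte becomes a 10-bit code group, so the line rate exceeds the data rate by 25%.

```python
# 8B/10B: 8 data bits are mapped to a 10-bit code group,
# so a 1 Gbit/s payload requires a 1.25 GBd line rate on the fiber.
data_rate = 1e9
line_rate = data_rate * 10 / 8
print(line_rate / 1e9)   # 1.25
```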

Figure 5.5. Gigabit Ethernet Technology Specifications

NRZ signals are transmitted over the fiber using either short-wavelength or long-wavelength light sources. LEDs with a wavelength of 850 nm are used for transmission over multimode optical fiber (1000BASE-SX); this less expensive option is used for short-distance transmission. Long-wavelength laser sources (1310 nm) use single-mode or multimode optical fiber (1000BASE-LX). Laser sources over single-mode fiber can carry information up to 5000 m.

In point-to-point connections, separate fibers are used for transmission (Tx) and reception (Rx), so the link is full duplex. Gigabit Ethernet technology allows only a single repeater between two stations. The parameters of the 1000BASE technologies are given below (Table 5.2).

Table 5.2

Comparative characteristics of Gigabit Ethernet specifications

Gigabit Ethernet networks are built around switches, where the full-duplex distance is limited only by the medium and not by the round-trip time. In this case the topology is, as a rule, a "star" or "extended star", and problems are determined by the logical topology and data flow.

The 1000BASE-T standard uses almost the same UTP cable as the 100BASE-T and 10BASE-T standards. A 1000BASE-T UTP cable is the same as a 10BASE-T and 100BASE-TX cable, except that Category 5e cable is recommended. At a cable length of 100 m, 1000BASE-T equipment operates at the limit of its capabilities.

ETHERNET TECHNOLOGY

Ethernet is the most widely used standard for local area networks today.

When people say Ethernet, they usually mean any of the variants of this technology. More narrowly, Ethernet is a networking standard based on the experimental Ethernet Network that Xerox developed and implemented in 1975. The access method was tried even earlier: in the second half of the 1960s, various options for random access to a shared radio medium, collectively called Aloha, were used in the radio network of the University of Hawaii. In 1980, DEC, Intel, and Xerox jointly developed and published the Ethernet version II standard for a coaxial cable network, which was the latest version of the proprietary Ethernet standard. The proprietary version of the Ethernet standard is therefore called Ethernet DIX or Ethernet II.

Based on the Ethernet DIX standard, the IEEE 802.3 standard was developed, which in many respects coincides with its predecessor, but there are still some differences. While the IEEE 802.3 standard differentiates between MAC and LLC layers, the original Ethernet combined both layers into a single data link layer. Ethernet DIX defines an Ethernet Configuration Test Protocol that IEEE 802.3 does not. The frame format is also slightly different, although the minimum and maximum frame sizes in these standards are the same. Often, in order to distinguish Ethernet, as defined by the IEEE standard, and proprietary Ethernet DIX, the former is called 802.3 technology, and the proprietary Ethernet name is left without additional designations.

Depending on the type of physical medium, the IEEE 802.3 standard has various modifications - 10Base-5, 10Base-2, 10Base-T, 10Base-FL, 10Base-FB.

In 1995, the Fast Ethernet standard was adopted, which in many respects is not an independent standard, as evidenced by the fact that its description is simply an additional section to the main 802.3 standard - the 802.3u section. Similarly, the 1998 Gigabit Ethernet standard is described in the 802.3z section of the main document.

All 10 Mbit/s physical-layer variants of Ethernet technology use the Manchester code to transmit binary information over the cable.
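Manchester coding can be sketched in a few lines (IEEE convention assumed here: a 1 is a low-to-high transition in the middle of the bit interval, a 0 is high-to-low):

```python
def manchester_encode(bits):
    """Encode data bits as pairs of half-bit signal levels.

    Each data bit becomes two line-signal halves, which is why a
    10 Mbit/s Ethernet link signals at 20 MBd.
    """
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]
    return out

# The alternating 1010... preamble produces the regular pattern that
# appears on the wire as a periodic 5 MHz signal:
print(manchester_encode([1, 0, 1, 0]))  # [0, 1, 1, 0, 0, 1, 1, 0]
```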

All types of Ethernet standards (including Fast Ethernet and Gigabit Ethernet) use the same media separation method - the CSMA / CD method.

Ethernet Addressing

To identify the recipient of information in Ethernet technologies, 6-byte MAC addresses are used.

The MAC address format provides specific multicast addressing modes in the Ethernet network and, at the same time, excludes the possibility of two stations having the same address within the same local network.

The physical address of an Ethernet network consists of two parts:

  • Vendor codes
  • Individual device identifier

A special organization within the IEEE distributes the permitted encodings of this field at the request of manufacturers of network equipment. Various forms can be used to write a MAC address. The most commonly used form is hexadecimal, in which the bytes are separated by "-" characters:

E0-14-00-00-00-00

In Ethernet and IEEE 802.3 networks, there are three main modes of forming the destination address:

  • Unicast - individual address;
  • Multicast - multicast address;
  • Broadcast - broadcast address.

The first addressing mode (Unicast) is used when the source station addresses the transmitted packet to only one recipient of the data.

A sign of using the Multicast addressing mode is the presence of 1 in the least significant bit of the most significant byte of the equipment manufacturer's identifier.

01-00-0C-CC-CC-CC

A frame whose DA field belongs to the Multicast type will be received and processed by all stations that have the corresponding Vendor Code value - in this case, Cisco network devices. This multicast address is used by the company's network devices to interact in accordance with the Cisco Discovery Protocol (CDP).

An Ethernet and IEEE 802.3 station can also use Broadcast addressing mode. The address of the Broadcast destination station is encoded with a special value:

FF-FF-FF-FF-FF-FF

When using this address, the transmitted packet will be received by all stations that are in this network.
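The three addressing modes can be distinguished programmatically. A small sketch (the unicast address in the example is illustrative, not an assigned one):

```python
def mac_mode(mac: str) -> str:
    """Classify a destination MAC address into the three modes above.

    The multicast flag is the least significant bit of the first
    transmitted byte; the broadcast address is all ones.
    """
    octets = bytes(int(part, 16) for part in mac.split("-"))
    if octets == b"\xff" * 6:
        return "Broadcast"
    if octets[0] & 0x01:
        return "Multicast"
    return "Unicast"

print(mac_mode("FF-FF-FF-FF-FF-FF"))  # Broadcast
print(mac_mode("01-00-0C-CC-CC-CC"))  # Multicast (Cisco CDP address)
print(mac_mode("E0-14-00-00-00-01"))  # Unicast
```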

CSMA / CD Access Method

Ethernet networks use a media access method called carrier sense multiple access with collision detection (CSMA/CD).

The CSMA/CD protocol defines how workstations interact over a single data transmission medium shared by all devices. All stations have equal conditions for data transmission, and there is no fixed sequence in which stations may access the medium; it is in this sense that access to the medium is random. Implementing random access algorithms is a much simpler task than implementing deterministic access, since the latter requires either a special protocol controlling the operation of all network devices (for example, the token circulation protocol of Token Ring and FDDI networks) or a special dedicated device, a master hub, which grants all other stations the right to transmit in a certain sequence (Arcnet, 100VG-AnyLAN networks).

However, a network with random access has one major drawback: unstable operation under heavy load, when a long time can pass before a given station manages to transmit its data. The cause is collisions between stations that started transmitting simultaneously or almost simultaneously. In the event of a collision, the transmitted data do not reach the recipients, and the transmitting stations have to repeat the transmission, because the coding methods used in Ethernet do not allow the signal of an individual station to be separated from the aggregate signal. (Note that this fact is reflected in the "Base(band)" component present in the names of all physical-layer protocols of Ethernet technology, for example 10Base-2, 10Base-T, etc. A baseband network is one in which messages are sent digitally over a single channel, without frequency division.)

Collision is a normal situation in Ethernet networks. For a collision to occur, it is not necessary for several stations to start transmitting absolutely simultaneously, such a situation is unlikely. It is much more likely that a collision occurs due to the fact that one node starts transmitting earlier than the other, but the signals of the first simply do not have time to reach the second node by the time the second node decides to start transmitting its frame. That is, collisions are a consequence of the distributed nature of the network.

The set of all stations on the network such that simultaneous transmission by any pair of them leads to a collision is called a collision domain.

Collisions can cause unpredictable delays in the propagation of frames over the network, especially when the network is heavily loaded (more than 20-25 stations trying to transmit simultaneously within the collision domain) and when the collision domain has a large diameter (over 2 km). Therefore, when building networks, it is advisable to avoid such extreme operating modes.

The problem of constructing a protocol capable of resolving collisions optimally and of keeping the network efficient under high load was one of the key issues during the formation of the standard. Initially, three main approaches were considered as candidates for implementing the random medium access algorithm: nonpersistent, 1-persistent and p-persistent (Figure 11.2).

Figure 11.2. Multiple random access (CSMA) algorithms and collision back off

Nonpersistent algorithm. With this algorithm, the station wishing to transmit is guided by the following rules.

1. The station listens to the medium, and if the medium is free (that is, if there is no other transmission and no collision signal), it transmits; otherwise, the medium is busy, and it goes to step 2;

2. If the medium is busy, the station waits for a random time (drawn from a given probability distribution) and returns to step 1.

Using a random waiting time when the medium is busy reduces the likelihood of collisions. Indeed, suppose that two stations are about to transmit at almost the same time, while a third is already transmitting. If the first two did not wait a random time before starting (having found the medium busy) but merely listened until it became free, they would both start transmitting simultaneously as soon as the third station finished, which would inevitably lead to a collision. Random waiting thus eliminates such collisions. The disadvantage of the method is inefficient use of the channel bandwidth: it may happen that by the time the medium becomes free, the station wishing to transmit is still waiting out its random delay before listening again, having previously found the medium busy. The channel then sits idle for a while even though only one station is waiting to transmit.

1-persistent algorithm. To reduce the time during which the medium is idle, the 1-persistent algorithm can be used. With this algorithm, a station wishing to transmit is guided by the following rules.

1. Listens to the medium, and if the medium is idle, transmits, otherwise, go to step 2;

2. If the medium is busy, it continues to listen to the medium until the medium is free, and as soon as the medium is released, it immediately starts transmitting.

Comparing the nonpersistent and 1-persistent algorithms, we can say that in the 1-persistent algorithm a station wishing to transmit behaves more "selfishly". Therefore, if two or more stations are waiting for transmission (waiting for the medium to become free), a collision is practically guaranteed. After the collision, each station must decide what to do next.

P-persistent algorithm. The rules for this algorithm are as follows:

1. If the medium is free, the station starts transmitting immediately with probability p, or with probability (1 - p) waits for a fixed time interval T. The interval T is usually taken equal to the maximum signal propagation time from end to end;

2. If the medium is busy, the station continues listening until the medium is free, then goes to step 1;

3. If the transmission is delayed by one interval T, the station returns to step 1.
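The p-persistent rules above can be sketched in a few lines of Python; the `medium_busy` callable and the string return values are assumptions of this illustration, and real timing is not modeled:

```python
import random

def p_persistent_step(medium_busy, p, rng=random.random):
    """One decision step of the p-persistent algorithm.

    medium_busy: callable sampling the carrier-sense result.
    Returns "transmit" (rule 1, probability p) or "defer"
    (wait one slot T, then repeat from rule 1, per rule 3).
    """
    while medium_busy():      # rule 2: keep listening while the medium is busy
        pass
    if rng() < p:             # rule 1: transmit immediately with probability p
        return "transmit"
    return "defer"

# with p = 1 the rule degenerates into the 1-persistent algorithm
print(p_persistent_step(lambda: False, 1.0))   # transmit
print(p_persistent_step(lambda: False, 0.0))   # defer
```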

Here the question arises of choosing the most effective value of the parameter p. The main problem is how to avoid instability at high load. Consider a situation in which n stations intend to transmit frames while a transmission is already in progress. At the end of that transmission, the expected number of stations that will attempt to transmit equals the number of stations willing to transmit multiplied by the transmission probability, that is, np. If np > 1, then on average several stations will try to transmit at once, causing a collision. Moreover, as soon as the collision is detected, all stations return to step 1, which causes another collision. In the worst case, new stations willing to transmit may be added to the original n, further aggravating the situation and eventually leading to continuous collisions and zero throughput. To avoid this disaster, the product np must be less than one. If the network is prone to conditions in which many stations wish to transmit simultaneously, then p must be reduced. On the other hand, when p becomes too small, even a single station may wait on average (1 - p)/p intervals T before transmitting; if p = 0.1, the average idle time before a transfer is 9T.

The CSMA/CD multiple access protocol embodied the ideas of the algorithms above and added an important element: collision resolution. Since a collision destroys all frames being transmitted at the moment it occurs, there is no point in the stations continuing to transmit their frames once they have detected the collision; otherwise significant time would be lost when transmitting long frames. Therefore, for timely collision detection, a station listens to the medium throughout its own transmission. The basic rules of the CSMA/CD algorithm for the transmitting station are as follows (Figure 11.3):

1. A station about to transmit listens to the medium and transmits if the medium is free; otherwise (the medium is busy), it proceeds to step 2. When transmitting several frames in a row, the station maintains a pause between frames, the interframe gap, and after each such pause it listens to the medium again before sending the next frame (returning to the beginning of step 1);

2. If the medium is busy, the station continues listening until the medium becomes free, and then immediately starts transmitting;

3. Every transmitting station listens to the medium, and if it detects a collision it does not stop transmitting immediately: it first transmits a short special collision signal, the jam signal, informing the other stations of the collision, and only then stops;

4. After transmitting the jam signal, the station falls silent and waits for an arbitrary time determined by the rule of truncated binary exponential backoff, then returns to step 1.
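Putting the four rules together, the transmit side can be sketched as the following control flow (the `channel` interface is an assumption of the sketch; only the loop structure follows the standard):

```python
import random

MAX_ATTEMPTS = 16   # after 16 collisions in a row the frame is discarded
SLOT_BT = 512       # backoff interval in bit times (51.2 us at 10 Mbps)

def send_frame(channel):
    """Control-flow sketch of the CSMA/CD transmit rules for one frame.

    `channel` is an assumed interface: carrier_sense() -> bool,
    transmit() -> bool (True on success, False on collision),
    send_jam(), and wait(bit_times).
    """
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while channel.carrier_sense():            # steps 1-2: defer while busy
            pass
        if channel.transmit():                    # station listens while sending
            return True                           # no collision: delivered
        channel.send_jam()                        # step 3: jam signal
        k = min(attempt, 10)                      # truncated exponential backoff
        channel.wait(random.randint(0, 2**k - 1) * SLOT_BT)
    return False                                  # step 4 failed 16 times: discard
```

A simple test double for `channel` is enough to exercise the flow without real hardware.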

To transmit a frame, a station must make sure that the shared medium is free. It does so by listening for the carrier signal (carrier sense, CS). The sign of a free medium is the absence of the carrier frequency, which with the Manchester coding method is 5-10 MHz, depending on the sequence of ones and zeros being transmitted at the moment.

After a frame transmission ends, all network nodes must observe a technological pause (Inter Packet Gap) of 9.6 μs (96 bt). This pause, also called the interframe gap, is needed to let the network adapters return to their initial state and to prevent a single station from monopolizing the medium.

Figure 11.3. Block diagram of the CSMA / CD algorithm (MAC level): when transmitting a frame by a station

Jam signal (from "jamming"). Transmitting the jam signal guarantees that no more than one frame is lost, since all nodes that were transmitting frames when the collision occurred will, on receiving the jam signal, interrupt their transmissions and fall silent in anticipation of a new transmission attempt. The jam signal must be long enough to reach the most distant stations in the collision domain, taking into account a safety margin (SF) for possible delays on repeaters. The content of the jam signal is not critical, except that it must not match the CRC field of the partially transmitted frame (802.3), and its first 62 bits must alternate '1' and '0', starting with '1'.

Figure 11.4. Random Access Method CSMA / CD

Figure 11.5 illustrates the collision detection process for a bus topology on thick or thin coaxial cable (10Base5 and 10Base2, respectively).

At some moment, node A (DTE A) starts transmitting, naturally listening to its own transmitted signal. Just before the frame reaches node B (DTE B), that node, not knowing that a transmission is already in progress, starts transmitting itself. Almost immediately node B detects the collision (the constant component of the electrical signal in the monitored line increases); it then transmits a jam signal and stops transmitting. Somewhat later the collision signal reaches node A, which also transmits a jam signal and stops transmitting.

Figure 11.5. Collision detection when using the CSMA / CD scheme

According to the IEEE 802.3 standard, a node cannot transmit very short frames or, in other words, conduct very short transmissions. Even if the data field is not filled to the end, a special padding field extends the frame to the minimum length of 64 bytes, excluding the preamble. Channel time ST (slot time) is the minimum time during which a node is obliged to transmit and occupy the channel; it corresponds to the transmission of a frame of the minimum size allowed by the standard. Channel time is related to the maximum allowable distance between network nodes, that is, to the diameter of the collision domain. Suppose the example above implements the worst-case scenario, in which stations A and B are separated by the maximum distance. Denote the signal propagation time from A to B by τ. Node A starts transmitting at time 0. Node B starts transmitting just before time τ and detects the collision almost immediately after the start of its own transmission. Node A detects the collision at a time close to 2τ. For the frame emitted by A not to be lost, node A must not have stopped transmitting by this moment: then, having detected the collision, node A will know that its frame did not arrive and will try to transmit it again. Otherwise, the frame will be lost. The maximum time from the start of transmission after which node A can still detect a collision is thus 2τ; this is called the double turnover time, or Path Delay Value (PDV). More generally, PDV covers both the delay due to the finite length of the segments and the delays introduced by frame processing at the physical layer of intermediate repeaters and end nodes of the network. For further discussion it is also convenient to use another unit of time: the bit time (bt). A time of 1 bt is the time it takes to transmit one bit, i.e. 0.1 μs at 10 Mbps.
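The relationship between the minimum frame and the collision window can be checked with simple arithmetic in bit times (the figures below are the standard 10 Mbps values; the function name is illustrative):

```python
BIT_RATE = 10_000_000        # 10 Mbps, so 1 bt = 0.1 us
MIN_FRAME_BITS = 64 * 8      # minimum frame of 64 bytes = 512 bits

def collision_still_detectable(pdv_bt: float) -> bool:
    """True if a station sending a minimum-size frame is still
    transmitting when the collision signal returns (T_min >= PDV)."""
    return MIN_FRAME_BITS >= pdv_bt

print(MIN_FRAME_BITS / BIT_RATE * 1e6)     # ~51.2 us to send the minimum frame
print(collision_still_detectable(512))     # True: worst case just allowed
print(collision_still_detectable(600))     # False: collision domain too large
```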

Clear recognition of collisions by all stations on the network is a prerequisite for the correct operation of the Ethernet network. If any transmitting station does not recognize the collision and decides that the data frame was transmitted by it correctly, then this data frame will be lost. Due to the overlap of signals during a collision, the information of the frame will be distorted, and it will be rejected by the receiving station (possibly due to a mismatch of the checksum). Most likely, the garbled information will be retransmitted by some higher-level protocol, such as a transport or connection-oriented application protocol. But the retransmission of the message by the upper layer protocols will occur at a much longer time interval (sometimes even several seconds) compared to the microsecond intervals that the Ethernet protocol operates. Therefore, if collisions are not reliably recognized by the nodes of the Ethernet network, this will lead to a noticeable decrease in the useful bandwidth of this network.

For reliable collision detection, the following relationship must be met:

T_min >= PDV,

where T_min is the transmission time of a frame of minimum length, and PDV is the time it takes the collision signal to propagate to the farthest node of the network. Since in the worst case the signal must pass twice between the stations most distant from each other (the undistorted signal travels one way, and on the way back the signal already distorted by the collision propagates), this time is called the double turnover time (Path Delay Value, PDV).

If this condition is met, the transmitting station must have time to detect the collision caused by its transmitted frame, even before it finishes transmitting this frame.

Obviously, the fulfillment of this condition depends, on the one hand, on the length of the minimum frame and the network bandwidth, and on the other hand, on the length of the network cable system and the speed of signal propagation in the cable (for different types of cable, this speed is somewhat different).

All parameters of the Ethernet protocol are selected in such a way that during normal operation of network nodes, collisions are always clearly recognized. When choosing the parameters, of course, the above relationship was taken into account, linking the minimum frame length and the maximum distance between stations in the network segment.

In the Ethernet standard, it is accepted that the minimum length of the frame data field is 46 bytes (which, together with the service fields, gives the minimum frame length of 64 bytes, and together with the preamble - 72 bytes or 576 bits).

When transmitting large frames, for example 1500 bytes, a collision, if it occurs at all, is detected almost at the very beginning of the transmission, no later than within the first 64 transmitted bytes (if a collision has not occurred by then, it will not occur later, since all stations listen to the line and, "hearing" the transmission, keep silent). Since the jam signal is much shorter than a full frame, with the CSMA/CD algorithm the share of channel capacity wasted on a collision is reduced to the time required to detect it. Early collision detection therefore leads to more efficient channel utilization, while late collision detection, inherent in more extended networks with a collision domain several kilometers in diameter, reduces network efficiency. Based on a simplified theoretical model of the behavior of a loaded network (assuming a large number of simultaneously transmitting stations and a fixed minimum length of transmitted frames for all stations), the network performance U can be expressed through the PDV/ST ratio:

U = e^(-PDV/ST),

where e is the base of the natural logarithm. Network performance is affected by the size of the transmitted frames and by the diameter of the network. Performance in the worst case (when PDV = ST) is about 37%, and in the best case (when PDV is much less than ST) it tends to 1. Although the formula is derived in the limit of a large number of stations trying to transmit simultaneously, it does not take into account the peculiarities of the truncated binary exponential backoff algorithm considered below, and it is not valid for a network heavily congested with collisions, for example when there are more than 15 stations wishing to transmit.
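A performance estimate of the form U = e^(-PDV/ST) reproduces both limiting cases quoted in the text (about 37%, i.e. 1/e, at PDV = ST, and tending to 1 for PDV much less than ST); a short numerical check follows, with the caveat that the exact analytic form is a reconstruction and should be treated as an assumption:

```python
import math

def utilization(pdv_bt: float, st_bt: float = 512) -> float:
    """Network performance U = e**(-PDV/ST); the form of the formula is
    reconstructed from the two limits quoted in the text (37% at
    PDV == ST, tending to 1 for PDV << ST)."""
    return math.exp(-pdv_bt / st_bt)

print(round(utilization(512), 2))   # 0.37 in the worst case (PDV == ST)
print(round(utilization(5), 2))     # 0.99 when PDV is much less than ST
```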

Truncated binary exponential backoff. The CSMA/CD algorithm adopted in the IEEE 802.3 standard is closest to the 1-persistent algorithm, but it has an additional element: truncated binary exponential backoff. When a collision occurs, the station counts how many collisions in a row have occurred while sending the packet. Since repeated collisions indicate high load on the medium, the MAC layer tries to increase the delay between retransmission attempts. The procedure for increasing the time intervals obeys the rule of truncated binary exponential backoff.

A random pause is selected according to the following algorithm:

Pause = L × (backoff interval),

where the backoff interval = 512 bit times (51.2 μs);

L is an integer selected with equal probability from the range [0, 2^N - 1], where N is the retry number of the given frame: 1, 2, ..., 10.

After the 10th attempt, the interval from which the pause is selected does not increase. Thus, a random pause can range from 0 to 52.4 ms.

If 16 consecutive attempts to transmit a frame cause a collision, then the transmitter should stop trying and discard this frame.
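These backoff rules are easy to tabulate; the sketch below assumes the 10 Mbps slot of 51.2 μs and a [0, 2^N - 1] draw range, with the range frozen after the 10th attempt (the helper name is illustrative):

```python
SLOT_US = 51.2   # backoff interval: 512 bit times at 10 Mbps

def pause_range_us(attempt: int) -> tuple:
    """Range of the random pause after the given consecutive collision.

    attempt counts collisions in a row; the draw range stops growing
    after the 10th attempt, and after the 16th the frame is discarded.
    """
    if attempt >= 16:
        raise ValueError("frame is discarded after 16 collisions")
    k = min(attempt, 10)
    return 0.0, (2**k - 1) * SLOT_US

print(pause_range_us(1))    # (0.0, 51.2)
print(pause_range_us(10))   # (0.0, 52377.6)  -> about 52.4 ms, as in the text
```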

The CSMA/CD algorithm with truncated binary exponential backoff is recognized as the best among the many random access algorithms and provides efficient network operation at both low and medium loads. At high loads, two disadvantages should be noted. First, when collisions are frequent, a station that is about to send a frame for the first time (and has not yet attempted any transmissions) has an advantage over a station that has already tried unsuccessfully several times and encountered collisions, because the latter must wait a significant time before subsequent attempts, in accordance with the binary exponential backoff rule. Frames may therefore be transmitted irregularly, which is undesirable for time-sensitive applications. Second, under heavy load the efficiency of the network as a whole decreases: estimates show that with 25 stations transmitting simultaneously, the total throughput roughly halves. The number of stations in the collision domain can nevertheless be larger, since not all of them access the medium at the same time.

Receiving a frame (Figure 11.6)

Figure 11.6. Block diagram of the CSMA / CD algorithm (MAC level): when a frame is received by a station

The receiving station or other network device, for example, a hub or switch, first synchronizes with the preamble and then converts the Manchester code into binary form (at the physical layer). Next, the binary stream is processed.

At the MAC layer, the remaining preamble bits are discarded, and the station reads the destination address and compares it with its own. If the addresses match, the frame fields, except for the preamble, SFD and FCS, are buffered, and a checksum is calculated and compared with the frame check sequence (FCS) field (using the CRC-32 cyclic redundancy check). If they are equal, the contents of the buffer are passed to the higher-layer protocol; otherwise the frame is discarded. A collision during frame reception is detected either by a change in electrical potential, if a coaxial segment is used, or by the fact of receiving a defective frame with an incorrect checksum, if twisted pair or optical fiber is used. In both cases the received information is discarded.
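The FCS verification step can be sketched with Python's zlib, whose crc32 uses the same reflected CRC-32 polynomial as Ethernet (the byte order of the stored FCS and the helper name are assumptions of this sketch, which is not a full frame parser):

```python
import zlib

def fcs_ok(frame: bytes) -> bool:
    """Verify the 4-byte FCS at the end of a frame.

    Ethernet uses the same reflected CRC-32 as zlib; here the FCS is
    assumed to be stored least significant byte first and to cover
    all preceding bytes of the frame.
    """
    body, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(body) == int.from_bytes(fcs, "little")

# attach an FCS to a dummy padded minimum-size body, then verify it
body = bytes(60)
frame = body + zlib.crc32(body).to_bytes(4, "little")
print(fcs_ok(frame))                      # True: frame accepted
print(fcs_ok(b"\x01" + frame[1:]))        # False: corrupted frame discarded
```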

As the description of the access method shows, it is probabilistic in nature: the probability of successfully obtaining the shared medium depends on network congestion, that is, on how intensively the stations need to transmit frames. When this method was developed in the late 1970s, it was assumed that a data rate of 10 Mbit/s was very high compared to the computers' needs for mutual data exchange, so the network load would always be small. This assumption sometimes holds to this day, but real-time multimedia applications have appeared that load Ethernet segments heavily, and collisions then occur much more often. At significant collision rates, the usable throughput of an Ethernet network drops sharply, since the network is almost constantly busy with repeated transmission attempts. To reduce the collision rate, one must either reduce traffic, for example by decreasing the number of nodes in a segment or replacing applications, or increase the speed of the protocol, for example by switching to Fast Ethernet.

It should be noted that the CSMA / CD access method does not at all guarantee a station that it will ever be able to access the medium. Of course, with a low network load, the probability of such an event is small, but with a network utilization rate approaching 1, such an event becomes very likely. This shortcoming of the random access method is a price to pay for its extreme simplicity that made Ethernet the least expensive technology. Other access methods - Token Ring and FDDI token access, the Demand Priority method of 100VG-AnyLAN networks - are free from this drawback.

As a result of taking into account all factors, the ratio between the minimum frame length and the maximum possible distance between network stations was carefully selected, which ensures reliable collision detection. This distance is also called the maximum network diameter.

With an increase in the frame rate, which takes place in new standards based on the same CSMA / CD access method, for example Fast Ethernet, the maximum distance between network stations decreases in proportion to the increase in the transfer rate. In the Fast Ethernet standard, it is about 210 meters, and in the Gigabit Ethernet standard, it would be limited to 25 meters, if the developers of the standard did not take some measures to increase the minimum packet size.

Table 11.1 shows the values ​​of the basic parameters of the 802.3 standard frame transmission procedure, which do not depend on the implementation of the physical medium. It is important to note that each variant of the physical environment of Ethernet technology adds to these constraints its own, often more stringent constraints, which must also be met and which will be discussed below.

Table 11.1. Ethernet MAC Layer Parameters

Parameter: Value
Bit rate: 10 Mbps
Channel (slot) time: 512 bt
Interframe gap (IPG): 9.6 μs
Maximum number of transmission attempts: 16
Maximum number of pause-range doublings: 10
Jam sequence length: 32 bits
Maximum frame length (without preamble): 1518 bytes
Minimum frame length (without preamble): 64 bytes (512 bits)
Preamble length: 64 bits
Minimum random pause after a collision: 0 bt
Maximum random pause after a collision: 524,000 bt
Maximum distance between network stations: 2500 m
Maximum number of stations in the network: 1024

Ethernet frame formats

The Ethernet technology standard described in the IEEE 802.3 document describes a single MAC layer frame format. Since the MAC layer frame must include the LLC layer frame described in the IEEE 802.2 document, according to the IEEE standards, only one version of the link layer frame can be used in the Ethernet network, the header of which is a combination of the MAC and LLC sublayer headers.

Nevertheless, in practice in Ethernet networks at the link layer, frames of 4 different formats (types) are used. This is due to the long history of the development of Ethernet technology, which existed before the adoption of the IEEE 802 standards, when the LLC sublayer was not separated from the general protocol and, accordingly, the LLC header was not used.

In 1980, a consortium of three firms, Digital, Intel and Xerox, submitted to the 802.3 committee its proprietary version of the Ethernet standard (which, of course, described a particular frame format) as a draft international standard, but the 802.3 committee adopted a standard that differs in some details from the DIX proposal. The differences also concerned the frame format, which gave rise to two different frame types coexisting in Ethernet networks.

Another frame format emerged as a result of Novell's efforts to speed up its protocol stack over Ethernet.

And finally, the fourth frame format is the result of the work of the 802.2 committee to bring the previous frame formats to a common standard.

Differences in frame formats can lead to incompatibility between hardware and network software that is designed to work with only one Ethernet frame standard. However, today almost all network adapters, network adapter drivers, bridges / switches and routers can work with all Ethernet technology frame formats used in practice, and the frame type is recognized automatically.

Below is a description of all four types of Ethernet frames (here, a frame means the entire set of fields that are related to the link layer, that is, the fields of the MAC and LLC layers). One and the same frame type can have different names, so below for each frame type are given several of the most common names:

  • 802.3 / LLC frame (802.3 / 802.2 frame or Novell 802.2 frame);
  • Raw 802.3 frame (or Novell 802.3 frame);
  • Ethernet DIX frame (or Ethernet II frame);
  • Ethernet SNAP frame.

The formats for all of these four types of Ethernet frames are shown in Fig. 11.7.

802.3 / LLC frame

The 802.3 / LLC frame header is the result of the concatenation of the frame header fields defined in the IEEE 802.3 and 802.2 standards.

The 802.3 standard defines eight header fields (Figure 11.7; preamble field and start frame delimiter not shown in the figure).

  • Preamble field consists of seven sync bytes 10101010. In Manchester coding, this combination is represented in the physical medium by a periodic waveform with a frequency of 5 MHz.
  • Start-of-frame-delimiter (SFD) consists of one byte 10101011. The occurrence of this bit pattern is an indication that the next byte is the first byte of the frame header.
  • Destination Address (DA) can be 2 or 6 bytes long. In practice, 6-byte addresses are always used. The first bit of the most significant byte of the destination address indicates whether the address is individual or group: if it is 0, the address is individual (unicast), and if it is 1, it is a group (multicast) address. If the address consists of all ones, that is, has the hexadecimal representation 0xFFFFFFFFFFFF, it is intended for all nodes on the network and is called the broadcast address.

In the IEEE Ethernet standards, the least significant bit of a byte is shown in the leftmost position of the field and the most significant bit in the rightmost position. This unusual way of displaying the bit order within a byte corresponds to the order in which an Ethernet transmitter places bits on the communication line. The standards of other organizations, such as the IETF RFCs, ITU-T and ISO, use the traditional byte representation, in which the least significant bit is the rightmost bit of the byte and the most significant bit is the leftmost. The byte order, however, remains the same. Therefore, when reading standards published by these organizations, or data displayed on screen by an operating system or protocol analyzer, the bits of each byte of an Ethernet frame must be mirrored to obtain the correct interpretation of that byte according to the IEEE documents. For example, a multicast address written in IEEE notation as 1000 0000 0000 0000 0111 1010 1111 0000 0000 0000 0000 0000, or 80-00-7A-F0-00-00 in hexadecimal, will most likely be displayed by a protocol analyzer in the traditional form as 01-00-5E-0F-00-00.
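Mirroring the bits of each byte can be done with a small helper (the function name is made up for the example); since mirroring is its own inverse, the same function converts in both directions:

```python
def mirror_mac_bits(mac: str) -> str:
    """Mirror the bits of each byte of a dash-separated MAC address.

    Converts between the IEEE 802.3 display order (least significant
    bit leftmost, matching wire order) and the canonical display order
    used by RFCs and protocol analyzers. The operation is its own inverse.
    """
    def rev(b: int) -> int:
        return int(f"{b:08b}"[::-1], 2)   # reverse the 8-bit pattern
    return "-".join(f"{rev(int(o, 16)):02X}" for o in mac.split("-"))

print(mirror_mac_bits("80-00-7A-F0-00-00"))   # 01-00-5E-0F-00-00
```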

  • Source Address (SA) is a 2- or 6-byte field containing the address of the node that sent the frame. The first bit of the address is always 0.
  • Length (L) is a 2-byte field that specifies the length of the data field in the frame.
  • Data field can contain from 0 to 1500 bytes. If the length of the field is less than 46 bytes, the next field, the padding field, is used to pad the frame to the minimum allowable value.
  • Padding consists of as many padding bytes as are needed to bring the data field up to the minimum length of 46 bytes. This ensures correct operation of the collision detection mechanism. If the data field is long enough, the padding field does not appear in the frame.
  • Frame Check Sequence (FCS) consists of 4 bytes containing the checksum, computed with the CRC-32 algorithm. After receiving a frame, the workstation performs its own checksum calculation, compares the result with the value of the checksum field, and thus determines whether the received frame is corrupted.
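The interplay of the Data and Padding fields can be sketched as follows (a minimal illustration; the helper name and the explicit 1500-byte check are assumptions of the sketch):

```python
MIN_DATA = 46    # bytes; keeps the whole frame at the 64-byte minimum
MAX_DATA = 1500  # Ethernet maximum data field length

def padded_payload(data: bytes) -> bytes:
    """Pad the data field so the frame meets the minimum length.

    The pad bytes carry no information; the Length field still holds
    the true payload length, which lets the receiver strip the pad.
    """
    if len(data) > MAX_DATA:
        raise ValueError("payload exceeds the Ethernet maximum of 1500 bytes")
    return data + bytes(max(0, MIN_DATA - len(data)))

print(len(padded_payload(b"hi")))        # 46: short payload padded up
print(len(padded_payload(bytes(100))))   # 100: long payload left as is
```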

The 802.3 frame is a MAC sublayer frame, therefore, in accordance with the 802.2 standard, an LLC sublayer frame is inserted into its data field with the start and end flags removed. The LLC frame format has been described above. Since the LLC frame has a header length of 3 (in LLC1 mode) or 4 bytes (in LLC2 mode), the maximum data field size is reduced to 1497 or 1496 bytes.

Figure 11.7. Ethernet frame formats

