The I-cache and D-cache memories may comprise high-speed static random access memories (SRAMs) that are addressable by their respective processors. Each cache memory is configured to store a predetermined amount of data or instructions. Those skilled in the art will appreciate that the memories may be organized using one or more cache arrangements known in the art, including directly mapped caches, associatively mapped caches, set-associatively mapped caches, etc.
The main memory comprises a plurality of storage locations that are addressable by the processors and the network interfaces via the system controller. The memory comprises a form of random access memory (RAM) whose contents are generally cleared by a power cycle or other reboot operation.
It will be apparent to those skilled in the art that the memory may also comprise other memory means, including various computer-readable media, for storing program instructions and data structures pertaining to the operation of the intermediate network node. The router operating system, portions of which are typically resident in the memory and executed by the processor, functionally organizes the intermediate network node by, inter alia, invoking network operations that support other software processes executing on the intermediate node.
Illustratively, the operating system manages the contents of a set of routing information. The operating system may update the contents of the routing information based on network topology information it exchanges with other intermediate network nodes. For instance, the network topology information may be transported by conventional link-state data packets, such as OSPF packets, that are addressed to the intermediate network node and processed by the router operating system. The forwarding engine, portions of which are typically resident in the memory and executed by the processor, may perform routing operations on data packets received at the network interfaces. The forwarding engine implements a directed forwarding graph which it uses to render forwarding decisions for the received data packets.
The forwarding graph includes a plurality of graph nodes, each associated with a corresponding forwarding operation. For instance, the graph may include a first graph node that retrieves data packets for the forwarding engine to process, while a second graph node may perform Ethernet decoding operations. Accordingly, each graph node is associated with a corresponding set of forwarding instructions. In accordance with the present invention, the forwarding engine is configured to process a plurality of packets concurrently, i.e., as a vector of packets.
More specifically, when the vector reaches a graph node, the graph node's forwarding instructions are loaded into the I-cache and are repeatedly executed for each packet in the vector. As a result, the I-cache misses and D-cache misses incurred when executing the forwarding instructions occur on a per-vector basis, and not on a per-packet basis as in previous implementations.
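The per-vector execution described above can be sketched as follows. This is an illustrative model, not the patent's implementation: the packet record, the `GraphNode` class, and the use of a Python function to stand in for a node's forwarding instructions are all assumptions for exposition.

```python
# Hypothetical sketch of per-vector graph-node execution. A node's
# "forwarding instructions" are modeled as a per-packet function.
from dataclasses import dataclass

@dataclass
class Packet:
    data: bytes

class GraphNode:
    def __init__(self, name, instructions):
        self.name = name
        self.instructions = instructions  # per-packet forwarding operation

    def process_vector(self, vector):
        # The node's instructions are fetched once (in hardware, loaded
        # into the I-cache) and then executed repeatedly for every packet
        # in the vector, so instruction-cache misses are paid per vector
        # rather than per packet.
        return [self.instructions(pkt) for pkt in vector]

decode = GraphNode("ethernet-decode", lambda pkt: len(pkt.data))
results = decode.process_vector([Packet(b"\x00" * 64), Packet(b"\x00" * 128)])
```

The key point is the inner loop: one set of instructions, many packets.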
Consequently, the forwarding graph's total number of cache misses per packet is significantly reduced. The forwarding engine may measure statistics that characterize the graph's performance; such measured statistics may include, inter alia, the rate at which packets are input to the forwarding graph and the average time required to process packets through the graph. For instance, the forwarding engine first may identify a range of time intervals which yield meaningful sample sizes, then randomly select a sampling interval within the identified range of intervals.
The forwarding engine uses its measured statistics to control the rate at which data packets are processed in the forwarding graph. In a preferred embodiment, the forwarding engine adjusts one or more forwarding-graph parameters to prevent the average latency through the graph from exceeding a predetermined target latency.
For example, the forwarding engine may use its measured statistics to adaptively select the number of packets per vector, i.e., the vector size. Further, each graph node may be associated with a corresponding timer interval which defines the maximum amount of time the node may wait before dispatching its packets. Each data buffer may be configured to store up to a fixed amount of packet data. The interface that received the data then enqueues a buffer pointer on its associated ingress ring.
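One way the adaptive vector-size selection might look is sketched below. The proportional back-off rule, the growth increment, and the size bounds are assumptions for illustration; the source only states that measured latency statistics drive the choice of vector size.

```python
# Illustrative controller: shrink the vector size when measured average
# latency exceeds the target, grow it gradually otherwise. Constants
# (halving, +8 growth, bounds) are assumed, not taken from the source.
def select_vector_size(current_size, avg_latency_us, target_latency_us,
                       min_size=1, max_size=256):
    if avg_latency_us > target_latency_us:
        return max(min_size, current_size // 2)   # back off quickly
    return min(max_size, current_size + 8)        # grow gradually

size = select_vector_size(64, avg_latency_us=120, target_latency_us=100)
```

Larger vectors amortize instruction-cache misses over more packets; smaller vectors bound the latency each packet spends waiting in the graph, which is the trade-off such a controller balances.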
The enqueued buffer pointer stores a value that may be used to locate the buffer in the memory. This process is continued until the entire data packet has been received by the network interface and stored in one or more data buffers. Next, the forwarding engine detects, e.g., by polling the ingress ring, that a received packet is ready to be processed. Thereafter, the engine dequeues the packet's buffer pointers from the network interface's ingress ring and processes the packet in accordance with the forwarding graph. Alternatively, the packet may be passed to the router operating system for processing; such a situation may arise if the received packet is a network management packet, such as an OSPF packet.
Having rendered a forwarding decision for the received data packet, the forwarding engine enqueues the packet's buffer pointers on an appropriate egress ring corresponding to a network interface coupled to the packet's next destination.
The graph includes a plurality of graph nodes, each node corresponding to a different set of forwarding operations that may be performed by the processor executing the forwarding engine. In accordance with an illustrative embodiment, each graph node is configured to process a plurality of data packets concurrently, rather than one packet at a time. Suppose a network interface receives one or more data packets. The network interface stores the packet data in one or more data buffers. The forwarding engine instructs the processor to periodically poll the contents of the ingress ring.
As a result, the processor detects the enqueued buffer pointer(s) on the ingress ring, thus indicating that the received packet(s) is ready to be processed. Next, the processor loads instructions, e.g., the forwarding instructions associated with the appropriate graph node. More specifically, the processor loads the graph-node instructions into its I-cache if they are not already stored therein and then sequentially executes the instructions. Notably, the order in which the buffer pointers are enqueued on the ingress ring coincides with the order in which their referenced data packets were received at the network interfaces. Thus, as the processor dequeues buffer pointers from the head of the ingress ring, the processor can use the dequeued pointers to retrieve the data packets in the same order in which they were received.
The data packets are then added to a vector in a manner that preserves the relative order in which the packets were received. For instance, the data packet referenced by the vector's first packet pointer is received prior to the packet referenced by the vector's second pointer, and so forth. The vector contains one or more packet pointers, each referencing a received data packet.
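The order-preserving transfer from ingress ring to vector can be sketched as below. The ring is modeled as a simple FIFO of buffer-pointer values (here, integers standing in for memory addresses); all names are illustrative.

```python
# Sketch: build a packet vector from an ingress ring while preserving
# arrival order. Buffer pointers are dequeued from the head of the ring.
from collections import deque

def fill_vector(ingress_ring, max_vector_size):
    vector = []
    while ingress_ring and len(vector) < max_vector_size:
        # Dequeuing from the head means the vector's packet pointers
        # appear in the same order the packets were received.
        vector.append(ingress_ring.popleft())
    return vector

ring = deque([101, 102, 103, 104, 105])   # enqueued in arrival order
v = fill_vector(ring, max_vector_size=4)
```

Any pointers beyond the vector's maximum size simply remain on the ring for the next vector.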
Preferably, the vector may store up to a predetermined maximum number of packet pointers. As used herein, each buffer chain comprises an ordered list of one or more data buffers which collectively store a data packet received by the intermediate network node. Accordingly, each packet pointer stores a value referencing the memory location of the first data buffer in its corresponding list of data buffers.
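The buffer-chain arrangement above can be modeled as a linked list of buffers, each holding part of one packet. The field names and the chain-walking helper below are assumptions for illustration, not the patent's actual layout.

```python
# Hypothetical data buffer with a chain pointer and a byte count (part
# of the software context) plus a packet-data portion.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DataBuffer:
    next: Optional["DataBuffer"]   # adjacent buffer in the chain, or None
    length: int                    # amount of packet data stored here
    data: bytes                    # packet-data portion

def reassemble(head: DataBuffer) -> bytes:
    # Walk the ordered buffer chain and concatenate the packet data.
    out = b""
    buf = head
    while buf is not None:
        out += buf.data[:buf.length]
        buf = buf.next
    return out

tail = DataBuffer(next=None, length=3, data=b"def")
head = DataBuffer(next=tail, length=3, data=b"abc")
```

A packet pointer in the vector would reference `head`, the first buffer in the chain.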
As shown, each data buffer includes a software-context portion and a packet-data portion. The software context, which is preferably one cache line in length, stores metadata describing the buffer's contents. For instance, the software context may include a pointer whose value references the memory location of an adjacent buffer in a buffer chain. In addition, the software context may store other information including, but not limited to, the amount of packet data stored in the buffer, the location of packet-header information in the buffer chain, etc. The packet-data portion is configured to store a predetermined amount of packet data. Referring again to FIG.
Namely, each data packet in the vector may be associated with a corresponding data value in the AUX data. Therefore, when the vector contains N data packets, an associated set of N data values is contained in the vector's set of AUX data.
Preferably, each data value stores a relatively small amount of data, e.g., smaller than a cache line. The AUX data values may include, inter alia, various packet-related information, such as packet-type identifiers, hash keys, and so forth.
Advantageously, the AUX data may be used to reduce the number of data read operations that are performed by the processor executing the forwarding engine, and thus may reduce the number of cache misses that occur in the D-cache. Data is usually retrieved by the processor in units of cache lines.
To retrieve an AUX data value conventionally, the processor has to retrieve at least one full cache line worth of data from the main memory, even if the AUX value is significantly smaller in size than a full cache line. Thus, retrieval of N relatively small AUX data values in prior implementations would result in a relatively large number of data read operations to the memory. In contrast, according to the illustrative embodiment, the AUX data may be passed between graph nodes and stored directly in the D-cache.
As such, forwarding instructions at the receiving graph node can access the set of AUX data values from the cache memory without having to retrieve larger blocks of data from the main memory. In this way, the processor may significantly reduce the number of data read operations it performs as compared with the previous implementations. In the illustrative embodiment, the AUX data is organized as a separate vector which is passed between graph nodes in the forwarding graph. In this arrangement, the AUX data values may be more easily processed by a vector processor (not shown) which may be coupled to the processor and adapted to execute graph-node forwarding instructions.
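The AUX arrangement amounts to a compact vector that runs parallel to the packet vector, one small value per packet. In the sketch below, the packet contents and the string type tags are illustrative stand-ins; the point is that a downstream node consults only the compact AUX vector instead of re-reading packet headers from memory.

```python
# Sketch: a packet vector with a parallel AUX vector of small per-packet
# values (here, packet-type tags standing in for Ethernet-type values).
packets = [b"pkt0", b"pkt1", b"pkt2"]
aux = ["ipv4", "pppoe", "ipv4"]   # one small value per packet, same order

def count_ipv4(aux_vector):
    # The receiving node branches on the compact AUX values alone,
    # avoiding a full cache-line read per packet header.
    return sum(1 for tag in aux_vector if tag == "ipv4")

n = count_ipv4(aux)
```

With N packets there are N AUX values, and many of them fit in a single cache line, which is the source of the read-operation savings.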
Those skilled in the art will appreciate that the AUX data alternatively may be concatenated or otherwise incorporated into the vector itself. Yet other embodiments may store the packets' AUX data in different memory locations, such as in the packets' software-context portions, or may not utilize AUX data at all.
When a full vector's worth of packets has been acquired, or the graph node's timer expires, the vector is dispatched to the next graph node. Here, it is noted that the interval of the timer may be selected by the forwarding engine so as to prevent excessive processing delays at the graph node. For instance, if data packets are received at the network interfaces at a relatively high rate, the timer interval may be adjusted accordingly.
After the vector and AUX data have been dispatched to the graph node, forwarding instructions for the graph node are loaded into the I-cache and executed by the processor. The forwarding instructions direct the processor to perform Ethernet-decode operations for packets in the vector. As shown, the Ethernet-decode operations examine each packet in the vector and determine whether the packet's Ethernet-type field stores a value corresponding to an IPv4 datagram or a PPPoE PDU. Illustratively, the AUX data values dispatched from the graph node may store the packets' Ethernet-type field values so as to reduce the number of data read operations that are performed by instructions at the graph node. Often, most packets received at the intermediate network node are of the same packet type.
However, some of the data packets in this example may be directed from the node to a graph node (not shown) configured to process PPPoE PDUs. Therefore, after being processed at the graph node, the vector is essentially partitioned into two subsets, i.e., IPv4 packets and PPPoE packets. For simplicity, it is assumed that only these two types of data packets are processed by the forwarding graph; clearly, other embodiments may employ forwarding graphs that are configured to process other types of data packets. The two subsets of packets may be stored in respective vectors V1 and V2, which are routed to appropriate subsequent graph nodes in the forwarding graph. In addition, each of the vectors may be associated with corresponding sets of AUX data.
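The partitioning step at the Ethernet-decode node can be sketched as follows. String tags stand in for the packets' Ethernet-type field values, and the function name is an assumption; the essential behavior is that each packet, with its AUX value, lands in exactly one of two output vectors.

```python
# Illustrative partition of a vector (and its AUX set) into two output
# vectors V1 and V2 by packet type at an Ethernet-decode node.
def partition(vector, aux):
    v1, aux1 = [], []   # e.g., IPv4 datagrams
    v2, aux2 = [], []   # e.g., PPPoE PDUs
    for pkt, tag in zip(vector, aux):
        if tag == "ipv4":
            v1.append(pkt)
            aux1.append(tag)
        else:
            v2.append(pkt)
            aux2.append(tag)
    return (v1, aux1), (v2, aux2)

(v1, a1), (v2, a2) = partition(["p0", "p1", "p2"], ["ipv4", "pppoe", "ipv4"])
```

Relative order is preserved within each output vector, matching the order-preserving property described earlier.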
The AUX data sets need not contain the same type of data values. For instance, while one set of AUX data values may store Ethernet-type field values, other sets may store different data values, such as hash-key identifiers, etc. Operationally, a timer associated with the graph node starts running once the vector is received at the graph node. The packets contained in the vector are processed and assigned to an appropriate output vector. The graph node continues to receive and process vectors until one or both of the output vectors contains a full vector's worth of packets, or the timer expires.
When any of these events occurs, the vectors are dispatched to their next, subsequent graph nodes. For example, suppose the timer expires before a full vector is acquired at the graph node. In this case, the partially-filled vectors are dispatched to their appropriate subsequent graph nodes.
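The dispatch rule just described (send a vector when it is full or when the node's timer expires, whichever comes first) can be captured in a few lines. Modeling the timer as a deadline timestamp is an assumption for exposition.

```python
# Sketch of the dispatch condition: full vector OR expired node timer.
def should_dispatch(vector, max_size, now, deadline):
    return len(vector) >= max_size or now >= deadline

# Partially filled vector, timer expired -> dispatch anyway, which bounds
# the latency a packet can accumulate waiting for a full vector.
dispatch = should_dispatch(vector=["p0", "p1"], max_size=4,
                           now=105.0, deadline=100.0)
```

The timer term is what prevents excessive per-node delay at low packet rates, while the full-vector term preserves the cache-amortization benefit at high rates.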
It is important to note that any data packets that have not yet been processed at the graph node, and thus have not been assigned to either output vector, are not dispatched along with the vectors and their associated AUX data. The vector and its associated AUX data are received at the next graph node, which is configured to perform IPv4 validation operations for the vector's contained IPv4 data packets. A set of forwarding instructions associated with the graph node is loaded in the I-cache, and the instructions are executed by the processor.
In addition, a node timer begins running. Once the processor finishes validating the IPv4 packets in the vector, or the timer expires, a vector of processed packets is transferred to the graph node whose associated forwarding instructions perform IPv4 forwarding operations. A set of AUX data also may be transferred to that graph node. Having dispatched the vector and the AUX data, the forwarding engine loads instructions associated with the graph node into the I-cache and starts running a node timer associated with the graph node. The graph-node instructions direct the processor to render forwarding decisions for the packets contained in the received vector. For example, the instructions may direct the processor to perform an address lookup for each of the vector's data packets, e.g., using an MTRIE data structure.
In this way, address lookups for multiple IPv4 packets in the vector may rely on the same MTRIE data, thereby reducing the number of data read operations that the processor has to perform. The result of the MTRIE lookups may direct the processor to locate routing information, which is then used by the processor to render forwarding decisions for the vector's packets. After a forwarding decision has been made for a data packet, the packet is added to an output vector. When forwarding decisions have been made for each of the data packets in the vector, or the timer expires, the vector and its associated AUX data are dispatched to the next graph node.

The packet is then delivered to one of a plurality of forwarding engines 42, 44 for outbound flow. One forwarding engine 42 is associated with packet forwarding via the primary IP route while the other forwarding engine 44 is associated with packet forwarding via the secondary NBMA route.
Routing decisions by route calculation and control 36 are made based on the addresses associated with a destination host node IP address, using methods not disclosed in the present invention to determine a route over one of a plurality of NBMA and BMA networks. Optionally, while the discovery and storage of additional address information is occurring with regard to a particular packet, subsequent incoming packets may be forwarded directly to the route calculation component (step G, above). In accordance with a further embodiment of the invention, a routing node may apply a time limit to the retention of the addresses in fields 2 and 3 of the table. This time limit is used to accommodate any changes in addresses that may occur periodically and to limit the size of the table.
In accordance with another aspect of the invention, nodes not employing the invention may perform packet inspection, identification, route calculation and forwarding regardless of additional address information in the packet header Option Data field. Other modifications will be apparent to those skilled in the art and, therefore, the invention is defined in the claims.

In a packet communications network comprising primary and secondary packet communication networks, a method and apparatus are disclosed for using a primary path connection to exchange additional address information between routing nodes so that the routing nodes can discover secondary path connections between routing nodes that may be used for future communication.
If a packet merits additional address information, additional address information relating to a secondary communication network address for a source routing node is included in the packet header of an outgoing packet. It is determined whether an incoming packet includes additional address information and if so, the additional address information is stored so that it is associated with the primary network address of the source host node of the packet.
The primary network addressing information, and any secondary network addressing information, is passed to the route calculation component of the routing node. The route calculation component makes the routing decisions and the packet is forwarded to a forwarding engine. Step D: The header of the packet entering routing node 46 at the processor 30 is examined to determine whether additional address information relating to the source host node may be learned. Step H: The IP address of the destination host node is passed to route calculation and control 36. Step I: The packet is then delivered to one of a plurality of forwarding engines 42, 44 for outbound flow. We claim: 1.
A method for inclusion of additional address information in a packet header of an outgoing packet at a routing node of a primary packet communication network,
B if step A determines that said source host node is zero hops away, adding to said packet header said additional address information;
C forwarding said destination host node primary network address to said route calculation component;
E forwarding said packet to one of said plurality of forwarding engines for outbound flow.
The method of claim 1, wherein step A comprises examining said packet header for a Time to Live value. The method of claim 1, wherein step A comprises determining whether said source host node is present in a table of nodes that are zero hops away. The method of claim 1, further comprising the following steps if step A determines that said source host node is zero hops away:
A1 determining whether said additional address information has been included in previously forwarded packets having said destination address;
A2 if step A1 determines said additional address information has been included in previously forwarded packets having said destination address, advancing to step C ;.
A3 if step A1 determines said additional address information has not been included in previously forwarded packets having said destination address, signifying that said information has been included. The method of claim 4, wherein said signifying step comprises storing said destination host node primary network address in a table of destinations and said determining step comprises determining whether said destination host node primary network address is present in said table of destinations.
The method of claim 5, wherein if a presence of said destination node primary network address in said table of destinations is determined, said method further comprises forwarding associated additional address information from said table of destinations to said route calculation component. The method of claim 1, wherein step B further comprises a step of adding address information indicators to said packet header. The method of claim 1, wherein said primary packet communication network is the Internet. A method for discovery within an incoming packet and storage of additional address information relating to a path connection over a secondary communication network at a routing node of a primary packet communication network,
The method of claim 9, wherein step b is accomplished by determining whether said incoming packet has address information indicators in said packet header. The method of claim 9, wherein step c is performed only if a determination can be made that said additional address information has not been previously stored in said table associated with said source host node primary network address.
The method of claim 9, wherein if said packet header is determined to include said additional address information, step b further comprises the following steps: The method of claim 12, wherein steps b1 and b2 are performed only if a determination can be made that said additional address information has not been previously stored in said table associated with said source host node primary network address.
The method of claim 9, wherein said primary packet communication network is the Internet. The method of claim 9, further comprising a step of forwarding subsequent packets directly to a route calculation component if said discovery and storage of additional address information relating to a path connection over a secondary communication network is occurring with regard to said incoming packet.
US7961636B1 - Vectorized software packet forwarding - Google Patents
L2 mismatch timeouts —Number of malformed or short packets that caused the incoming packet handler to discard the frame as unreadable.
If this value is ever nonzero, the PIC is probably malfunctioning. Output Errors —Output errors on the interface. Carrier transitions —Number of times the interface has gone from down to up. This number does not normally increment quickly, increasing only when the cable is unplugged, the far-end system is powered down and then up, or another problem occurs.
Collisions —Number of Ethernet collisions. If it is nonzero, there is a software bug. Aged packets —Number of packets that remained in shared packet SDRAM so long that the system automatically purged them. The value in this field should never increment. If it does, it is most likely a software bug or possibly malfunctioning hardware.
CoS queue number and its associated user-configured forwarding class name.
Displayed on IQ2 interfaces. Queue counters Egress —CoS queue number and its associated user-configured forwarding class name. Tail-dropped packets —Number of packets dropped because of tail drop. RL-dropped packets —Number of packets dropped due to rate limiting.
RL-dropped bytes —Number of bytes dropped due to rate limiting. On all other M Series routers, the output classifies dropped packets into the following categories:
Low —Number of low-loss priority packets dropped because of RED.
Medium-low —Number of medium-low loss priority packets dropped because of RED.
Medium-high —Number of medium-high loss priority packets dropped because of RED.
High —Number of high-loss priority packets dropped because of RED. The byte counts vary by interface hardware. On all other M Series routers, the output classifies dropped bytes into the following categories:.