US20040022094A1 - Cache usage for concurrent multiple streams - Google Patents

Cache usage for concurrent multiple streams

Info

Publication number
US20040022094A1
Authority
US
United States
Prior art keywords
cache
stream
transaction
read
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/358,618
Inventor
Sivakumar Radhakrishnan
Chitra Natarajan
Kenneth Creta
Bradford Congdon
Hui Lu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Corp
Original Assignee
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corp filed Critical Intel Corp
Priority to US10/358,618
Assigned to INTEL CORPORATION. Assignors: LU, HUI; NATARAJAN, CHITRA; CONGDON, BRADFORD; RADHAKRISHNAN, SIVAKUMAR; CRETA, KENNETH
Publication of US20040022094A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/0813Multiuser, multiprocessor or multiprocessing cache systems with a network or matrix configuration
    • G06F12/0815Cache consistency protocols
    • G06F12/0817Cache consistency protocols using directory methods
    • G06F12/082Associative directories
    • G06F12/0831Cache consistency protocols using a bus scheme, e.g. with bus monitoring or watching means
    • G06F12/0833Cache consistency protocols using a bus scheme, e.g. with bus monitoring or watching means in combination with broadcast means (e.g. for invalidation or updating)
    • G06F12/0862Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch

Definitions

  • An embodiment of the invention pertains generally to processor systems, and in particular pertains to scalable processor systems.
  • a chipset encompasses the major system components that move data between the main memory, the processor(s) and the I/O devices.
  • System vendors have designed separate chipsets with different system architectures to address the needs of different server segments, or they use industry-standard components for low-end systems and design proprietary components for mid-range and high-end systems.
  • a stream is a contiguous sequence of requests from an agent typically connected to the processor and memory system via a chipset or the like.
  • the memory may include dynamic random access memory and the requests are processed in the same order as they are received. Processing the requests in the same order that they are received reduces memory bandwidth when, for example, a page replace conflict or DIMM turnaround conflict forces a transaction to wait for a prior transaction to finish.
  • current systems provide a single path for cache coherency operations and data transfer, causing cache coherency transactions to wait for data transfers, increasing snoop latency.
  • Coherent transactions limit the bandwidth for transactions from a peripheral input-output (I/O) bus in processor-based systems such as desktop computers, laptop computers and servers.
  • processor-based systems typically have a host bus that couples a processor and main memory to ports for I/O devices.
  • the I/O devices such as Ethernet cards, couple to the host bus through an I/O controller or bridge via a bus such as a peripheral component interconnect (PCI) bus.
  • the I/O bus has ordering rules that govern the order of handling of transactions, so an I/O device may count on the ordering when issuing transactions. Because I/O devices may count on the ordering of transactions, they may issue transactions that would otherwise cause unpredictable results.
  • After an I/O device issues a read transaction for a memory line and subsequently issues a write transaction for the memory line, the I/O device expects the read completion to return the data prior to the new data being written.
  • the host bus may be an unordered domain that does not guarantee that transactions are carried out in the order received from the PCI bus. In these situations, the I/O controller governs the order of transactions.
  • the I/O controller places the transactions in an ordering queue in the order received to govern the order of inbound transactions (transactions toward the main memory and/or processor) from an I/O bus, and waits to transmit the inbound transaction across the unordered interface until the ordering rules corresponding to each transaction are satisfied.
  • issuing transactions one at a time as the transaction satisfies ordering rules may limit the latency of a transaction to a nominal latency equal to the nominal snoop latency for the system.
  • transactions unnecessarily wait in the ordering queue for coherent transactions with unrelated ordering requirements.
  • a read transaction received subsequent to a write transaction for the same address will wait for the write transaction to issue even though the read transaction may have issued from a different I/O device, subjecting the read transaction to ordering rules independent from the ordering rules of the write transaction.
  • the latency of the snoop request, or ownership request, for the write transaction adds to the latency of the read transaction, and when a conflict exists with the issuance of the ownership request for the write transaction, the latency of the write transaction, as well as that of the read transaction, will be longer than the nominal snoop latency for the system.
  • I/O devices continue to demand increasing bandwidth, increasing the amount of time transactions remain in an ordering queue.
  • the number of delays resulting from a foreseeable read transaction that waits to access a memory line across the unordered interface, and from a read transaction that waits for a write transaction to satisfy ordering requirements even though the write transaction will write to a different memory line, can escalate in proportion with bandwidth.
  • FIGS. 1 - 4 depict embodiments of a scalable system.
  • FIG. 5 depicts an embodiment of a Scalable Node Controller.
  • FIG. 6 depicts an embodiment of a Scalability Port Switch.
  • FIG. 7 depicts an embodiment of an I/O Hub.
  • FIG. 8 depicts a table to compare embodiments comprising partitioning and/or a hot plug mechanism.
  • FIG. 9 depicts an embodiment of an apparatus such as an I/O Hub for a scalable system.
  • FIG. 10 depicts another embodiment of an apparatus such as an I/O Hub or a Hub Interface thereof for a scalable system.
  • FIGS. 11 A-B depict example embodiments and comparisons of prefetch profiles and lookup tables for a scalable system.
  • FIG. 12 depicts an example operation of an embodiment comprising unified cache.
  • FIG. 13 depicts an example table for operation of an inactivity timer as an embodiment of a timer mechanism as well as comparisons for different tables and for operation without a timer.
  • FIG. 14 depicts a flow chart of an embodiment of a scalable system.
  • FIG. 15 depicts an embodiment of a machine-readable medium comprising instructions for a scalable system.
  • FIG. 16 depicts another example embodiment of a scalable switch comprising a shared bypass bus structure.
  • FIGS. 17 - 20 depict example embodiments of a shared bypass bus structure.
  • FIG. 21 depicts an example embodiment of an apparatus to re-order memory.
  • Various embodiments of the invention may improve the efficient use of a cache in a scalable system that supports concurrent multiple streams passing through the cache between memory and the requesting devices. Some embodiments use adaptive pre-fetching of memory data using a dynamic table to determine the maximum number of pre-fetched cache lines permissible per stream. Other embodiments dynamically allocate the cache to the active streams. Still other embodiments use a programmable timer to deallocate inactive streams, thereby freeing up a portion of the cache for other streams.
  • In FIG. 1 there is shown an embodiment that may comprise a single-bus shared memory architecture supporting up to four processors. Another embodiment may support a distributed shared memory architecture with up to 16 or more processors. Other embodiments are also possible.
  • processors 100 A and the memory controller in scalable node controller (SNC) 110 A may be attached to a common bus, front side bus 105 A.
  • This architecture may provide good performance and low cost and may be well suited for low-end servers.
  • processors 100 A may have a private cache and may use the internal bus interface unit to monitor memory accesses on the bus. For this reason, a cache coherency protocol that may be used in these systems may be called a snooping protocol.
  • the embodiment shown may comprise two main components: Scalable Node Controller (SNC) 110 A and the I/O Hub (IOH) 120 A.
  • the SNC 110 A may support one to four processors 100 A and may interface directly or substantially directly to the processors' Front Side Bus 105 A.
  • the main memory controller in the SNC 110 A may support four memory channels.
  • a double data rate (DDR) memory hub (DMH) on each memory channel may control eight DDR dual in-line memory modules (DIMM).
  • the SNC 110 A may also interface to a Firmware Hub (FWH) 117 A, which may serve as a boot ROM for the system.
  • the SNC 110 A may couple to the IOH 120 A through a pair of Scalability Ports (SP). Each SP may provide 3.2 GB/s of bandwidth in each direction.
  • the IOH 120 A may support four Hub interfaces to connect to various bridges, such as a PCI/PCI-X bridge 125 A and/or Infiniband® bridge 130 A. A narrower version of the Hub interface may support legacy I/O devices 135 A.
  • the embodiment of FIG. 1 may be limited by the bandwidth and the electrical limits of Front Side Bus 105 A.
  • FIG. 2 shows an embodiment with a multi-node scheme where clusters of multiple processors may be interconnected with Scalability Port Switch (SPS) 140 A and SPS 140 B for the illustrated embodiment of a 16-processor configuration.
  • SPS 140 A and 140 B may provide the interconnection and coherency support for building multi-node multiprocessor systems.
  • SPS 140 A for instance, may comprise six SP interfaces to interconnect the SNC and IOH components.
  • memory may be distributed physically across nodes but may also be visible from all processors as a single physical or logical address space.
  • multi-node systems may provide the programming simplicity of shared memory architectures.
  • Distributed memory architectures may exhibit a significant difference in latency between local and remote memory accesses, sometimes by an order of magnitude.
  • software optimizations may mitigate the large remote-to-local latency ratio by moving or copying pages to the local memory.
  • the ratio of remote to local latency in other embodiments, such as a multi-node configuration may be about 2.2, which may not require such software optimizations for scalable performance.
  • the SP protocol may be designed for scalability and such a protocol may facilitate the design of specialized switch components to build large scale coherent multi-chassis systems.
  • FIG. 3 depicts a 64 processor configuration where four 16-processor chassis are interconnected through dedicated point-to-point links.
  • the embodiment may comprise processors such as processors 100 A-D; processor interface circuitry, such as scalable node controllers 110 A-B; memories 115 A-B; I/O hub circuitry such as I/O hubs 120 A-B; and I/O devices such as bridges 160 and 190 connected to agents 162 , 164 , 192 and 194 .
  • support circuitry may couple the processor interface circuitry with the multiple hubs to facilitate transactions between I/O hubs 120 A-B and processors 100 A-D.
  • Scalable node controllers 110 A and 110 B may couple with processors 100 A-B and 100 C-D, respectively, to apportion tasks between the processors.
  • a scalable node controller 110 A may apportion processing requests between processor 100 A and processor 100 B, as well as between processors 100 A-B and processors 100 C-D, for instance, based upon the type of processing request and/or the backlog of processing requests for the processors 100 A-B and processors 100 C-D.
  • scalable node controller 110 A may also coordinate access to memory 115 A between the processors 100 A-B and the I/O hubs 120 A-B.
  • the support circuitry for multiple I/O hubs such as scalability port switches 140 A and 140 B, may direct traffic to scalable node controllers 110 A and 110 B based upon a backlog of transactions.
  • scalability port switches 140 A and 140 B may direct transactions from scalable node controllers 110 A and 110 B to I/O hubs 120 A and 120 B based upon destination addresses for the transactions.
  • memory 115 A and memory 115 B may share entries, or maintain copies of the same data.
  • memory 115 A and memory 115 B may comprise an entry that may not be shared so a write transaction may be forwarded to either memory 115 A or memory 115 B.
  • SNC 110 A may comprise a central component in the processor/memory sub-system.
  • SNC 110 A may comprise interfaces to the processors 100 A and 100 B, the memory 115 A, a firmware interface, and two scalability ports for accesses to I/O.
  • features of the SNC 110 A may comprise: support for up to four processors; 200 MHz DDR SDRAM support through a DDR Memory Hub (DMH) interface; two SPs to connect to the SPS 140 A and 140 B or the IOH 120 A and 120 B; and support for 32 DIMMs resulting in up to 128 GB per SNC 110 A and 110 B with 1 Gigabit (Gb) DDR devices.
  • SNC 110 A may comprise four high-speed point-to-point links to four DMHs that connect to components such as DDR DRAM components.
  • the four links may provide a peak memory bandwidth of 6.4 GB/s per node.
  • SNC 110 A may also buffer up to 8 KB of write data to prioritize reads over writes.
  • SNC 110 A may also implement interleaving and reordering to improve bandwidth and/or to reduce latency. Interleaving sequential accesses across many banks may optimize throughput and may minimize the effect of overhead. Reordering may allow conflict-free accesses to bypass requests to busy banks. Accesses may be sorted into four queues to minimize timing conflicts between accesses. If accesses are within a particular address range, they may be sorted by channel, then by least significant bank bit. Otherwise, they may be sorted by bank. An arbiter may choose from among the conflict-free accesses at the head of the four re-ordering queues. In many embodiments, these re-ordering policies may be chosen heuristically, deterministically, or by other techniques.
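  • As a rough illustration of the re-ordering scheme described above, the sketch below models four re-ordering queues and an arbiter that lets conflict-free accesses bypass requests to busy banks. The queue-selection rule, the busy-bank bookkeeping, and all names are assumptions for illustration, not the SNC's actual logic.

```python
# Hypothetical sketch of four memory re-ordering queues plus an arbiter.
# Accesses in the special address range are binned by channel and low bank
# bit; others are binned by bank. The arbiter issues any head-of-queue access
# whose bank is not busy, so conflict-free accesses bypass busy banks.
from collections import deque

NUM_QUEUES = 4

class ReorderQueues:
    def __init__(self):
        self.queues = [deque() for _ in range(NUM_QUEUES)]
        self.busy_banks = set()                 # banks still finishing a prior access

    def enqueue(self, access):
        # access: dict with "channel", "bank", "in_special_range" (bool)
        if access["in_special_range"]:
            idx = ((access["channel"] << 1) | (access["bank"] & 1)) % NUM_QUEUES
        else:
            idx = access["bank"] % NUM_QUEUES
        self.queues[idx].append(access)

    def arbitrate(self):
        # Choose from among the conflict-free accesses at the queue heads.
        for q in self.queues:
            if q and q[0]["bank"] not in self.busy_banks:
                access = q.popleft()
                self.busy_banks.add(access["bank"])
                return access
        return None                             # all heads conflict with busy banks

    def complete(self, bank):
        self.busy_banks.discard(bank)           # bank is free again
```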
  • SNC 110 A may comprise three main units: local access transaction tracker (LATT) 100 E, remote access transaction tracker (RATT) 110 E, and Data Buffer 120 E.
  • LATT 100 E may track processor requests.
  • LATT 100 E may convert processor requests to SP or memory controller requests and may return responses to the processors 100 A-D.
  • RATT 110 E may track inbound transactions from Scalability Ports until the necessary snoops and/or memory accesses are complete. Further, Data Buffer 120 E may transport and may hold data between the processor bus, memory interface, and the SP interfaces.
  • SNC 110 A and/or 110 B may comprise a hot page mechanism to tune memory latency, as described below.
  • Multi-node configurations may feature a shorter latency for local memory accesses.
  • the SNC 110 A may contain some memory that may track and count the number of accesses to each of more than one address location or range of address locations (the granularity may be programmable). This mechanism, referred to herein as the hot page mechanism, may track local or remote accesses.
  • a software developer may use a hot page mechanism to identify hot spots in the memory that is being accessed by remote nodes and may optimize or enhance the software to move those accesses to the local node.
  • the hot page mechanism may also be used for other forms of software optimizations.
  • SPS 140 A may comprise a coherent interconnect switch that connects SNC 110 A, SNC 110 B, SPS 140 B, I/O Hub 120 A, and I/O Hub 120 B through the Scalability Ports (SP).
  • some features of the SPS 140 A may comprise: six identical Scalability Ports with a total peak bandwidth of 38.4 GB/s; an Integrated snoop filter that may track the state of one or more cache lines in processor and IOH caches which may reduce snoop probes to remote nodes and may support an SP cache consistency protocol; and an Internal interconnect that may comprise a crossbar and network of buses for critical coherent traffic.
  • a shared crossbar bypass bus structure may be incorporated into SPS to provide an independent path for SP to coherency interleave transactions and vice versa.
  • cache look up and update operations may not be delayed as a result of, for example, data streaming.
  • the shared crossbar bypass bus structure may comprise parallel bits, a data valid qualifier, a virtual channel qualifier, and a multi-bit destination qualifier.
  • a coherency interleave or SP with data to send may assert its request-channel arbitration request or its response-channel arbitration request, according to the type of data to be sent.
  • the unit may also transmit its data locally to bus multiplexors and may assert its valid signal. This data may be propagated onto the shared bus by the multiplexors after the arbiter selects this coherency interleave or SP.
  • the arbiter may select one of the requesting units to own the bus in the next clock cycle. (Note: during idle conditions, one of the transmitters may always be “selected” as well).
  • the arbiter may send control signals to the bus multiplexors, and may send a selected signal to the transmitter that has been selected.
  • Various types of arbiters are well-known and are not further described herein to avoid obscuring other aspects of the SPS.
  • after the transmitting coherency interleave or SP observes, by its selected signal and its targeted destination's ready signal, that the data has been transmitted and absorbed, the transmitting coherency interleave or SP may deassert its valid qualifier or may proceed to send new data.
  • an SP may implement the physical, link, and part of the protocol layers.
  • the SP may comprise a point-to-point cache-consistent interface designed to build shared memory multiprocessor systems that may overcome the limitations of shared bus based architectures.
  • the embodiment depicts four centralized SP protocol (SPPC) and snoop filter (SF) units, which are interleaved for improved throughput and ease of physical design, although they form one logical unit. All the ports and SPPC/SF interleaves may be coupled by a crossbar (X-Bar) and network of buses. In several embodiments, these buses may reduce latency on critical operations.
  • the physical layer may use pin-efficient simultaneous bi-directional signaling technology, where the same signal pins may be used to send signals in both directions in a full duplex manner. In other embodiments, signal pins may be used to implement half duplex or other signaling technology.
  • the physical layer may comprise a source synchronous interface where the transmitter may send the clock along with the data and the receiver may use the clock to sample the data.
  • a scalability port interface may be, for example, 40 bits wide, with 32 of those bits used for transmitting data, 2 bits used for link layer control information, and 6 bits used for maintaining data integrity. The interface may operate at various rates, for example 800 million transfers/sec, which may result in a peak bandwidth of 3.2 GB/sec per port in each direction.
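  • As a quick check of the figures above (counting a gigabyte as 10^9 bytes), 32 data bits per transfer at 800 million transfers per second works out to 3.2 GB/s per port in each direction, and six such full-duplex ports give the 38.4 GB/s aggregate peak quoted earlier for the SPS. The short calculation below only restates that arithmetic.

```python
# Bandwidth arithmetic for the Scalability Port figures quoted in the text.
data_bits_per_transfer = 32
transfers_per_second = 800e6

per_port_each_direction = data_bits_per_transfer * transfers_per_second / 8 / 1e9
print(per_port_each_direction)                       # 3.2 GB/s per port, each direction

ports, directions = 6, 2                             # six SPs on the SPS, full duplex
print(per_port_each_direction * ports * directions)  # 38.4 GB/s aggregate peak
```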
  • the scalability port may comprise a packetized interface, where requests and responses may be multiplexed on the same physical medium and wherein each packet may contain a header to route the packet and to specify the attributes of the packet.
  • the effective bandwidth achieved on the interface may depend upon the distribution of packets of various sizes.
  • the SP may also be capable of delivering an effective bandwidth of 80% of the peak.
  • the SP link layer may support virtual channels and may provide flow control and reliable transmission. SP may use two virtual channels to build independent request and response virtual interconnect on a single physical interconnect.
  • flow control may be done using a credit-based scheme.
  • the unit of flow control may comprise a flit (sub-packet) that is four-transfers long on the interface.
  • the link layer may also be responsible for detecting transmission errors and may rely on a retry scheme using a modified version of “go-back-n” sliding window protocol for recovery.
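  • The link-layer ideas above, per-virtual-channel credit-based flow control with flit-sized units and a go-back-n retry window, can be sketched as follows. The credit count, window size, and class names are assumptions; the actual SP link layer is not described at this level of detail.

```python
# Minimal sketch: credits gate flit transmission per virtual channel, and a
# go-back-N window retransmits every unacknowledged flit after an error.
class SpLinkSender:
    def __init__(self, credits_per_vc=8, window=16):
        self.credits = {"request": credits_per_vc, "response": credits_per_vc}
        self.window = window
        self.next_seq = 0                    # sequence number of the next flit
        self.unacked = []                    # (seq, vc, flit) sent but not acknowledged

    def send_flit(self, vc, flit):
        if self.credits[vc] == 0 or len(self.unacked) >= self.window:
            return False                     # blocked: no credit or window full
        self.credits[vc] -= 1
        self.unacked.append((self.next_seq, vc, flit))
        self.next_seq += 1
        return True

    def on_ack(self, seq):
        # Receiver acknowledges everything up to and including seq.
        self.unacked = [e for e in self.unacked if e[0] > seq]

    def on_credit_return(self, vc, n=1):
        self.credits[vc] += n                # receiver freed buffer space

    def on_error(self):
        # Go-back-N: replay all unacknowledged flits in their original order.
        return [flit for (_seq, _vc, flit) in self.unacked]
```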
  • the SP protocol layer may implement the state machines and may provide resources for functionality such as cache consistency, translation lookaside buffer (TLB) consistency, synchronization, interrupt delivery, etc.
  • the protocol layer may be designed to support both Itanium™ and Xeon™ processor families.
  • the protocol may allow for high performance and flexible interconnect fabric by not relying on an ordered fabric for performance sensitive operations.
  • the SP consistency protocol may allow for cache lines to be in Modified, Exclusive, Shared or Invalid state (MESI) at the caching agents and may use an invalidation-based protocol.
  • the protocol may be built on the concept of sparse directory, called snoop filter, which may keep track of lines present in the caches rather than keeping track of all or many lines in memory.
  • Such a protocol may allow for entire snoop filters to be stored on the same component as the directory state machine for high performance, which may not have been possible with a conventional directory. Separation of snoop filter from the memory agent may be allowed to facilitate the building block philosophy, for example, by allowing the use of node controllers and I/O hubs that may be designed for low cost systems.
  • the building block philosophy may be supported efficiently through transactions that may access memory concurrently or substantially concurrently with coherency resolution.
  • the protocol may also provide coherent transactions that may be optimized for I/O device operations.
  • Conflict resolution on concurrent accesses to same cache line may be done in a relaxed and, in several embodiments, a distributed manner.
  • the Scalability Port consistency protocol may allow for extensions to large-scale systems through a second level distributed directory that may work in conjunction with a basic snoop filter.
  • the distributed SP protocol logic may perform address/request decoding to determine how packets may be routed in the SPS 140 A and/or 140 B.
  • SPPD may control data transfers between ports including modified data transfers.
  • the SPPC may comprise a programmable protocol engine that may process requests and responses and may spawn transactions. In some embodiments, SPPC may handle global ordering and may contain anti-starvation logic to guarantee fairness between nodes.
  • the combined snoop filter tag array size may be 1 MB and may maintain the state of, for instance, approximately 200K cache lines.
  • the combined snoop filter tag array may support up to 266M snoop filter lookup-and-update operations per second.
  • an entry may contain an address tag, a presence vector (one bit per node), the cache consistency protocol state (M/E, S, I), and ECC check bits.
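  • One way to picture a snoop filter entry as described above is the structure below: an address tag, a per-node presence vector, a coarse M/E, S, I state, and ECC check bits. The field widths are not given in the text; roughly 1 MB of tag array for about 200K lines suggests on the order of 40 bits of payload per entry, and the layout here is only an assumption.

```python
# Illustrative snoop filter entry; field widths and method names are assumed.
from dataclasses import dataclass

@dataclass
class SnoopFilterEntry:
    tag: int        # address tag identifying the tracked cache line
    presence: int   # bit vector, one bit per node that may hold the line
    state: str      # "ME" (modified/exclusive), "S" (shared), or "I" (invalid)
    ecc: int        # check bits protecting the entry

    def nodes_to_snoop(self):
        # Only nodes whose presence bit is set need to be probed, which is
        # how the snoop filter reduces snoop probes to remote nodes.
        return [n for n in range(self.presence.bit_length())
                if (self.presence >> n) & 1]
```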
  • I/O hubs 120 A and 120 B may operate to bridge transactions between an ordered domain and an unordered domain by routing traffic between I/O devices and scalability ports.
  • the I/O hubs 120 A and 120 B may provide peer-to-peer communication between I/O interfaces.
  • I/O hub 120 A may comprise unordered interface 142 , upbound path 144 , snoop filter 146 , and a hub interface 147 .
  • the hub interface 147 may comprise arbitration circuitry 170 , ordering queue 171 , read bypass queue 172 , ownership pre-fetch circuitry 174 , address logic and queue 148 , read cache and logic 173 , and I/O interface 150 .
  • the I/O hubs 120 A and 120 B may, in one embodiment, comprise a central component of the I/O subsystem of a server. Such an I/O hub may comprise a prefetch engine and read caches to deliver full bandwidth on data return; two SP interfaces to connect to either the SPSs 140 A and 140 B or the SNCs 110 A and 110 B; and hub interface 147 with a peak bandwidth of 1 gigabyte per second (GB/s).
  • the I/O hubs 120 A and 120 B may support a building block philosophy, which may result in a flexible and configurable I/O subsystem. Many embodiments may also comprise components such as a legacy I/O controller hub (ICH), a PCI/PCI-X bridge, for example bridges 160 and 190 , and a host controller adapter. Since I/O hubs 120 A and 120 B may interface a variety of different I/O bridges, the microarchitecture may be generically optimized for I/O traffic behavior.
  • the I/O hub may have internal structures that may comprise Read Caches 110 G, Write Cache and Data Buffer 120 G, Cache Directory 130 G, Local Request Buffer 150 G and Remote Request Buffer 140 G, Read Prefetch Engines 170 G, and Ordering Queues 180 G.
  • Read Caches 110 G may comprise, for example, a 4 KB Read Cache dedicated to a Hub Interface.
  • fully coherent read caches may allow an aggressive pre-fetching algorithm without exposure to stale data delivery.
  • a 4 KB Read Cache may be sufficient, in many embodiments, to accommodate enough read pre-fetches to hide memory latency.
  • independent read caches may prevent the traffic characteristics of one Hub interface from interfering with the traffic characteristics of the other Hub interfaces.
  • Write Cache and Data Buffer 120 G may comprise, for example, a write cache implemented in the I/O hub.
  • coherent write caching may promote combining of write data to a cache line granularity, potentially increasing the efficiency of the SP and decreasing snoop overhead on the system.
  • Cache Directory 130 G may comprise a directory that may track the cache lines held in the multiple read caches 110 G and the write cache 120 G.
  • the directory may also be responsible for tracking duplicate entries of shared lines.
  • Local Request Buffer 150 G and Remote Request Buffer 140 G may comprise buffers to track coherent transactions issued by the I/O hub (Local Request Buffer 150 G) and coherent transactions issued by other components (Remote Request Buffer 140 G).
  • the buffers may work together to detect access conflicts and may enforce cache consistency.
  • Read Prefetch Engines 170 G may comprise mechanisms to dynamically pre-fetch memory lines on behalf of the interfacing I/O devices.
  • Read Prefetch Engines 170 G may be optimized for traditional memory latencies so the I/O hub may be designed to prefetch beyond the requests issued by I/O devices for increased read bandwidth.
  • Ordering Queues 180 G may take advantage of the Scalability Port's inherently unordered protocol.
  • the I/O hub may increase or maximize performance by prefetching, pipelining, and parallelism.
  • unordered interface 142 may facilitate communication between I/O hub 120 A and a scalable node controller such as 110 A or 110 B with circuitry for a scalability port protocol layer, a scalability port link layer, and a scalability port physical layer.
  • unordered interface 142 may comprise simultaneous bi-directional signaling. Unordered interface 142 may couple to scalability port switches 140 A and 140 B to transmit transactions between scalability node controllers 110 A and 110 B and agents 162 and 164 .
  • Transactions between unordered interface 142 and scalability node controllers 110 A and 110 B may transmit in no particular order or in an order based upon the availability of resources or the ability for a target to complete a transaction.
  • the transmission order may not be based upon, for instance, a particular transaction order according to ordering rules of an I/O interface, such as a PCI bus.
  • agent 162 may initiate a transaction to write data to a memory line
  • agent 162 may transmit four packets to accomplish the write.
  • Bridge 160 may receive the four packets in order and forward the packets in order to I/O interface 150 .
  • Ordering queue 171 may maintain the order of the four packets to forward to the unordered interface 142 via the upbound path 144 .
  • Scalability port switch 140 A may receive the packets from unordered interface 142 and transmit the packets to memory 115 A and memory 115 B.
  • Upbound path 144 may comprise a path for hub interface 147 to issue transactions to the unordered interface 142 and to snoop filter 146 .
  • upbound path 144 may carry inbound coherent requests to unordered interface 142 , as well as ownership requests and read cache entry invalidations from ownership pre-fetch circuitry 174 and read cache and logic 173 , respectively, to snoop filter 146 .
  • upbound path 144 may comprise a pending transaction buffer to store a pending transaction on the unordered interface 142 until a scalability port switch 140 A or 140 B may retrieve or may be available to receive the pending transaction.
  • hub interface 147 may comprise arbitration circuitry 170 to grant access to upbound path 144 .
  • the arbitration circuitry 170 may provide substantially equivalent access to the unordered interface 142 .
  • the arbitration circuitry 170 may arbitrate between the ordering queue 171 and the read bypass queue 172 based upon a priority associated with, or an agent associated with, an enqueued transaction.
  • Snoop filter 146 may issue ownership requests on behalf of transactions in ordering queue 171 , return ownership completions, monitor pending transactions on unordered interface 142 , and respond to downbound snoop requests from the unordered interface 142 or from a peer hub interface.
  • snoop filter 146 may perform conflict checks between snoop requests, ownership requests, and ownerships of memory lines in memory 115 A or memory 115 B. For example, a write transaction waiting at ordering queue 171 to write data to memory line one in memory 115 A may reach a top of ordering queue 171 . After the write transaction for memory line one may reach the top of ordering queue 171 , hub interface 147 may request ownership of memory line one for the write transaction via snoop filter 146 .
  • Snoop filter 146 may perform a conflict check with the ownership request and determine that the ownership request may conflict with the ownership of memory line one by a pending write transaction on unordered interface 142 . Snoop filter 146 may respond to the ownership request by transmitting an invalidation request to hub interface 147 .
  • hub interface 147 may reissue a request for ownership of memory line one for the write transaction and snoop filter 146 may perform a conflict check and determine that no conflict exists with an ownership by the write transaction. Then, snoop filter 146 may transmit a request for ownership to scalable node controller 110 A via scalability port switch 140 A. In response, snoop filter 146 may receive an ownership completion for memory line one and may return the ownership completion to hub interface 147 . In some embodiments, hub interface 147 may receive an ownership completion for a transaction and may modify the coherency state of the transaction to ‘exclusive’. In several of these embodiments, snoop filter 146 may maintain the coherency state of the transaction in a buffer.
  • Hub interface 147 may maintain a transaction order for transactions received via I/O interface 150 in accordance with ordering rules associated with bridge 160 . Hub interface 147 may also determine the coherency state of transactions received via I/O interface 150 . For example, hub interface 147 may receive a write transaction from agent 164 via bridge 160 and place the header for the write transaction in ordering queue 171 . Substantially simultaneously, ownership pre-fetch circuitry 174 may request ownership of the memory line associated with the write transaction via snoop filter 146 . The ownership request may be referred to as ownership pre-fetching since the write transaction may not yet satisfy ordering rules associated with I/O interface 150 . In alternate embodiments, when the ordering queue 171 is empty and no transactions are pending on the unordered interface 142 , the write transaction may bypass ordering queue 171 and transmit to upbound path 144 to transmit across unordered interface 142 .
  • Snoop filter 146 may receive the request for ownership and perform a conflict check. In some instances, snoop filter 146 may determine a conflict with the ownership by the write transaction. Since the coherency state of the write transaction may be pending when received, snoop filter 146 may deny the request for ownership. After the transaction order of the write transaction may satisfy ordering rules, or in some embodiments after the write transaction reaches the top of ordering queue 171 , hub interface 147 may reissue a request for ownership. In response to receiving an ownership completion for the write transaction, hub interface 147 may change the coherency state of the write transaction to ‘exclusive’ and then to ‘modified’.
  • hub interface 147 may change the coherency state of the write transaction directly to ‘modified’, making the data of the write transaction globally visible. In several embodiments, hub interface 147 may transmit the transaction header of the write transaction to snoop filter 146 to indicate the change in the coherency state to ‘modified’.
  • hub interface 147 may change the coherency state of the write transaction to ‘exclusive’ and maintain the transaction in ‘exclusive’ state until the write transaction may satisfy the corresponding ordering rules, unless the ownership may be invalidated, or stolen.
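  • The coherency-state handling described in the preceding paragraphs for an inbound write can be read as a small state machine: ownership may be pre-fetched while the write still waits in the ordering queue, held in 'exclusive' until the ordering rules are met, promoted to 'modified' once they are, or lost ("stolen") and re-requested. The sketch below is one interpretation of that text, not the actual hub interface logic.

```python
# Assumed state machine for an inbound write with ownership pre-fetching.
class InboundWrite:
    def __init__(self, line):
        self.line = line
        self.state = "pending"            # coherency state while in the ordering queue

    def on_ownership_completion(self):
        if self.state == "pending":
            self.state = "exclusive"      # pre-fetched ownership granted

    def on_ownership_stolen(self):
        if self.state == "exclusive":
            self.state = "pending"        # an independently ordered transaction took the line

    def on_ordering_satisfied(self):
        if self.state == "exclusive":
            self.state = "modified"       # write data becomes globally visible
            return "issue_write"
        return "reissue_ownership"        # ownership never granted, or stolen: request again
```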
  • the ordering rules governing transactions received via bridge 160 from agent 162 may be independent or substantially independent from ordering rules governing transactions received from agent 164 .
  • many embodiments allow a second transaction to steal or invalidate the ownership of the memory line by a first transaction to transmit to upbound path 144 when the ordering of the second transaction is independent or substantially independent from the ordering of the first transaction.
  • Ownership stealing may prevent backup, starvation, deadlock, or stalling of the second transaction or the leaf comprising the second transaction as a result of the first transaction.
  • ownership may be stolen when the first transaction may reside in a different leaf from the second transaction and/or in the same leaf.
  • read bypass queue 172 may provide a substantially independent path to the unordered interface 142 for read transactions that may be independent of ordering rules associated with transactions in ordering queue 171 .
  • Read bypass queue 172 may receive read transactions from the I/O interface 150 or may receive transactions from ordering queue 171 .
  • the embodiment may take advantage of the unrelated transaction ordering between the agent 162 and agent 164 or between read and write transactions from agent 162 and/or agent 164 .
  • agent 162 may request a read of memory line one of memory 115 A.
  • Address logic and queue 148 may determine that a transaction, such as a write transaction, associated with memory line one is in the ordering queue 171 .
  • Hub interface 147 may forward the read transaction to ordering queue 171 to maintain a transaction order according to an ordering rule associated with agent 162 . Afterwards, snoop filter may apply backpressure to read transactions from hub interface 147 until a pending read transaction in the snoop filter 146 may be transmitted across the unordered interface or until the ordering queue 171 may be flushed. The transactions of ordering queue 171 may be processed until the read transaction from agent 162 reaches the top of ordering queue 171 . While backpressure may be applied to the read transaction from snoop filter 146 , the read transaction may not be forwarded to snoop filter 146 . In response, hub interface 147 may forward the read transaction to the bottom of read bypass queue 172 .
  • the read transaction may be forwarded to read bypass queue 172 to allow subsequently received write transactions to continue to transmit to the unordered interface 142 .
  • the transaction order of the read transaction may have satisfied the ordering rules associated with agent 162 so the read transaction may proceed in a path independent from the ordering rules associated with ordering queue 171 .
  • I/O caching and adaptive pre-fetching of memory lines for the read cache, as may be implemented in I/O hub 120 A, may comprise an integrated caching and prefetch mechanism to provide high I/O throughput.
  • Pre-fetching cache lines may hide round trip memory read latency and may save a read request from traversing through, for example, the chipset to memory 115 A and back.
  • Adaptive pre-fetch and throttling may utilize an adaptive algorithm with two or more dynamic profiles (conservative and aggressive in one embodiment) to pre-fetch cache lines speculatively. Pre-fetching of cache lines may be initiated after the initial request for a given stream is serviced.
  • a stream may comprise a sequence of contiguous address requests to an I/O hub 120 A or 120 B. Subsequent read requests from the stream that may hit the read cache (possibly from the pre-fetched data) may be sent back or responded to without incurring upstream latency.
  • a pre-fetch engine of logic circuitry in read cache and logic 173 may have the ability to sense traffic, such as real-time traffic, and modify its pre-fetch cache request generation rate for different I/O modes, and may switch from one profile to another based on the prevailing conditions.
  • the degree of pre-fetching of cache lines may vary with the number of available streams for a given prefetch profile. For instance, if only one stream exists and the prefetch profile is set to "aggressive," then up to eight cache lines may be pre-fetched. If the number of streams increases to two, then each stream may be limited to a maximum of four cache lines. Pre-fetching cache lines may continue as long as the stream may still be allocated and an upper throttle limit may not have been reached. In several embodiments, this adaptive self-regulation may comprise a trade-off between pre-fetching enough data for cache to stream and not wasting the memory bandwidth excessively.
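  • The throttling rule above amounts to a small lookup table keyed by prefetch profile and number of active streams. The aggressive one-stream (eight lines) and two-stream (four lines per stream) points come from the text; the conservative column and the three- and four-stream caps below are assumptions added to make the sketch complete.

```python
# Hypothetical per-stream pre-fetch ceiling, indexed by profile and stream count.
PREFETCH_LIMIT = {
    #               1 stream  2 streams  3 streams  4 streams
    "aggressive":   {1: 8,    2: 4,      3: 2,      4: 2},   # 1- and 2-stream values from the text
    "conservative": {1: 4,    2: 2,      3: 1,      4: 1},   # assumed values
}

def max_prefetch_lines(profile, active_streams):
    capped = min(max(active_streams, 1), 4)      # clamp to the table's range
    return PREFETCH_LIMIT[profile][capped]

# max_prefetch_lines("aggressive", 1) -> 8
# max_prefetch_lines("aggressive", 2) -> 4
```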
  • cache such as the read cache of read cache and logic 173 may comprise a unified, logically and/or physically, cache to enhance performance across one or more streams with an amount of cache space, by dynamically allocating cache space to streams of reads or writes.
  • high streaming bandwidth performance may be accomplished with a smaller cache size than conventional cache.
  • a bus such as a PCI bus may facilitate two kilobytes (KB) of cache for streams via bridge 160 to I/O hub 120 A, with up to four streams
  • hub interface 147 may comprise a unified cache with one or more cache or buffers comprising a total of two KB.
  • read cache and logic 173 may allocate one KB of the unified cache to stream one for real and/or speculative read requests, wherein a real read request may result from an actual request received from bridge 160 and a speculative read request may be initiated by the pre-fetch cache engine of read cache and logic 173 .
  • read cache and logic 173 may allocate 0.75 KB to cache for stream two. Substantially simultaneously, read cache and logic 173 may reduce the cache space available to stream one from one KB to 0.75 KB, leaving 0.5 KB of space for a subsequently active stream.
  • read cache and logic 173 may allocate 0.5 KB of cache to stream three and de-allocate 0.25 KB of cache from stream one and from stream two. Upon de-allocation of the 0.25 KB from both stream one and stream two, 0.5 KB of cache may remain available for a stream four.
  • the cache sizes and stream allocations may be adjusted. For instance, some embodiments may comprise more than one leaf in I/O hub 120 A and streams may be initiated on either leaf. In some of these embodiments, a unified cache may be allocated for each leaf. In other embodiments, a unified cache may be sized for caching of streams from both leaves. In still further embodiments, the unified cache may be partitioned dynamically between leaves or combined into a unified cache dynamically for two or more leaves.
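  • The allocation example above (a 2 KB unified cache shared by up to four streams, with per-stream space of 1 KB, 0.75 KB, 0.5 KB, and 0.5 KB as streams become active) can be captured in a small table-driven sketch. The bookkeeping below is illustrative only; the quotas follow the figures given in the text.

```python
# Dynamic allocation of a 2 KB unified read cache among active streams.
TOTAL_KB = 2.0
QUOTA_KB = {1: 1.0, 2: 0.75, 3: 0.5, 4: 0.5}     # per-stream quota vs. number of active streams

class UnifiedReadCache:
    def __init__(self):
        self.streams = {}                          # stream id -> allocated KB

    def allocate(self, stream_id):
        self.streams[stream_id] = 0.0
        self._rebalance()

    def deallocate(self, stream_id):
        self.streams.pop(stream_id, None)          # e.g. after an inactivity timeout
        self._rebalance()

    def _rebalance(self):
        n = len(self.streams)
        if n == 0:
            return
        quota = QUOTA_KB[min(n, 4)]
        for sid in self.streams:
            self.streams[sid] = quota              # every active stream shrinks or grows together
        assert quota * n <= TOTAL_KB               # never over-commit the unified cache
```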
  • a timing mechanism such as a programmable timer may enhance the operation of pre-fetching cache by determining the number of active streams. For example, adaptive pre-fetch and throttling may allocate cache space, such as cache space of a unified read cache, based upon a number of streams concurrently or substantially concurrently requesting data, so the efficiency of the cache allocation may be based upon the accuracy of the count of streams.
  • a stream may end without an indication to that effect being transmitted to the I/O hub.
  • Read cache and logic 173 may continue to allocate cache to the stream and may also continue to pre-fetch cache lines for the stream. After the stream may terminate, the speculative cache line pre-fetch requests may unnecessarily use bandwidth upstream in addition to the memory.
  • many embodiments may comprise the timing mechanism to terminate a stream based upon inactivity.
  • a timing mechanism may measure the time between a first request and a second request of the stream, and after that time exceeds a selected threshold, the stream may be considered to have terminated.
  • the cost of maintaining allocation and consuming bandwidth may be balanced against the increased performance resulting from cache allocation and speculative pre-fetching when determining the time selected for such a timing mechanism.
  • the time selection may also be based upon the latency, average or nominal, for receiving a completion from an upbound read request.
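  • A minimal sketch of the inactivity timer follows: each request timestamps its stream, and a stream that stays quiet longer than a programmable timeout is treated as terminated and its cache space released. The timeout value and the use of a software clock are assumptions; the text only says the value should weigh re-allocation cost and upstream completion latency against wasted pre-fetch bandwidth.

```python
# Assumed inactivity-timer sketch; `cache` is any object with allocate()/deallocate(),
# for example the unified-cache sketch above.
import time

class StreamInactivityTimer:
    def __init__(self, cache, timeout_s=1e-5):
        self.cache = cache
        self.timeout = timeout_s                   # programmable inactivity threshold
        self.last_seen = {}                        # stream id -> time of last request

    def on_request(self, stream_id):
        if stream_id not in self.last_seen:
            self.cache.allocate(stream_id)         # first request: stream becomes active
        self.last_seen[stream_id] = time.monotonic()

    def poll(self):
        now = time.monotonic()
        for sid, t in list(self.last_seen.items()):
            if now - t > self.timeout:             # stream went quiet: deem it terminated
                del self.last_seen[sid]
                self.cache.deallocate(sid)         # free its share of the cache
```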
  • ordering queue 171 and read bypass queue 172 may comprise memory interleaving and reordering to increase memory throughput.
  • Coherent I/O caches and pre-fetching may hide I/O read latency, even in the large multi-node configurations and the snoop filter may reduce overall latency and may eliminate unnecessary snoop traffic.
  • Memory request re-ordering may attenuate a number of dead cycles on the memory data bus induced by a DDR/DRAM protocol.
  • One of the largest dead cycle penalties may be caused by a page replace, also called a page miss.
  • a page replace may happen after two consecutive requests go to different pages on the same DIMM.
  • the second request may be delayed for the duration to close the previously activated page before activating the page for the next request. With some DIMMs, this duration may be 70 ns.
  • there may be turnaround penalties of one cycle (e.g., 10 ns) on switching from read to write or vice-versa or when read data comes from different DIMMs on the same DDR channel.
  • when memory requests are placed in ordering queue 171 , like a FIFO queue, and processed in order, the protocol-induced inefficiencies may reduce sustained bandwidth significantly for a random stream of requests typical of server workloads. However, when requests are re-ordered to avoid conflicts, the sustained bandwidth and the average read latency may be improved.
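  • The dead-cycle penalties quoted above (roughly 70 ns for a page replace, about 10 ns for a read/write or DIMM turnaround) allow a toy accounting of why re-ordering helps; the request model and the numbers plugged in below are simplifications of the text, not a DDR timing model.

```python
# Toy dead-time accounting for an in-order versus re-ordered request sequence.
PAGE_REPLACE_NS = 70
TURNAROUND_NS = 10

def dead_time_ns(requests):
    # requests: list of dicts with "dimm", "page", "is_read"
    total, prev = 0, None
    for r in requests:
        if prev is not None:
            if r["dimm"] == prev["dimm"] and r["page"] != prev["page"]:
                total += PAGE_REPLACE_NS         # page replace on the same DIMM
            if r["is_read"] != prev["is_read"]:
                total += TURNAROUND_NS           # read/write bus turnaround
        prev = r
    return total

reqs = [
    {"dimm": 0, "page": 1, "is_read": True},
    {"dimm": 0, "page": 2, "is_read": True},     # forces a page replace if run in order
    {"dimm": 0, "page": 1, "is_read": True},     # and another page replace after it
]
print(dead_time_ns(reqs))                        # 140 ns in arrival order
print(dead_time_ns([reqs[0], reqs[2], reqs[1]])) # 70 ns after re-ordering
```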
  • Ownership pre-fetch circuitry 174 may pre-fetch ownership of memory contents associated with a memory line after a transaction is received by I/O interface 150 and may prevent ownership from being pre-fetched in response to a signal from or not receiving a signal from address logic and queue 148 .
  • hub interface 147 may receive two write transactions from agent 162 to write data to the same memory line(s).
  • ownership pre-fetch circuitry 174 may initiate a request for ownership of the memory line(s) associated with the first write transaction. Subsequently, I/O interface 150 may receive the second write transaction.
  • Ownership pre-fetch circuitry 174 may receive a signal, or may not receive a signal in some embodiments, to indicate that ownership of the memory line(s) associated with the second write transaction may not be pre-fetched for the second transaction.
  • Address logic and queue 148 may maintain a list of pending transactions in hub interface 147 and/or I/O hub 120 A, depending upon the embodiment, and may compare an address of an upbound transaction to the list to determine whether ownership may be pre-fetched for the transaction and/or the upbound transaction may be subject to independent ordering rules from a transaction in the ordering queue 171 .
  • read transactions may comprise more than one address that may subject the read transaction to more than one ordering rule or set of ordering rules.
  • agent 162 may initiate a first write transaction to write to memory line one, a second write transaction to write to memory line one, and a first read transaction to read from memory line one. Then, agent 164 may initiate a second read transaction to read from memory line one.
  • the I/O interface 150 may receive the first write transaction and address logic and queue 148 may determine that no address in an address queue of the address logic and queue 148 may match memory line one and may transmit a signal to ownership pre-fetch circuitry 174 to pre-fetch ownership of memory line one for the first write transaction.
  • address logic and queue 148 may determine that the address is owned by the first write transaction, which is ahead of the second write transaction with regards to transaction order, and may transmit a signal to ownership pre-fetch circuitry 174 to indicate that ownership may not or should not be pre-fetched for the second write transaction.
  • the I/O interface 150 may receive the first read transaction and address logic and queue 148 may determine that the first read may follow the first and second write transactions in a transaction order since agent 162 also initiated the first read transaction. Hub interface 147 may forward the first read transaction to the bottom of ordering queue 171 . Then, I/O interface 150 may receive the second read transaction.
  • the second read transaction also performs an action on memory line one, but, in the present embodiment, address and queue logic 148 maintains an address associated with pending transactions that comprises the address of the source agent or a hub ID representing one or more source agents, such as agent 162 for the first and second write transactions and the first read transaction. Since the hub ID of the second read transaction may be different from the hub ID's associated with the first and second write transactions and the first read transaction, the second read transaction may advance toward the unordered interface 142 along an independent path, e.g. the read bypass queue 172 , bypassing the first and second write transactions and the first read transaction.
  • read cache and logic 173 may attach cache line invalidation data to the second read transaction and in response to a match between the address associated with the cache line invalidation data and an address of a pending transaction, such as memory line one, the second read transaction may be forwarded to the bottom of the ordering queue 171 rather than the bottom of the read bypass queue 172 .
  • address logic and queue 148 may not maintain an address or ID associated with the source agent so determinations for ownership pre-fetching and/or bypassing may be made based upon the memory line(s) associated with a transaction.
  • Read cache and logic 173 may review a transaction after the transaction is received by I/O interface 150 .
  • read cache and logic 173 may recognize a read transaction for a memory line, may determine whether the read cache and logic 173 stores a valid cache line comprising a copy of the memory line, and may respond to the read transaction after determining that read cache and logic 173 stores a valid cache line comprising a copy of the memory line.
  • read cache and logic 173 may not comprise the valid cache line and, in many embodiments, read cache and logic 173 may then attach cache line invalidation data to the read transaction to clear space to store the data received in response to the read transaction.
  • the cache line invalidation data may be forwarded to the snoop filter 146 to maintain synchronization between the read cache coherency states and the coherency states stored in the snoop filter 146 .
  • the cache line invalidation data may comprise or be associated with an entry in the cache of read cache and logic 173 and the address of the memory line associated with the entry.
  • the cache line invalidation data may be designed to instruct the snoop filter to invalidate an association between an address in the snoop filter 146 and an entry in the read cache.
  • read cache and logic 173 may store a cache version of memory line one and I/O interface 150 may receive a read transaction for memory line two.
  • read cache and logic 173 may clear room in cache for memory line two.
  • read cache and logic 173 may invalidate the oldest and/or least used data in cache, such as memory line one to make room for a copy of memory line two.
  • read cache and logic 173 may insert an invalidation request for the copy of memory line one into the header for the read transaction of memory line two.
  • Snoop filter 146 may receive the invalidation request after the read transaction may reach the snoop filter 146 and may return a copy of the data from the read completion to read cache and logic 173 .
  • read cache and logic 173 may further store data of a write transaction, or other memory lines near the memory line subject to the read transaction, into cache in anticipation of a read transaction for the same memory line(s).
  • bridges 160 and 190 couple one or more agents 162 , 164 , 192 , and 194 to the I/O hubs 120 A and 120 B from an ordered domain such as a peripheral component interconnect (PCI) bus, a universal serial bus (USB), or an infiniband channel.
  • the agents 162 , 164 , 192 , and 194 may transact upbound or peer-to-peer via I/O hubs 120 A and 120 B.
  • agents 162 , 164 , 192 , and 194 may transact with any processor and any of processors 100 A-D may transact with any agent.
  • Redundancy may be provided in the architecture, enabling fast reset and reboot in a degraded mode in the event of a component or interconnect failure. For example, if an SP interface fails, the system is reset and reconfigured to use only one SPS switch. In a degraded mode, system performance may be impacted.
  • the SPS may be designed to support partitioning of the system into, for example, two domains.
  • a domain may be a “system within a system”, that is, a domain may have its own instance of the operating system.
  • a domain may support independent reset, independent error status and signaling, etc. Any two or more ports may be allocated to a domain (both an SNC and I/O hub may be present in a domain).
  • partitioning may be accomplished by configuring the SPS (via firmware setup, using a remote management console, or the like) during system initialization. Once the system is partitioned, processor/memory nodes or I/O nodes may be moved from one partition to the other using the node hot plug capabilities.
  • RAS is an acronym for Reliability/Availability/Serviceability.
  • In FIG. 9 there is shown an embodiment of an apparatus of an I/O hub to maintain ordering for transactions between an ordered domain, I/O interface 290 , and an unordered domain, unordered interface 207 .
  • the embodiment may comprise unordered interface 207 , downbound snoop path 200 , upbound snoop path 205 , snoop filter 210 , coherency interface 230 , hub interface 280 , and upbound path 220 .
  • the downbound snoop path 200 may comprise circuitry to transmit a snoop request from the unordered interface 207 down to snoop filter 210 .
  • the upbound snoop path 205 may provide a path between snoop filter 210 and a controller on the other side of the unordered interface 207 to facilitate snoop requests by snoop filter 210 and/or I/O devices coupled with I/O interface 290 .
  • upbound snoop path 205 may facilitate cache coherency requests.
  • a processor in the unordered domain may comprise cache, and snoop filter 210 may request invalidation of a cache line after hub interface 280 receives a write transaction for memory associated with that cache line.
  • Snoop filter 210 may comprise conflict circuitry and a buffer.
  • Conflict circuitry may determine conflicts between downbound snoop requests, inbound read transactions, inbound write transactions, and upbound transactions. Further, conflict circuitry may couple with the buffer to store the coherency states and associate the coherency states with entries in the upbound ordering first-in, first-out (FIFO) queue 240 .
  • Coherency interface 230 may relay internal coherency completion and invalidation requests from snoop filter 210 to hub interface 280 . These coherency requests may be generated by snoop filter 210 and may be the result of an ownership completion, a downbound snoop request, or an inbound coherent transaction. For example, after snoop filter 210 receives an ownership completion across unordered interface 207 , snoop filter 210 may forward the completion across coherency interface 230 to the hub interface 280 . The ownership completion may be addressed to the entry in the upbound ordering FIFO queue 240 that has a write transaction header associated with the corresponding ownership request.
  • Hub interface 280 may receive inbound transactions, such as upbound write and read transactions, and maintain ordering of the upbound transactions in accordance with ordering rules, such as PCI ordering rules and rules associated with coherency and the PCI producer consumer model.
  • Hub interface 280 may comprise arbitration circuitry 222 , transaction queues such as upbound ordering FIFO queue 240 and read bypass FIFO queue 250 , ownership pre-fetch circuitry 260 , address logic 270 , address queue 275 , read cache and logic 285 , and I/O interface 290 .
  • Arbitration circuitry 222 may arbitrate access to the upbound path 220 between transaction queues, upbound ordering FIFO queue 240 and read bypass FIFO queue 250 .
  • arbitration circuitry 222 may also arbitrate access between the transaction queues and ownership pre-fetch circuitry 260 to facilitate routing of coherency requests and responses from ownership pre-fetch circuitry 260 to snoop filter 210 .
  • arbitration circuitry 222 may arbitrate substantially equivalent access between upbound ordering FIFO queue 240 and read bypass FIFO queue 250 for transmission of transactions from a transaction queue upbound through the upbound path 220 to unordered interface 207 .
  • Hub interface 280 may comprise one or more transaction queues such as upbound ordering FIFO queue 240 to maintain a transaction order for upbound transactions according to the ordering rules and to store the coherency state and source ID for each upbound transaction.
  • the source ID may associate an agent, or I/O device, with a transaction.
  • upbound ordering FIFO queue 240 may maintain an ordering for transactions received from the same source agent, or same source ID and/or hub ID.
  • upbound ordering FIFO queue 240 may receive transactions from agent number one and transactions from agent number two.
  • the transaction order(s) maintained for agent number one and agent number two may be independent unless the transactions are associated with the same memory line.
  • an upbound ordering FIFO queue such as upbound ordering FIFO queue 240 , may be dedicated for a particular hub ID or source ID.
  • Read bypass FIFO queue 250 may facilitate progress of read transactions, wherein a read transaction may be subject to ordering rules independent of or substantially independent of ordering rules associated with transactions in upbound ordering FIFO queue 240 .
  • Read bypass FIFO queue 250 may receive read transactions from both the I/O interface 290 and the upbound ordering FIFO queue 240 .
  • I/O interface 290 may receive a first read transaction that may be associated with an address that may not have a matching entry in address queue 275 .
  • the read transaction may be forwarded to the bottom of the read bypass FIFO queue 250 .
  • hub interface 280 may comprise more than one read bypass FIFO queue to adjust access to upbound path 220 between targets of transactions or transactions from different sources.
  • An advantage of embodiments that may comprise transaction bypass circuitry, such as circuitry comprising read bypass FIFO queue 250, may be that transactions may be processed in less time than the nominal snoop latency of the system. For example, when a read transaction may bypass a write transaction for the same memory line(s), the latency of the read transaction may not be penalized by the latency of the write transaction. Further, in embodiments that comprise ownership pre-fetch circuitry, such as ownership pre-fetch circuitry 260, the latency of a write transaction may not be limited to the nominal snoop latency of the system, so the latency of the write transaction may decrease to the latency for the embodiment to process the write transaction.
  • Ownership pre-fetch circuitry 260 may pre-fetch ownership of a memory line for a transaction received by I/O interface 290 to avoid some latency involved with requesting ownership of the memory after the transaction may satisfy its corresponding ordering rules.
  • a determination of pre-fetch ownership may be based upon whether an ownership of the memory line may reside with a pending transaction in upbound ordering FIFO queue 240 .
  • I/O interface 290 may receive a write transaction to write data to memory line one.
  • Address logic 270 may verify that no entry in the upbound ordering FIFO queue 240 may be associated with memory line one.
  • ownership pre-fetch circuitry 260 may request ownership of memory line one via snoop filter 210 .
  • the write transaction may be placed into the bottom of upbound ordering FIFO queue 240 and ownership for memory line one by the write transaction may not be requested again until the write transaction satisfies associated ordering rules or, in some embodiments, after the write transaction reaches or nears the top of upbound ordering FIFO queue 240 .
  • Address logic 270 may maintain address queue 275 comprising addresses associated with pending transactions in hub interface 280 and may compare an address of an upbound transaction against addresses in the queue to determine whether ownership may be pre-fetched for the upbound transaction.
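  • A minimal sketch of the ownership pre-fetch decision described above: the memory line of a newly received write is compared against the addresses of pending transactions, and ownership is requested early only on a miss. The queue depth and function names are illustrative assumptions, not the hub's actual logic.

```c
#include <stdint.h>
#include <stdbool.h>

#define ADDR_QUEUE_DEPTH 16                /* depth is an assumption */

struct address_queue {
    uint64_t line_addr[ADDR_QUEUE_DEPTH];  /* memory lines of pending transactions */
    unsigned count;
};

static bool line_is_pending(const struct address_queue *q, uint64_t line)
{
    for (unsigned i = 0; i < q->count; i++)
        if (q->line_addr[i] == line)
            return true;
    return false;
}

/* Ownership may be pre-fetched only when no pending transaction in the
   upbound ordering FIFO is already associated with the same memory line. */
static bool may_prefetch_ownership(const struct address_queue *q, uint64_t write_line)
{
    return !line_is_pending(q, write_line);
}
```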
  • read cache and logic 285 piggy-backs or attaches cache line invalidation data to read transactions to make a cache line available for a new copy of a memory line
  • read transactions may comprise more than one address so address logic 270 may compare more than one address associated with a read transaction against addresses stored in a queue to determine whether the read transaction should be forwarded to the read bypass FIFO queue 250 or to the upbound ordering FIFO queue 240 .
  • address queue 275 may comprise memory, or a queue, to store an invalidation address of cache line invalidation data.
  • Address logic 270 and address queue 275 may maintain a list of one or more invalidation addresses to prevent a read transaction that reads a memory line(s) from bypassing a transaction with cache line invalidation data, wherein the cache line invalidation data is associated with the same memory line(s). Preventing a read transaction from bypassing the transaction with cache line invalidation data may enhance synchronization between snoop filter 210 and the cache of read cache and logic 285.
  • hub interface 280 compares a read transaction to the list of invalidation addresses in address queue 275 before forwarding the read transaction to the snoop filter 210 .
  • the read transaction may be held in a transaction queue, such as upbound ordering FIFO queue 240 or read bypass FIFO queue 250 , until the cache line invalidation data reaches snoop filter 210 .
  • the logic involved with checking invalidation addresses may be simplified by placing a read transaction with an address matching an address in a queue, such as a FIFO queue, of address queue 275 into the bottom of upbound ordering FIFO queue 240 .
  • the read transaction may be placed into the bottom of read bypass FIFO queue 250 when the address does not match an invalidation address in address queue 275 and/or be allowed to bypass upbound ordering FIFO queue 240 when read bypass FIFO queue 250 is empty and the address does not match an invalidation address.
  • the snoop filter 210 may compare the read transaction against the list of addresses associated with cache line invalidation data pending in the transaction queue(s) and prevent the read transaction from being completed until the corresponding cache line invalidation data reaches snoop filter 210 .
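  • The routing decision described above might be modeled as follows; the list size and names are assumptions, and the sketch only captures the rule that a read may not pass a pending cache line invalidation for the same memory line.

```c
#include <stdint.h>
#include <stdbool.h>

enum route { ROUTE_UPBOUND_ORDERING_FIFO, ROUTE_READ_BYPASS_FIFO };

struct inval_addr_list {
    uint64_t line[16];   /* invalidation addresses pending in the transaction queues */
    unsigned count;
};

/* A read that matches a pending invalidation address goes to the ordering
   FIFO so it cannot pass the invalidation; otherwise it may use the bypass FIFO. */
static enum route route_read_transaction(const struct inval_addr_list *l,
                                         uint64_t read_line)
{
    for (unsigned i = 0; i < l->count; i++)
        if (l->line[i] == read_line)
            return ROUTE_UPBOUND_ORDERING_FIFO;
    return ROUTE_READ_BYPASS_FIFO;
}
```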
  • Read cache and logic 285 may snoop or monitor transactions as received by I/O interface 290 .
  • read cache and logic 285 may comprise a queue to retrieve read transactions.
  • Read cache and logic 285 may recognize a read transaction for a memory line and may determine when a copy of the memory line associated with the read transaction is stored in read cache and logic 285 .
  • read cache and logic 285 may attach a cache line invalidation, or cache line invalidation data, to the read transaction.
  • the cache line invalidation may inform the snoop filter 210 of the invalidated cache line after the header for the read transaction is received by snoop filter 210 and snoop filter 210 may modify a corresponding entry in a buffer of the snoop filter 210 to maintain cache coherency.
  • Read cache and logic 285 may attach additional cache line invalidations to further read transactions to make room for copies of memory lines near the memory line associated with the read transaction.
  • read cache and logic 285 comprises a stream monitor 289 to determine stream activity; cache logic circuitry coupled with said stream monitor to determine a real and speculative pre-fetch cache line schedule based upon the stream activity and to generate pre-fetch requests; and cache coupled with said cache logic circuitry to store pre-fetched cache lines in response to the pre-fetch requests.
  • read cache and logic 285 comprises a stream monitor 289 to determine stream activity; a scheduler 287 coupled with the stream monitor 289 to determine a real and speculative pre-fetch cache line schedule based upon the stream activity; a pre-fetch engine 286 coupled with said scheduler 287 to generate pre-fetch requests; and cache coupled with said scheduler 287 to allocate cache to store pre-fetched cache lines in response to the pre-fetch requests.
  • the prefetch engine 286 may be responsible for handling read requests and sending data to a peripheral I/O device such as a NIC card, storage controller or a PCI-PCI bridge via I/O interface 290 .
  • the goal of the prefetch engine 286 may be to enhance or optimize high streaming data transfer, as well as to handle simultaneous concurrent I/O streams of varying demanded bandwidth in some embodiments.
  • DMA protocols, for example, may comprise an application wherein large chunks of memory may be accessed by the I/O device to complete an operation (e.g., a SCSI RAID controller initiates a disk write which translates into PCI reads).
  • Scheduler 287 may generate prefetch requests through a dynamic lookup table (LUT) 288 based on the number of available, active, or perceived streams.
  • scheduler 287 may have the ability to sense real time traffic with a real time traffic mechanism and may modify the pre-fetch request generation rate on a cache line granularity for different I/O modes (e.g. PCI/PCI-X).
  • scheduler 287 may comprise an inbuilt adaptive throttling mechanism to prevent or substantially prevent memory subsystem overload and yet may also provide a requested I/O bandwidth.
  • Hub interface 280 may implement an integrated caching and prefetch mechanism to provide streaming data to high performance applications which occur, for example, in web-servers, database processing, data mining & retrieval, network and file servers.
  • the read cache of read cache and logic 285 may maintain coherency with the rest of the system and may eliminate the overheads to implement invalidation schemes which are less conducive to I/O streaming.
  • Pre-fetching cache lines may hide round trip read latency and may save every read request from traversing through the entire chipset to memory and back.
  • the spatial locality of read requests and contiguous address space (such as Memory Read Multiple in the PCI Bus protocol) lend themselves very well to pre-fetching cache lines. This may be important to server applications where a large amount of data may transfer at high bandwidth. For example, a SCSI RAID controller may initiate a 4 KB DMA transfer to perform a disk write operation, which translates into inbound reads.
  • FIG. 10 illustrates the basic relationships of the prefetch engine 286 , read cache 285 A, inbound queue 284 A and related components in one of the Hub interface clusters.
  • the I/O Hub may implement distributed read caches, one per Hub interface, such as hub interface 280 although some embodiments may comprise a unified cache.
  • a stream may comprise a sequence of requests, such as read or write requests, from an I/O bridge starting with an initial address and request length and further continued by requests with contiguous addresses in logical order.
  • requests from multiple streams may arrive at the I/O Hub or hub interface 280 in an interleaved fashion.
  • Read requests issued through the I/O Interface 290 by an external I/O bridge may be serviced by the prefetch engine 286 via I/O interface 290 .
  • the inbound transaction queue (ITQ) 284 A may accept transactions targeted for main memory and peer I/O bridges.
  • the ITQ 284 A may accept transactions originating from the I/O Interface 290 and may forward the transaction to the internal interconnect.
  • the Hub Interface cluster may break up coherent read requests into multiple cache line requests and may send them through the Inbound request buffer (IRB) queues 201 A to the internal interconnect and through the Scalability Port to the memory subsystem.
  • the Hub Id (encoded in the I/O Interface 290 request packet) may indicate which of the two IRBs to send the request to, using the LSB (least significant bit, e.g. “0” or “1”) to specify IRB 0 or IRB 1. Transactions may progress through the ITQ in FIFO order unless the ordering rules prevent their issuance.
  • the Hub Interface cluster may issue two 128-byte requests (e.g., four 64-byte reads) to the internal interconnect. For an unaligned request, the Hub Interface cluster may issue three 128-byte requests (e.g., six 64-byte reads).
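  • A small sketch of the request-splitting arithmetic: counting how many 128-byte internal requests an inbound read spans reproduces the aligned (two requests) and unaligned (three requests) cases mentioned above. The helper name and sample addresses are illustrative.

```c
#include <stdint.h>

/* Number of 128-byte internal requests spanned by an inbound read of
   'len' bytes starting at 'addr'. */
static unsigned num_128b_requests(uint64_t addr, uint32_t len)
{
    uint64_t first = addr & ~127ULL;             /* first 128-byte chunk touched */
    uint64_t last  = (addr + len - 1) & ~127ULL; /* last 128-byte chunk touched  */
    return (unsigned)((last - first) / 128) + 1;
}

/* Aligned 256-byte read:   num_128b_requests(0x1000, 256) == 2 (four 64-byte reads).
   Unaligned 256-byte read: num_128b_requests(0x1040, 256) == 3 (six 64-byte reads). */
```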
  • a read completion structure may be assigned for each cache line request that was requested from the I/O interface 290 .
  • the read cache module for each I/O Interface 290 may comprise a fully associative memory of 4 KB that may store addresses and coherent information.
  • This central read cache may be referenced by a number of read streams issued by I/O bridges for that I/O interface 290 .
  • a stream may initially be assigned based on the incoming request.
  • the Hub Interface cluster may send cache line read requests to the memory controller. After a read completion is returned on the I/O Interface 290 that completion structure may be available for further read requests.
  • the Hub Interface cluster may not wait for enough completion structures for the entire subsequent request before issuing the next cache line read request.
  • the corresponding hub interface 280 may issue the request for the first 128 bytes of the 256 byte request.
  • the second 128 byte request may wait until another completion structure is available.
  • the ITQ 284 A may buffer subsequent inbound read requests (writes may proceed independent of the completion structures' status).
  • the Hub Interface cluster may exert backward pressure and may issue retries to future inbound I/O Interface 290 requests until at least one slot is available in the ITQ 284 A.
  • After the read data has returned (perhaps multiple lines) from memory, it may be installed in the read cache 285 A with coherence information, and the lines may be sequenced in the read completion unit 283 A and may be sent to the I/O Interface 290.
  • Status and book-keeping information for a stream may be stored in a “read_cache_stream” structure, which may comprise a record of the current requested address, length, time last accessed, etc.
  • a timer 280 A may be associated with each stream to indicate when the stream becomes active, inactive, and/or may be perceived as active or inactive. If no subsequent requests are received for that stream before the timer expires, then the stream may be inactive or perceived as inactive.
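  • The "read_cache_stream" book-keeping record might look roughly like the following; only the fields named above (current address, length, time last accessed, timer) are taken from the description, and the remaining details are assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative per-stream status and book-keeping record. */
struct read_cache_stream {
    uint64_t current_addr;     /* current requested address                     */
    uint32_t request_len;      /* length of the most recent request             */
    uint64_t last_access_time; /* time the stream was last accessed             */
    uint16_t inactivity_timer; /* expires when no request hits the stream       */
    bool     active;           /* perceived active/inactive state of the stream */
};
```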
  • pre-fetching cache may be initiated after the initial real request is sent inbound. (e.g. I/O bridge may send a read request for 256 B starting at Address x). Subsequent read requests from the I/O Interface 290 that hit the cache for the given stream may be sent to the I/O Interface 290 directly from the cache without incurring upstream latency.
  • the read cache prefetch engine 286 may dynamically allocate buffer space in the read cache 285 A based on incoming streams and may provide a seamless cache line replacement method for continuous streaming and buffer re-use; generate prefetch requests on a cache line granularity through a dynamic lookup table (LUT) 288 based on the number of available concurrent I/O streams; sense real time traffic and modify a prefetch cache request generation rate for different I/O modes (e.g. PCI/PCI-X); and throttle upstream requests to prevent memory subsystem overload.
  • Prefetch modes such as the modes shown in FIG. 11A, may be based upon the type of bus or agent that has an active stream.
  • An incoming read stream from the I/O Interface 290 may be considered as having two phases: Real Request Phase and Speculative Request Phase.
  • a read request of fixed length may be made by the I/O interface and the I/O Hub may attempt to deliver the requested data as quickly as possible.
  • the data may hit the read cache or it might miss the read cache, resulting in a fetch from main memory.
  • after a stream enters the Real Request Phase, it may be considered a higher priority than streams in the Speculative Request Phase.
  • the stream may enter the Speculative Request Phase, after all requested data has been fetched by the I/O Hub.
  • the stream may follow an adaptive prefetch mechanism.
  • the assumption may be that if the master requested data at address X, then the master may subsequently request data at address X+1.
  • Pre-fetching may continue as long as the stream is still allocated, e.g. active or perceived active, and, in some embodiments, the throttle limit has not been reached.
  • the Hub Interface cluster may attempt to prefetch n number of lines ahead of the Real request. Pre-fetching may be disabled when excessive read streams are generated at I/O Interface 290 . Speculative requests may be issued, for example, after the real request is greater than 128 bytes.
  • the adaptive prefetch mechanism may use a dynamic LUT to prefetch cache lines in the speculative phase.
  • Two prefetch profiles (conservative and aggressive) may be used to index the appropriate look-up table values as shown in FIG. 11B.
  • Profile selection may be a function of the number of PCI/PCI-X buses attached to the I/O bridge and the nature of the devices (e.g. PCI vs. PCI-X).
  • the prefetch engine might be utilizing the conservative profile. As soon as any of the “aggressive” conditions are detected the Hub Interface 280 may change the pre-fetching to adapt to the change in bandwidth requirements. Likewise, after an aggressive condition no longer exists, the pre-fetch engine may switch back to the “conservative” pre-fetch profile.
  • the number of active streams may determine the appropriate LUT entry and control the number of lines to prefetch ahead of the real request for that stream as shown in FIG. 11B. For instance, if only 1 stream exists and the prefetch profile is set to “aggressive”, then up to 8 cache lines may be prefetched. If the number of streams increases to 2, then each stream may be limited to a maximum of 4 cache lines. Thus the degree of pre-fetching may vary with the number of available streams. By having the ability to detect when a stream becomes active or inactive through the timer mechanism, such as timer 280 A in FIG. 10, the number of streams may be automatically computed in real time and pre-fetching may be dynamically controlled.
  • This adaptive self-regulation may comprise a trade-off between pre-fetching enough data for the Hub Interface master and not overshooting the memory, thereby impacting the rest of the system.
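  • A sketch of the LUT-driven prefetch-depth selection follows; the aggressive column reflects the 8-lines/4-lines example above, while the conservative column and the deeper entries are placeholders rather than the values of FIG. 11B.

```c
enum profile { CONSERVATIVE = 0, AGGRESSIVE = 1 };

/* rows: prefetch profile, columns: number of active streams (index 0 unused) */
static const unsigned prefetch_lut[2][5] = {
    [CONSERVATIVE] = { 0, 2, 1, 1, 1 },   /* placeholder depths                  */
    [AGGRESSIVE]   = { 0, 8, 4, 2, 2 },   /* 1 stream -> 8 lines, 2 streams -> 4 */
};

/* Lines to prefetch ahead of the real request for one stream. */
static unsigned prefetch_depth(enum profile p, unsigned active_streams)
{
    if (active_streams == 0)
        return 0;
    if (active_streams > 4)
        active_streams = 4;               /* clamp to the table size */
    return prefetch_lut[p][active_streams];
}
```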
  • the I/O Hub may maintain an upper limit of eight cache lines that may be pending delivery to a particular Hub Interface 280 and may minimize the memory overshoot.
  • the number of cache lines “in flight” or “pending” may be calculated on a 128-byte quantity.
  • the I/O Hub may issue a pair of 64-byte requests for a real request. This pair may be considered as one line “in flight” for purposes of the prefetch algorithm.
  • the number of real requests “in flight” may be compared against 8.
  • the term “in flight” may refer to reads that have been issued to the Scalability Port but have not yet returned to the Hub Interface cluster (read cache 285 A). For instance, if there are already eight lines in flight, then the read request may not issue until at least one line returns to the Hub Interface cluster. In many of these embodiments, a new request may not be issued until another completion structure is available.
  • the number of lines “in flight” is less than the maximum allowable, then the real request may be issued.
  • the mechanism for issuing speculative requests may determine the Prefetch profile using the table in FIG. 11B, or the like. Before any speculative requests may issue to the Scalability Port (after a read cache miss), the sum of real requests “in flight” and speculative requests “in flight” may be compared against the maximum cache lines in flight (8).
  • the term “in flight” may refer to reads that have been issued to the Scalability Port but have not yet returned to the Hub Interface cluster (read cache 285 A). For example, if there are already eight lines in flight, then the speculative read request may not issue until at least one line returns to the Hub Interface cluster.
  • prefetch parameters may be determined from the table in FIG. 11B.
  • the number of active streams or streams perceived as active and the prefetch profile may determine the upper limit of speculative requests for a particular stream. For example, if only one stream is active and the profile is Aggressive, then the Hub Interface cluster may check if the total number of “pending” cache lines is less than 8.
  • the term “pending” may refer to lines that have been issued for a read stream (real or speculative) but have not yet been delivered to the Hub Interface agent. If so, the Hub Interface cluster may issue up to 8 speculative read requests (the Aggressive profile may not allow more than 8 total requests in flight). If not, a speculative read is not issued until the pending lines drops below the value noted in the table (8 for this example).
  • the Hub Interface cluster may enforce speculative pre-fetching for streams which may have a zero Prefetch Horizon field in the initial real request and, in many embodiments, wherein the initial request may be greater than, e.g. 128 bytes (regardless of cache line size).
  • the number of cache lines pending may be incremented after a line read is issued by the I/O Interface 290 and may be decremented after they return to the I/O Interface. In other embodiments, the number may be incremented after a pair of lines is issued and may be decremented after one or both lines return. Read cache hits may not affect the number of cache lines pending.
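  • The in-flight/pending throttle might be modeled as below, assuming simple counters; the 8-line limit is taken from the description, and the structure and function names are illustrative.

```c
#include <stdbool.h>

#define MAX_LINES_IN_FLIGHT 8   /* upper limit noted above */

struct inflight_state {
    unsigned real_lines;        /* real-request lines issued but not yet returned */
    unsigned speculative_lines; /* speculative lines issued but not yet returned  */
};

/* A real cache line read may issue while fewer than 8 real lines are in flight. */
static bool may_issue_real(const struct inflight_state *s)
{
    return s->real_lines < MAX_LINES_IN_FLIGHT;
}

/* A speculative read compares the sum of real and speculative lines against 8. */
static bool may_issue_speculative(const struct inflight_state *s)
{
    return (s->real_lines + s->speculative_lines) < MAX_LINES_IN_FLIGHT;
}

/* Counters are incremented when a line read issues and decremented when the
   line returns; read cache hits do not affect them. */
```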
  • the IOH may prioritize real requests and may maintain pre-fetching up to a high or maximum limit.
  • hub interface 280 may comprise a unified cache, the read cache of read cache and logic 285 , for streams via I/O interface 290 .
  • the unified cache may comprise a stream monitor to determine stream activity; cache logic circuitry coupled with said stream monitor to determine a cache structure to allocate to streams based upon the stream activity; and cache coupled with said cache logic circuitry to store pre-fetched cache lines in the cache structure for the streams.
  • the unified cache may comprise a stream monitor to determine a change in a number of active streams; a scheduler coupled with said stream monitor to determine pre-fetch schedule based upon the number of active streams; and cache coupled with said scheduler to allocate cache to active streams based upon the pre-fetch schedule.
  • the unified cache may comprise a unified cache for more than one hub interface like hub interface 280 .
  • the unified cache may also be implemented in other cache applications, such as other applications wherein data may be pre-fetched like an I/O bridge, network cards and storage controllers or to other I/O bridges that connect to the peripheral I/O devices, and connect on their other end to a system interconnection network/north bridge/memory controller that connects to the system memory and processors.
  • the unified cache may be logically unified but physically separate.
  • the I/O hubs/bridges may use read caches or buffers for staging data between the system memory and the peripheral I/O devices or I/O bridges when such devices read from the memory. More often than not, the I/O devices read huge chunks of contiguous data from the memory via DMA operations, for example paging out data to disk from memory. This traffic pattern lends itself very well to pre-fetching. Therefore, the I/O hubs/bridges typically also have pre-fetch engines like pre-fetch engine 286 , that are responsible for handling the read requests from the peripheral I/O devices or I/O bridges and pre-fetching ahead of these requests (using the read caches or buffers for storing/staging the data) from the system memory to provide high streaming bandwidth.
  • pre-fetch engine 286 that are responsible for handling the read requests from the peripheral I/O devices or I/O bridges and pre-fetching ahead of these requests (using the read caches or buffers for storing/staging the data) from the system memory to provide high
  • the number of read streams may vary dynamically as an application executes.
  • a high performance I/O hub/bridge may provide high streaming bandwidth both when there is a single read stream as well as when there are many concurrent read streams.
  • Embodiments may comprise a die space efficient unified read cache or buffer architecture in the I/O hub/bridge across more than one stream from an I/O bus with adaptive pre-fetch scheduling via scheduler 287 .
  • Embodiments of the unified cache may comprise adaptive pre-fetch scheduling to use a unified common read cache/buffer of size XYZ-KB across more than one stream; restrict the maximum total cache/buffer usage across the more than one stream to the unified cache/buffer size of XYZ-KB wherein XYZ may be larger (e.g.
  • FIG. 12 illustrates an embodiment for a 4 active streams scenario wherein there may be 0.5 KB per stream and no unused space, the total space being 2 KB. The same example may apply to eight streams with 0.25 KB per stream and no unused space.
  • a first stream may become active, using 1 KB of cache and leaving 1 KB for a subsequent active stream.
  • an adaptive mechanism may adjust the cache available to each stream to leave 0.5 KB available for a subsequent stream until four streams use 0.5 KB of cache each.
  • This example may illustrate, for example, a single bus or I/O interface that may limit streams to 2 KB.
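  • One possible allocation rule matching the 2 KB example above is sketched below; the policy (reserve room for one more stream until a 0.5 KB floor is reached) is an assumption chosen to reproduce the 1 KB, 0.5 KB, and 0.25 KB figures, not the embodiment's exact mechanism.

```c
#define UNIFIED_CACHE_BYTES (2 * 1024)   /* 2 KB unified read cache in the example */
#define PER_STREAM_FLOOR    512          /* 0.5 KB floor per stream                */

/* Cache bytes made available to each active stream. */
static unsigned per_stream_bytes(unsigned active_streams)
{
    if (active_streams == 0)
        return 0;
    /* leave room for one subsequent stream while each share stays above the floor */
    unsigned share = UNIFIED_CACHE_BYTES / (active_streams + 1);
    if (share < PER_STREAM_FLOOR)
        share = UNIFIED_CACHE_BYTES / active_streams;   /* even split, no reserve */
    return share;
}
/* per_stream_bytes(1) == 1024 (1 KB, leaving 1 KB); per_stream_bytes(4) == 512. */
```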
  • the unified read cache/buffer architecture with adaptive pre-fetch scheduling may provide high streaming bandwidth performance in I/O hubs/bridges by efficiently using smaller die space.
  • an embodiment may comprise the I/O Hub chip of the chipset which may use a look-up table adaptive scheduling mechanism with timer based active/in-active stream detection and LRA cache replacement algorithm using a fully associative read cache per Hublink bus.
  • a stream timing system may be implemented by scheduler 287 to improve the determination or the timing of a determination of active and/or inactive streams.
  • the stream timing system may comprise a timing mechanism to determine an occurrence of a event and comprising a reset mechanism to change the event; cache logic circuitry coupled with said timing mechanism to change allocation of a cache structure for a stream based upon the occurrence of the event; and cache coupled with said cache logic circuitry to store data in the cache structure.
  • the stream timing system may comprise an event that is heuristically determined.
  • the stream timing system may provide an ability to enhance cache allocation for streams by detecting when a stream may become active or inactive so the number of streams may be automatically computed in real time and pre-fetching may be dynamically controlled. For example, after a prefetch profile is chosen, the number of active streams may determine the appropriate LUT 288 entry and control the number of lines to prefetch. If only one stream exists and the mode is aggressive, then up to 8 cache lines, for instance, may be pre-fetched. If the number of streams increases to 2, then a stream may be limited to 4 cache lines. When there may be many streams active, the cost of excess pre-fetching may be high since the memory subsystem may be overloaded with requests that may be wasted, increasing the startup latency of the peripherals.
  • One embodiment of a timing mechanism may comprise an inactivity timer.
  • the I/O Hub may implement, for example, a 10-bit inactivity timer for each of the active streams involved with speculative pre-fetching.
  • the timer may facilitate de-allocation of a stream after the stream becomes inactive.
  • An embodiment of a timing mechanism that may maintain cache allocation for inactive streams may suppress pre-fetching for requests from other useful streams since the LUT is a function of the number of perceived streams in the IOH and will use a less than ideal value.
  • a timing mechanism that may de-allocate an active stream may result in early stream destruction and increase memory overshoot through excess prefetch.
  • some embodiments that implement timers may heuristically choose time periods to determine when a stream may be inactive or may still be active.
  • each 10-bit timer runs at 200 MHz providing a programmable value of 1.28 microseconds to 5.12 microseconds.
  • the timer may begin counting after the data requested by the Hub Interface master is delivered on the I/O Interface 290 . Whenever a new request “hits” an allocated stream, the timer may be cleared. After the timer reaches a value, such as values described in FIG. 13 or programmed in the I/O Hub register, the stream may be deemed to have expired and may be de-allocated from the stream structure.
  • FIG. 13 shows an example of how the inactivity may be programmed for each Hub Interface depending on, for instance, the corresponding PCI subsystem based on performance analysis.
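  • The inactivity-timer behavior might be modeled as follows; the tick handling and field names are assumptions, while the clear-on-hit and de-allocate-on-expiry behavior follows the description above.

```c
#include <stdint.h>
#include <stdbool.h>

struct stream_timer {
    uint16_t count;     /* e.g. a 10-bit counter clocked at 200 MHz */
    uint16_t expiry;    /* programmable threshold (compare FIG. 13) */
    bool     allocated; /* stream allocated/perceived active        */
};

/* A new request that hits the allocated stream clears the timer. */
static void on_stream_hit(struct stream_timer *t)
{
    t->count = 0;
}

/* Each tick advances the timer; on expiry the stream is de-allocated. */
static void on_timer_tick(struct stream_timer *t)
{
    if (!t->allocated)
        return;
    if (++t->count >= t->expiry)
        t->allocated = false;
}
```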
  • hub interface 280 may also provide circuitry to determine a coherency state for an inbound transaction and respond to coherency requests issued across coherency interface 230 from snoop filter 210 . For example, when snoop filter 210 sends an ownership completion, hub interface 280 may accept the completion and update the status of the targeted inbound transaction as owning the memory line, or change the coherency state of the targeted inbound transaction from a pending state to ‘exclusive’.
  • hub interface 280 may accept the invalidation and reissue a request for ownership after the inbound write transaction may reach or near the top of upbound ordering FIFO queue 240 .
  • arbitration circuitry 222 may grant access to a transaction queue and the corresponding transaction may transmit to upbound path 220 .
  • Upbound path 220 may comprise pending data buffer 224 and pending transaction buffer 226 .
  • Pending data buffer 224 may receive and store data associated with an upbound transaction awaiting transmission across unordered interface 207.
  • Pending transaction buffer 226 may store a transaction header for a transaction pending on the unordered interface 207 .
  • hub interface 280 may place the header of the transaction in upbound ordering FIFO queue 240 and transmit the data associated with the header to pending data buffer 224 .
  • the header may be forwarded to the pending transaction buffer 226 to await transmission across unordered interface 207 . Then, the data may transmit across unordered interface 207 .
  • pending data buffer 224 may comprise a separate buffer for one or more I/O devices coupled with I/O interface 290 based upon one or more hub ID's. In other embodiments, pending data buffer 224 may comprise mechanisms such as pointers to associate a section of a buffer with a hub ID.
  • hub interface 280 may also comprise starvation circuitry to prevent starvation of a transaction, or leaf of transactions, as a result of ownership stealing.
  • starvation circuitry may monitor the number of invalidations transmitted to and/or accepted by hub interface 280 for a transaction, or a leaf of transactions, and once a count of invalidations reaches a starvation number, the starvation circuitry may stall the I/O interface 290 to flush the upbound ordering FIFO queue 240 and/or read bypass FIFO queue 250 .
  • the starvation number may be based upon statistical and/or heuristic data and/or a formula derived there from.
  • starvation circuitry may couple with arbitration circuitry 222 to modify the level of access arbitrated to upbound ordering FIFO queue 240 and/or read bypass FIFO queue 250 .
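  • A sketch of the starvation control described above; the counter handling and the stall hook are assumptions, and the starvation threshold is left as a parameter derived elsewhere from statistical or heuristic data.

```c
#include <stdbool.h>

struct starvation_ctrl {
    unsigned invalidation_count; /* invalidations accepted for a transaction/leaf */
    unsigned starvation_number;  /* threshold from statistical/heuristic data     */
};

/* Count an accepted ownership invalidation; returns true when the I/O
   interface should be stalled so the ordering/bypass FIFOs can flush. */
static bool on_invalidation_accepted(struct starvation_ctrl *s)
{
    if (++s->invalidation_count >= s->starvation_number) {
        s->invalidation_count = 0;
        return true;
    }
    return false;
}
```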
  • the embodiment may comprise receiving a first transaction from an ordered interface 300 ; comparing the first address to a cached address associated with a line of a cache, wherein the first transaction comprises a read transaction 310 ; comparing a first address associated with the first transaction against a second address in an address queue, wherein the second address is associated with a second transaction 320 ; prefetching ownership of a memory content associated with the first address, wherein the first address is different from the second address 340 ; and advancing the first transaction to the unordered interface substantially independent of an advancement of the second transaction to the unordered interface, wherein the second address is different from the first address 360 .
  • Receiving a first transaction from an ordered interface 300 may comprise receiving a transaction from an I/O device coupled with the ordered interface in a transaction order according to ordering rules associated with the I/O interface or an I/O device coupled with the I/O interface.
  • the transactions may be received from more than one I/O device and, in several embodiments, the transactions received may comprise transactions subject to independent ordering rules.
  • Some embodiments may comprise comparing the first address to a cached address associated with a line of a cache, wherein the first transaction comprises a read transaction 310 .
  • Comparing the first address 310 may compare a first memory line address against memory line addresses in the cache to determine whether a valid copy of the memory line may be stored in a line of the cache.
  • Comparing the first address 310 may comprise responding to the first transaction with the line of the cache, wherein the first address substantially matches the cached address 313 and attaching cache line invalidation data to the read transaction to invalidate the line in the cache 315 .
  • Responding to the first transaction with the line of the cache, wherein the first address substantially matches the cached address 313 may comprise retrieving a line of the cache from a cache to store data for a leaf and/or a hub ID.
  • the cache may comprise one or more memory arrays that dedicate an amount or physical division of the cache to store data for a leaf or hub ID. These embodiments may comprise populating the cache with data of a memory line anticipated to be the target of a subsequent read transaction.
  • a cache pre-fetch algorithm may anticipate a memory line as the target of a subsequent read transaction based upon a read and/or write transaction from an I/O device or leaf.
  • Responding to the first transaction 313 may transmit a response or completion to the requester, or I/O device, without forwarding the read transaction to the unordered interface.
  • a cache hit may reduce the latency of the read transaction, as well as transactions that may not compete with the read transaction for access to the unordered interface after the hit.
  • Attaching cache line invalidation data to the read transaction to invalidate the line in the cache 315 may, after a cache miss, attach data to cause the snoop filter to invalidate a line of the cache to make room for an additional entry in the cache.
  • the invalidation may be attached to or incorporated into a read transaction that may read a memory line to store in the cache and, in some embodiments, the memory line may be stored in the cache line associated with the invalidation.
  • the cache line invalidation data may be inserted into the header of the read transaction.
  • the cache line invalidation may be subject to an ordering rule that does not allow the cache line invalidation data to pass a transaction associated with the same memory line, e.g. the ordering of the invalidation is dependent upon the transaction order of another pending transaction. So the advancement of the read transaction toward the unordered interface may be restricted or limited by the ordering rule for the attached cache line invalidation.
  • the ordered interface may receive a read transaction and a comparison of the memory line associated with the read transaction may result in a cache miss.
  • read cache logic may decide to store the memory contents of the memory line associated with the read transaction into the cache and piggy-back cache line invalidation data in the header of that read transaction.
  • Address logic may determine the memory line subject to the read transaction may have a different address than addresses stored in the address queue, however, the address associated with the cache line invalidation data, or the invalidation address, may match an entry in the address queue.
  • the read transaction may be placed at the bottom of an upbound ordering queue. Once the read transaction may reach the top of the upbound ordering queue, the read transaction may be eligible to transmit across the unordered interface, or may have satisfied the ordering rule corresponding to the cache line invalidation. In other situations, the read transaction may have to satisfy both an ordering rule associated with the invalidation address and the address of the memory line before becoming eligible to transmit upbound, advancing toward the unordered interface.
  • the snoop filter may invalidate an entry in the read cache to store the data resulting from the read transaction. After the completion for the read transaction is received, the data of the read completion may be written in the read cache at the entry associated with the cache line invalidation data.
  • Many embodiments may maintain a transaction order for an upbound transaction based upon an ordering of an I/O interface to transmit the upbound transaction to an unordered interface by placing the upbound transaction in an ordering queue. For example, a first write transaction received from an I/O interface may be placed in an upbound ordering queue. Then a second write transaction may be placed in the upbound order queue. After the first transaction may reach the top of the ordering queue, the first write transaction may issue to the unordered interface. In some embodiments, after receiving a completion for the first write transaction, the second write transaction may advance toward the unordered interface. In other embodiments, after the first write transaction may issue to the unordered interface, the second write transaction may advance upbound.
  • Many embodiments may maintain a transaction order to prevent problems associated with performing transactions out of order.
  • an agent on the ordered interface, such as an I/O device coupled with a bridge, may issue a series of four write transactions and, assuming that the transactions will be performed in order, issue a fourth write transaction that may modify the same memory contents that the first write transaction modifies.
  • if these transactions are performed in an order other than the order of issuance, the changes to the memory contents may be unpredictable.
  • Comparing a first address associated with the first transaction against a second address in an address queue, wherein the second address is associated with a second transaction 320 may determine whether a second transaction may perform an action upon the same memory line as the first transaction, whether the first and the second transaction may be issued from the same I/O device, or whether an invalidation address attached to the first transaction may match an address in the address queue. For instance, a write transaction and a read transaction may have been received prior to receiving the first transaction and the memory line addresses associated with the write and read transactions may be stored in an address queue.
  • the address logic circuitry may compare the memory line address associated with the first transaction against the memory line addresses associated with the read and write transactions to determine that one, both or neither of the transactions may perform an action on the same memory line. In response to a determination that one of or both the read and write transaction may perform an action on the same address, the address logic may transmit a signal to the ownership pre-fetch circuitry. In many of these embodiments, a signal may be transmitted to the ownership pre-fetch circuitry to stop, request, and/or initiate, pre-fetching ownership for the first transaction.
  • Comparing a first address associated with the first transaction against a second address in an address queue, wherein the second address is associated with a second transaction 320 may comprise comparing a first memory line address associated with the first transaction against a second memory line address associated with the second transaction 325 and comparing a first hub identification of the first address against a second hub identification of the second address 330 .
  • Comparing a first memory line address 325 may compare the address of the memory line that the first transaction may perform a read of or write to against the address of the memory line address that the second transaction may perform a read of or write to, to determine whether the first transaction and the second transaction may perform action on the same memory line and, in many embodiments, whether the transactions may perform actions on the same memory contents of the memory line.
  • comparing a first memory line address 325 may determine that the first transaction may write to the same memory cells as the second transaction. In other situations, comparing a first memory line address 325 may determine that a read may be performed on the same memory cells as a write. In many embodiments, comparing a first memory line address 325 may further determine whether the first transaction may advance toward the unordered interface, or upbound, independent of an advancement of the second transaction upbound by comparing an invalidation address associated with the first transaction against a list of invalidations addresses in an address queue.
  • Comparing a first hub identification of the first address against a second hub identification of the second address 330 may determine whether the first transaction and the second transaction are transactions from the same I/O device, such as an Ethernet card.
  • two I/O devices may be coupled to a bridge and the bridge may be coupled with an I/O interface to allow the two input output devices to transact across an unordered domain.
  • the first I/O device may be associated with a hub ID of zero and the second I/O device may be associated with a hub ID of one.
  • the transactions associated with the first I/O device may be independent of transactions associated with the second I/O device (hub ID one) with respect to ordering rules.
  • Some embodiments take advantage of the independence by comparing a first hub identification of the first address against a second hub identification of the second address 330 . Other embodiments may not track the hub ID associated with a transaction.
  • pre-fetching ownership of a memory content associated with the first address may initiate a request for ownership of an address prior to the first transaction satisfying ordering rules associated with that first transaction.
  • Pre-fetching ownership 340 may pre-fetch ownership of the memory content for a transaction so that the transaction may be ready to transmit across an unordered domain as soon as the transaction satisfies its ordering requirements.
  • Pre-fetching ownership 340 may comprise initiating a request for ownership of the memory content by the first transaction before the second transaction is to satisfy an ordering rule to transmit to the unordered interface 345 .
  • Initiating a request for ownership 345 may steal an ownership from the second transaction, or take ownership of the same memory line as the second transaction, wherein the transaction order of the first transaction is independent of the ordering rules associated with the second transaction.
  • the snoop filter may invalidate the ownership of the memory line by the second transaction.
  • the second transaction may not have an ownership of the memory line so the first transaction may gain ownership of the memory line before the second transaction may receive ownership.
  • the first transaction and the second transaction may race to satisfy ordering rules and, if the second transaction satisfies its ordering rules first, the second transaction may steal the ownership from the first transaction.
  • the second transaction may request and receive ownership for the memory line.
  • initiating a request for ownership of the memory content by the first transaction 345 may pre-fetch ownership for the first transaction after a determination that the ordering requirements of, or ordering rules for, the first transaction may be independent of the ordering rules for the second transaction.
  • determining that the ordering rules may be independent may comprise determining that the first address and the second address are different, such as a different target address or a different source address.
  • the different target address may comprise a different memory line and the different source address may comprise a different hub ID.
  • the hub ID may be a part of a number that identifies the source I/O device.
  • Many embodiments may comprise advancing the first transaction to an unordered interface substantially independent of an advancement of the second transaction to the unordered interface, wherein the second address is different from the first address 360. Advancing the first transaction 360 may allow a read or write transaction to bypass the upbound ordering queue wherein the memory line associated with the read or write transaction, the invalidation address, and/or the hub ID associated with the read or write transaction may differ from memory lines, invalidation addresses, and/or hub ID's stored in the address queue.
  • Advancing the first transaction to an unordered interface substantially independent of an advancement of the second transaction to the unordered interface, wherein the second address is different from the first address 360 may comprise advancing a read transaction 365 and advancing the first transaction to the unordered interface substantially independent of the advancement of the second transaction, wherein a hub identification associated with the second transaction is different from a hub identification associated with the first transaction 375 .
  • Advancing a read transaction 365 may place the read transaction in a read bypass queue when the read transaction was initiated by a source device associated with a hub ID that is different from hub ID's associated with transactions in an upbound ordering queue. For instance, a read transaction having a hub ID of zero may be placed in the read bypass queue when the upbound ordering queue has no entries associated with hub ID zero.
  • Advancing a read transaction 365 may comprise advancing the read transaction to the unordered interface substantially independent of the advancement of the second transaction unless a memory line address associated with the read transaction is substantially equivalent to a memory line address associated with the second transaction 370 .
  • Advancing the read transaction 370 may forward the read transaction to a read bypass queue when the memory line to be read is different from the memory lines stored in the address queue or memory lines of transactions awaiting transmission across the unordered interface.
  • the address queue may also store hub ID's associated with pending transactions
  • advancing the read transaction 370 may also forward the read transaction to the read bypass queue when the hub ID associated with the read transaction is different from the hub ID's in the address queue.
  • Advancing the first transaction to the unordered interface substantially independent of the advancement of the second transaction, wherein a hub identification associated with the second transaction is different from a hub identification associated with the first transaction 375 may allow a write and/or read transaction to bypass another write or read transaction in an upbound ordering queue since the ordering for the transactions are independent.
  • a write transaction initiated by a first I/O device may write to memory line one of a system memory in an unordered domain via an unordered interface.
  • a read transaction may read from memory line one and may be initiated by a second I/O device after the write transaction was stored in the upbound ordering queue.
  • the read transaction may bypass the write transaction since the ordering rules associated with the read transaction are independent of the ordering rules associated with the write transaction.
  • a machine-readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine (e.g., a computer) that, when executed by the machine, may perform the functions described herein.
  • a machine-readable medium may include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other forms of propagated signals (e.g. carrier waves, infrared signals, digital signals, etc.); etc.
  • Several embodiments of the present invention may comprise more than one machine-readable medium depending on the design of the machine.
  • FIG. 15 shows an embodiment of a machine-readable medium 400 comprising instructions for receiving a first transaction from an ordered interface 410 ; comparing a first address associated with the first transaction against a second address in an address queue, wherein the second address is associated with a second transaction 420 ; and pre-fetching ownership of a memory content associated with the first address, wherein the first address is different from the second address 430 .
  • Receiving a first transaction from an ordered interface 410 may comprise receiving a read or write transaction from an I/O device coupled with the ordered interface to transmit across an unordered interface.
  • Instructions for pre-fetching ownership of a memory content associated with the first address, wherein the first address is different from the second address 430 may comprise instructions for pre-fetching ownership for a write transaction wherein the address, such as a memory line address and/or hub ID, associated with the write transaction is different from one or more addresses stored in an address queue.
  • the instructions to determine the address is different from one or more addresses stored in an address queue may comprise instructions to determine whether the write transaction is subject to ordering rules that are not independent of ordering rules of transactions awaiting transmission across the unordered interface.
  • FIG. 16 there is shown an example embodiment of an SPS 500 comprising a shared bypass bus structure 510 .
  • the shared bypass bus structure 510 may facilitate low-latency cache coherency operations by providing practical, fast connectivity from the Scalability Ports to the component's coherency interleaves which bypasses the port-to-port crossbar interconnect.
  • the shared bypass bus structure may comprise a first scalability port 535 ; bypass bus structure 510 coupled with said first scalability port 535 ; a coherency interleave, e.g. 520 or 525 , coupled with said bypass structure 510 to transact with said first scalability port 535 ; a crossbar structure 530 coupled with said first scalability port 535 ; and a second scalability port 540 coupled with said crossbar structure 530 to transact with said first scalability port 535 and coupled with said bypass bus structure 510 to transact with said coherency interleave 520 or 525 substantially independent from a transaction with said first scalability port 535 .
  • the shared bypass bus structure 510 may couple between the Scalability Ports 535 and 540 and the coherency interleaves 520 and 525 of the switch component.
  • FIG. 17 depicts a Local shared crossbar bypass bus structure comprising the incoming local bypass bus 606 and outgoing local bypass bus 601 .
  • the interconnect structure is termed a Remote shared crossbar bypass bus structure.
  • FIG. 18 shows a Remote shared crossbar bypass bus structure, with an outgoing remote bypass bus 603 and an incoming remote bypass bus 608 .
  • a shared bypass bus structure may be used exclusively or substantially exclusively for communicating between a Scalability Port and a coherency interleave. In many of these embodiments, shared bypass bus structure may not be used for communicating between one Scalability Port and another Scalability Port, nor for communicating between two coherency interleaves.
  • Shared bypass bus structure 510 may provide, in some embodiments, complete or substantially complete connectivity between Scalability Ports 535 and 540 and coherency interleaves 520 and 525 with m x n bus structures, where m may be the number of Scalability Port groups and n may be the number of coherency interleave groups.
  • Each Shared bypass bus structure has an incoming data bus and an outgoing data bus, for a total of eight buses in the embodiment of the component shown.
  • the Shared bypass bus structure may comprise an arbitration controller to coordinate the use of the buses and, in many embodiments, to provide for fair access to the buses such that no coherency interleave or SP may be indefinitely blocked from access to a bus by the activities of another unit.
  • the Shared bypass bus structure may provide for communication of request and response information for memory cache coherency operations based on the Intel Scalability Port Protocol or a similar protocol.
  • request and response items may be transmissible independent of each other and may comprise independent flow control. For example, flow control applied to request information may not be permitted to block response information indefinitely.
  • the arbitration controller of the Shared bypass bus structure may treat request and response information or data as two separate “virtual channels,” and may provide for access to the buses for each virtual channel regardless of the status of the other virtual channel.
  • Shared bypass bus structure 510 may be shared both among multiple transmitters and among multiple receivers and may comprise parallel data bits, a data valid qualifier to identify valid data on a bus, a virtual channel qualifier to select a channel, and a multi-bit destination qualifier to select an address.
  • a receiver may consider data on the bus to be valid when it recognizes its identification code in the destination field while the data valid signal is asserted.
  • the buses may be accompanied by arbitration and handshaking signals to facilitate bus arbitration and flow control such as a request-channel arbitration request from each transmitting unit to the arbiter; a response-channel arbitration request from each transmitting unit to the arbiter; a selected signal from the arbiter to each transmitting unit, indicating that that transmitter owns the bus and its data can be observed by the receivers; a request-channel ready signal from each receiver for flow control, observed by all transmitters and by the arbiter; and a response-channel ready signal from each receiver for flow control, observed by all transmitters and by the arbiter.
  • arbitration and multiplexing may be accomplished as close physically to the transmitting units as possible, to limit data bus congestion and silicon area consumption.
  • Operations may comprise a unit, such as a coherency interleave or Scalability Port, having data to transmit asserting a request-channel arbitration request or a response-channel arbitration request.
  • the unit may also transmit data to a local bus and may assert a valid signal. Then, based upon a selection mechanism and a fairness mechanism, such as a round-robin determiner, the arbiter may select one of the requesting units to own the bus in a subsequent clock cycle.
  • the arbiter may transmit a control signal to the bus multiplexor and may transmit a selected signal to a transmitter. After the transmitting unit observes that the data has been received, the valid qualifier may be de-asserted or new data may transmit.
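For illustration only (not part of the original disclosure), the following Python sketch models one plausible reading of the arbitration described above: each transmitting unit raises a per-channel arbitration request, and a round-robin arbiter grants the bus to one requester per cycle while honoring the per-channel receiver ready signals. All class and signal names are hypothetical.

    # Hypothetical sketch of shared bypass bus arbitration: round-robin selection
    # among transmitters, with "request" and "response" virtual channels that are
    # flow-controlled independently so neither channel can indefinitely block the other.

    class SharedBypassArbiter:
        def __init__(self, transmitters):
            self.transmitters = list(transmitters)   # unit names, e.g. SPs and coherency interleaves
            self.last_granted = -1                   # index of the last owner, for round-robin fairness

        def select(self, arb_requests, receiver_ready):
            """arb_requests: {unit: {"request": bool, "response": bool}}
            receiver_ready: {"request": bool, "response": bool} for the targeted receivers.
            Returns the (unit, channel) granted the bus in the next cycle, or None."""
            n = len(self.transmitters)
            for offset in range(1, n + 1):                   # start after the last owner
                idx = (self.last_granted + offset) % n
                unit = self.transmitters[idx]
                for channel in ("response", "request"):      # check both virtual channels
                    if arb_requests.get(unit, {}).get(channel) and receiver_ready[channel]:
                        self.last_granted = idx
                        return unit, channel
            return None

    arbiter = SharedBypassArbiter(["SP0", "SP1", "interleave0", "interleave1"])
    grant = arbiter.select(
        {"SP0": {"request": True, "response": False},
         "interleave1": {"request": False, "response": True}},
        receiver_ready={"request": True, "response": True},
    )
    print(grant)   # ('SP0', 'request') on this call; the rotation continues from the last owner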
  • Advantages of these embodiments may, for instance, comprise an approach to a connectivity problem that neither a crossbar switch structure, nor a collection of point-to-point buses, nor a component-wide multiply driven bus may feasibly or advantageously solve.
  • the Shared bypass bus structure may yield performance, cost, and architectural advantages over these other approaches.
  • the approach presented here may combine the performance advantages of a crossbar switch design with those of a direct-connect bus.
  • the crossbar switch may provide high throughput and connectivity for the streaming of memory data between ports. Meanwhile, operations to initiate memory transfers and to perform cache state lookups and updates may be allowed to bypass the crossbar, potentially yielding latency savings in the idle to light activity case. More specifically, in an embodiment comprising the chipset Scalability Port Switch component, the Scalability Port Switch component's latency contribution may be reduced by an estimated 20% to 30% for many common operations.
  • the structure may also provide similar advantages over an alternative approach and even alternative embodiments, such as that of a component-wide multiply driven bus.
  • the number of and distance between design units driving such a bus result in extra transmission times for control signals and bus driver turn-on and turn-off times, as well as possible frequency limitations as compared to the Shared crossbar bypass bus structure.
  • cache state lookup and update operations may not compete with data streaming resources.
  • the cache coherency operations may thus be processed immediately or substantially immediately, thereby providing performance gains under high-activity in many embodiments.
  • Another advantage may include area and cost improvements for some embodiments.
  • providing complete or substantially complete bipartite connectivity between all or most ports and interleaves via a coherency crossbar switch may require significantly more silicon area, which may raise the cost of those components.
  • the bypass structure may be expanded for data transfer between Scalability Ports and dedicated access ports may be added to the crossbar for the coherency interleaves, although this may be more costly in silicon area.
  • the Shared crossbar bypass bus structure design lowers development time and cost, and limits risk to the development schedule.
  • the logic and signal timing to share crossbar access ports between two distinct physical and logical design units on both the sending and receiving ends represent very difficult obstacles, to which some embodiments of the Shared bypass bus structure may provide a simple alternative.
  • a further advantage may comprise partitioning.
  • the Shared bypass bus structure lends itself well to the clean partitioning of the component into separate domains as a result of separating interleaves, regions, and/or SPs into distinct address ranges, for example. This aspect may facilitate development of chipsets or the like with desirable Reliability, Availability, and Serviceability (RAS) features.
  • the memory re-ordering mechanism in FIG. 21 may comprise memory write queue 700 ; write re-order queue 705 ; memory read queue 710 ; read re-order queue 715 ; arbitration unit and conflict checker 720 ; refresh unit 730 , DDR protocol state machines 740 and multiplexer 750 .
  • Write queue 700 may hold, for example, 64 entries of requests and data to write to an address in memory.
  • Read queue 710, for example, may hold 32 entries of read requests for memory.
  • When a read accesses the same address as a write that is present in the write queue 700, data may be forwarded from the pending data buffer to the requestor or agent without accessing physical memory. Writes may be flushed to memory in the absence of reads, and reads may fetch data from memory if they do not hit in write queue 700.
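As an informal illustration of the write-queue hit check just described (not part of the original disclosure), the following Python sketch forwards read data from a matching pending write instead of accessing physical memory; the function and variable names are hypothetical.

    # Hypothetical sketch: a read that hits an address still held in the write queue
    # is serviced from the pending write data rather than from DRAM.

    def service_read(address, write_queue, read_memory):
        """write_queue: list of (address, data) entries awaiting flush to memory.
        read_memory: callable performing the actual memory access."""
        for pending_addr, pending_data in reversed(write_queue):   # newest matching write wins
            if pending_addr == address:
                return pending_data          # forward from the pending data buffer
        return read_memory(address)          # no hit: fetch from physical memory

    write_queue = [(0x1000, b"old"), (0x2000, b"new")]
    print(service_read(0x2000, write_queue, lambda a: b"from-DRAM"))   # b'new'
    print(service_read(0x3000, write_queue, lambda a: b"from-DRAM"))   # b'from-DRAM'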
  • Reads and writes may comprise inbound and/or upbound requests for memory.
  • the reads and writes that may remain in the read and write queues 700 and 710 may be forwarded or transmitted to the re-ordering queues 705 and 715 .
  • the present embodiment may comprise four write re-order queues 705 and four read re-order queues 715 for write requests and read requests, respectively, and a re-order queue may be two entries deep so that 8 reads and 8 writes may be stored in the re-order queues 705 and 715 .
  • Re-order queues 705 and 715 may become filled after a request belonging to that re-order queue arrives (for write requests after data has been received).
  • Read/write requests may be distributed to reordering queues 705 and 715 depending on which DDR channel (if there are independent DDR channels) and/or bank is targeted. In some embodiments, for instance, two channels of DDR I and bank address bit B[0] may also be used.
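For illustration only, a minimal Python sketch of one way the four re-order queues could be selected from the DDR channel and bank address bit B[0], as described above; the exact bit packing is an assumption.

    # Hypothetical sketch: two DDR channels x bank bit B[0] -> one of four re-order queues.

    def reorder_queue_index(channel, bank):
        """channel: 0 or 1 (independent DDR channels); bank: DDR bank number."""
        return (channel << 1) | (bank & 0x1)

    for channel, bank in [(0, 0), (0, 3), (1, 0), (1, 5)]:
        print((channel, bank), "->", reorder_queue_index(channel, bank))
    # (0, 0) -> 0, (0, 3) -> 1, (1, 0) -> 2, (1, 5) -> 3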
  • Arbitration unit 720 may check timing conflicts and may schedule a request to one of the 8 protocol state machines.
  • the arbitration unit may look at 4 read requests and 4 write requests at a time or substantially simultaneously. If there is no read or write request in the queue that may belong to a particular re-order queue based upon the channel number and/or the bank number, then that re-order queue may be empty in many embodiments.
  • The arbitration unit may keep track of memory addresses that are currently accessing memory and compare those addresses with the 4 read addresses and 4 write addresses from the re-order queues 705 and 715. A read or write request may be picked up in such a way that it may be scheduled to access memory immediately.
  • refresh may have the highest or nearly the highest priority, in part because, in many embodiments, refreshes may not occur often.
  • Read requests may comprise a second priority and then writes may be at a third priority level, unless, for instance, the write queue 700 is full. When the write queue 700 is full, the write requests may be at a second priority level.
  • a round robin priority determiner may facilitate selection of one of the four read requests and one of the four write requests from re-order queues 705 and 715, unless the queue entry has a conflict with an ongoing transaction. Further, in several embodiments, when a re-order queue is skipped, it is marked and receives a high or the highest priority after some time. After a request has been scheduled by the arbitration unit, the request may go to one of the 8 DDR state machines for access to memory.
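The following Python sketch (illustrative only, with hypothetical names) captures the priority ordering described above: refresh first, then reads, then writes, with writes promoted when the write queue is full.

    # Hypothetical sketch of the request-class priority used by the arbitration unit.

    def next_request_class(refresh_pending, read_reorder_nonempty,
                           write_reorder_nonempty, write_queue_full):
        if refresh_pending:
            return "refresh"                    # highest priority, but infrequent
        if write_queue_full and write_reorder_nonempty:
            return "write"                      # writes promoted when the write queue is full
        if read_reorder_nonempty:
            return "read"                       # reads normally at second priority
        if write_reorder_nonempty:
            return "write"                      # writes normally at third priority
        return None

    print(next_request_class(False, True, True, False))   # read
    print(next_request_class(False, True, True, True))    # write (write queue full)
    print(next_request_class(True, True, True, False))    # refresh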
  • Arbitration unit and conflict check logic 720 or state machine may check for page replace conflicts and DIMM conflicts.
  • Page replace conflicts may involve a greater penalty than a DIMM conflict in terms of turnaround time. So if all re-order queue entries involve a conflict, an entry with a page replace conflict may get a lower priority.
  • the present embodiment may show a significant performance gain in memory bandwidth. For example, memory reads and writes may each be distributed among 4 re-order queues.
  • the arbitration unit may review up to 8 transactions that are pending to be scheduled and also the transactions that are currently scheduled on a DRAM channel by one of the 8 state machines. The arbitration unit may first look for transactions with page empty or page hit cases to be scheduled. Then, read/write requests with a page replace conflict against an existing transaction, or with a DIMM turnaround conflict, may be pushed out until the timing conflict is eliminated.
  • State machines such as DDR protocol state machines 740 , may schedule one read or write transaction to a DDR channel and hold that entry until the transaction is complete.
  • the present embodiment may comprise 8 DDR protocol state machines.
  • Embodiments may provide better feedback from DRAM protocol state machines so the arbitration unit does not have to wait for a data phase to complete before the next transaction to the same resource is scheduled.
  • Some embodiments may not have one re-ordering queue per resource; for example, embodiments may comprise 4 read and 4 write re-order queues 705 and 715, and feedback from the 8 state machines may provide more information for re-ordering. Further, embodiments may comprise no or infrequent timing dependency between different re-order queues, while within the same queue there may or may not be a timing dependency. Top entries of a re-order queue may not be checked against one another, and a state machine may schedule one transaction, as opposed to one state machine per bank or per resource.

Abstract

In a system supporting concurrent multiple streams that pass through a cache between memory and the requesting devices, various techniques improve the efficient use of the cache. Some embodiments use adaptive pre-fetching of memory data using a dynamic table to determine the maximum number of pre-fetched cache lines permissible per stream. Other embodiments dynamically allocate the cache to the active streams. Still other embodiments use a programmable timer to deallocate inactive streams.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application claims priority of the filing date of co-pending U.S. provisional application 60/359,316, filed Feb. 25, 2002.[0001]
  • BACKGROUND
  • 1. Technical Field [0002]
  • An embodiment of the invention pertains generally to processor systems, and in particular pertains to scalable processor systems. [0003]
  • 2. Description of the Related Art [0004]
  • With the rapid evolution of the Internet, requirements for enterprise server systems have become increasingly diverse. Front-end and departmental servers are very cost and power sensitive, while back-end servers that traditionally run database-type applications require the highest level of performance along with multi-dimensional scalability and 24×7 availability. This segmentation of the server platforms has led to the development of a multitude of chipsets. A chipset encompasses the major system components that move data between the main memory, the processor(s), and the I/O devices. System vendors have designed separate chipsets with different system architectures to address the needs of different server segments, or use industry-standard components to address the needs of low-end systems and design proprietary components for mid-range and high-end systems. [0005]
  • Current systems have memory connected to a processor to store data, such as data that is accessed with streams. A stream is a contiguous sequence of requests from an agent typically connected to the processor and memory system via a chipset or the like. The memory may include dynamic random access memory, and the requests are processed in the same order as they are received. Processing the requests in the same order that they are received reduces memory bandwidth when, for example, a page replace conflict or DIMM turnaround conflict forces a transaction to wait for a prior transaction to finish. Further, current systems provide a single path for cache coherency operations and data transfer, causing cache coherency transactions to wait for data transfers, increasing snoop latency. [0006]
  • Coherent transactions limit the bandwidth for transactions from a peripheral input-output (I/O) bus in processor-based systems such as desktop computers, laptop computers, and servers. Processor-based systems typically have a host bus that couples a processor and main memory to ports for I/O devices. The I/O devices, such as Ethernet cards, couple to the host bus through an I/O controller or bridge via a bus such as a peripheral component interconnect (PCI) bus. The I/O bus has ordering rules that govern the order of handling of transactions so an I/O device may count on the ordering when issuing transactions. When the I/O devices may count on the ordering of transactions, I/O devices may issue transactions that would otherwise cause unpredictable results. For example, after an I/O device issues a read transaction for a memory line and subsequently issues a write transaction for the memory line, the I/O device expects the read completion to return the data prior to the new data being written. However, the host bus may be an unordered domain that does not guarantee that transactions are carried out in the order received from the PCI bus. In these situations, the I/O controller governs the order of transactions. [0007]
  • The I/O controller places the transactions in an ordering queue in the order received to govern the order of inbound transactions (transactions toward the main memory and/or processor) from an I/O bus, and waits to transmit the inbound transaction across the unordered interface until the ordering rules corresponding to each transaction are satisfied. However, issuing transactions one at a time as the transaction satisfies ordering rules may limit the latency of a transaction to a nominal latency equal to the nominal snoop latency for the system. In addition, when multiple I/O devices transmit coherent transactions to the I/O controller, transactions unnecessarily wait in the ordering queue for coherent transactions with unrelated ordering requirements. For example, in conventional systems, a read transaction received subsequent to a write transaction for the same address will wait for the write transaction to issue even though the read transaction may have issued from a different I/O device, subjecting the read transaction to ordering rules independent from the ordering rules of the write transaction. As a result, the latency of the snoop request, or ownership request, for the write transaction adds to the latency of the read transaction and when a conflict exists with the issuance of the ownership request for the write transaction, the latency of the write transaction, as well as the read transaction, will be longer than the nominal snoop latency for the system. [0008]
  • I/O devices continue to demand increasing bandwidth, increasing the amount of time transactions remain in an ordering queue. For example, in conventional products, the number of delays resulting from a foreseeable read transaction that waits to access a memory line across the unordered interface, and a read transaction that waits for a write transaction to satisfy ordering requirements when the write transaction will write to a different memory line, can escalate in proportion with bandwidth. [0009]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The invention may be understood by referring to the following description and accompanying drawings that are used to illustrate embodiments of the invention. In the drawings: [0010]
  • FIGS. 1-4 depict embodiments of a scalable system. [0011]
  • FIG. 5 depicts an embodiment of a Scalable Node Controller. [0012]
  • FIG. 6 depicts an embodiment of a Scalability Port Switch. [0013]
  • FIG. 7 depicts an embodiment of an I/O Hub. [0014]
  • FIG. 8 depicts a table to compare embodiments comprising partitioning and/or a hot plug mechanism. [0015]
  • FIG. 9 depicts an embodiment of an apparatus such as an I/O Hub for a scalable system. [0016]
  • FIG. 10 depicts another embodiment of an apparatus such as an I/O Hub or a Hub Interface thereof for a scalable system. [0017]
  • FIGS. 11A-B depict example embodiments and comparisons of prefetch profiles and lookup tables for a scalable system. [0018]
  • FIG. 12 depicts an example operation of an embodiment comprising unified cache. [0019]
  • FIG. 13 depicts an example table for operation of an inactivity timer as an embodiment of a timer mechanism as well as comparisons for different tables and for operation without a timer. [0020]
  • FIG. 14 depicts a flow chart of an embodiment of a scalable system. [0021]
  • FIG. 15 depicts an embodiment of a machine-readable medium comprising instructions for a scalable system. [0022]
  • FIG. 16 depicts another example embodiment of a scalable switch comprising a shared bypass bus structure. [0023]
  • FIGS. 17-20 depict example embodiments of a shared bypass bus structure. [0024]
  • FIG. 21 depicts an example embodiment of an apparatus to re-order memory.[0025]
  • Elements shown in the figures are presented as examples, and do not show all embodiments that are possible. [0026]
  • DETAILED DESCRIPTION
  • In the following description, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure an understanding of this description. References to “one embodiment”, “an embodiment”, “example embodiment”, “various embodiments”, etc., indicate that the embodiment(s) of the invention so described may include a particular feature, structure, or characteristic, but not every embodiment necessarily includes the particular feature, structure, or characteristic. Further, repeated use of the phrase “in one embodiment” does not necessarily refer to the same embodiment, although it may. In various places, multiple copies of an item are referred to with the designation “A”, “B”, “C”, etc. Any characteristics attributed to any one of the designated items (e.g., item 120A) may also be attributed to any of the like items (e.g., item 120B). [0027]
  • Various embodiments of the invention may improve the efficient use of a cache in a scalable system that supports concurrent multiple streams passing through the cache between memory and the requesting devices. Some embodiments use adaptive pre-fetching of memory data using a dynamic table to determine the maximum number of pre-fetched cache lines permissible per stream. Other embodiments dynamically allocate the cache to the active streams. Still other embodiments use a programmable timer to deallocate inactive streams, thereby freeing up a portion of the cache for other streams. [0028]
  • Referring now to FIG. 1, there is shown an embodiment that may comprise a single-bus shared memory architecture supporting up to four processors. Another embodiment may support a distributed shared memory architecture up to 16 or more processors. Other embodiments are also possible. [0029]
  • In such a single-bus shared memory system, the [0030] processors 100A and the memory controller in scalable node controller (SNC) 110A, may be attached to a common bus, front side bus 105A. This architecture may provide good performance and low cost and may be well suited for low-end servers. Each of processors 100A may have a private cache and may use the internal bus interface unit to monitor memory accesses on the bus. For this reason, a cache coherency protocol that may be used in these systems may be called a snooping protocol.
  • The embodiment shown may comprise two main components: Scalable Node Controller (SNC) [0031] 110A and the I/O Hub (IOH) 120A. The SNC 110A may support one to four processors 100A and may interface directly or substantially directly to the processors' Front Side Bus 105A. The main memory controller in the SNC 110A may support four memory channels. A double data rate (DDR) memory hub (DMH) on each memory channel may control eight DDR dual in-line memory modules (DIMM). The SNC 110A may also interface to a Firmware Hub (FWH) 117A, which may serve as a boot ROM for the system.
  • The [0032] SNC 110A may couple to the IOH 120A through a pair of Scalability Ports (SP). Each SP may provide 3.2 GB/s of bandwidth in each direction. The IOH 120A may support four Hub interfaces to connect to various bridges, such as a PCI/PCI-X bridge 125A and/or Infiniband® bridge 130A. A narrower version of the Hub interface may support legacy I/O devices 135A.
  • The embodiment of FIG. 1 may be limited by the bandwidth and the electrical limits of [0033] Front Side Bus 105A. FIG. 2 shows an embodiment with a multi-node scheme where clusters of multiple processors may be interconnected with Scalability Port Switch (SPS) 140A and SPS 140B for the illustrated embodiment of a 16-processor configuration.
  • [0034] SPS 140A and 140B may provide the interconnection and coherency support for building multi-node multiprocessor systems. SPS 140A, for instance, may comprise six SP interfaces to interconnect the SNC and IOH components.
  • In multi-node configurations, memory may be distributed physically across nodes but may also be visible from all processors as a single physical or logical address space. In some embodiments, multi-node systems may provide the programming simplicity of shared memory architectures. [0035]
  • Distributed memory architectures may exhibit significant difference in latency on local and remote memory accesses, sometimes by an order of magnitude. In some embodiments, software optimizations may mitigate the large remote to local accesses by moving or copying pages to the local memory. On the other hand, the ratio of remote to local latency in other embodiments, such as a multi-node configuration may be about 2.2, which may not require such software optimizations for scalable performance. [0036]
  • The SP protocol may be designed for scalability and such a protocol may facilitate the design of specialized switch components to build large scale coherent multi-chassis systems. FIG. 3 depicts a 64 processor configuration where four 16-processor chassis are interconnected through dedicated point-to-point links. [0037]
  • Referring now to FIG. 4, there is shown an embodiment of a scalable system. The embodiment may comprise processors such as processors 100A-D; processor interface circuitry, such as scalable node controllers 110A-B; memories 115A-B; I/O hub circuitry such as I/O hubs 120A-B; and I/O devices such as bridges 160 and 190 connected to agents 162, 164, 192 and 194. In embodiments that may comprise more than one I/O hub, such as I/O hubs 120A and 120B, support circuitry may couple the processor interface circuitry with the multiple hubs to facilitate transactions between I/O hubs 120A-B and processors 100A-D. [0038]
  • [0039] Scalable node controllers 110A and 110B may couple with processors 100A-B and 100C-D, respectively, to apportion tasks between the processors. In some of these embodiments, a scalable node controller 110A may apportion processing requests between processor 100A and processor 100B, as well as between processors 100A-B and processors 100C-D, for instance, based upon the type of processing request and/or the backlog of processing requests for the processors 100A-B and processors 100C-D.
  • In several embodiments, [0040] scalable node controller 110A may also coordinate access to memory 115A between the processors 100A-B and the I/O hubs 120A-B. The support circuitry for multiple I/O hubs, such as scalability port switches 140A and 140B, may direct traffic to scalable node controllers 110A and 110B based upon a backlog of transactions. In addition, scalability port switches 140A and 140B may direct transactions from scalable node controllers 110A and 110B to I/ O hubs 120A and 120B based upon destination addresses for the transactions. In many embodiments, memory 115A and memory 115B may share entries, or maintain copies of the same data. In several embodiments, memory 115A and memory 115B may comprise an entry that may not be shared so a write transaction may be forwarded to either memory 115A or memory 115B.
  • [0041] SNC 110A, as well as SNC 110B, may comprise a central component in the processor/memory sub-system. SNC 110A may comprise interfaces to the processors 100A and 100B, the memory 115A, a firmware interface, and two scalability ports for accesses to I/O. In some embodiments, features of the SNC 110A may comprise: support for up to four processors; 200 MHz DDR SDRAM support through a DDR Memory Hub (DMH) interface; two SPs to connect to the SPS 140A and 140B or the IOH 120A and 120B; and support for 32 DIMMs resulting in up to 128 GB per SNC 110A and 110B with 1 Gigabit (Gb) DDR devices.
  • In several embodiments, [0042] SNC 110A may comprise four high-speed point-to-point links to four DMHs that connect to components such as DDR DRAM components. In many of these embodiments, the four links may provide a peak memory bandwidth of 6.4 GB/s per node. SNC 110A may also buffer up to 8 KB of write data to prioritize reads over writes.
  • In further embodiments, [0043] SNC 110A may also implement interleaving and reordering to improve bandwidth and/or to reduce latency. Interleaving sequential accesses across many banks may optimize throughput and may minimize the effect of overhead. Reordering may allow conflict-free accesses to bypass requests to busy banks. Accesses may be sorted into four queues to minimize timing conflicts between accesses. If accesses are within a particular address range, they may be sorted by channel, then by least significant bank bit. Otherwise, they may be sorted by bank. An arbiter may choose from among the conflict-free accesses at the head of the four re-ordering queues. In many embodiments, these re-ordering policies may be chosen heuristically, deterministically, or by other techniques.
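As an informal illustration of the sorting policy described above (not part of the original disclosure), the Python sketch below sorts accesses into four re-ordering queues by channel and least-significant bank bit inside an assumed address range, and by bank otherwise; the range bounds are assumptions.

    # Hypothetical sketch of sorting accesses into four re-ordering queues.

    RANGE_START, RANGE_END = 0x0000_0000, 0x4000_0000   # assumed "particular address range"

    def sort_queue(address, channel, bank):
        if RANGE_START <= address < RANGE_END:
            return (channel << 1) | (bank & 0x1)   # by channel, then least-significant bank bit
        return bank % 4                            # otherwise, by bank

    print(sort_queue(0x1000_0000, channel=1, bank=5))   # 3 -> sorted by channel/bank bit
    print(sort_queue(0x8000_0000, channel=1, bank=5))   # 1 -> sorted by bank outside the range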
  • As shown in the embodiment of an SNC in FIG. 5, [0044] SNC 110A may comprise three main units: local access transaction tracker (LATT) 100E, remote access transaction tracker (RATT) 110E, and Data Buffer 120E. LATT 100E may track processor requests. LATT 100E may convert processor requests to SP or memory controller requests and may return responses to the processors 100A-D.
  • [0045] RATT 110E may track inbound transactions from Scalability Ports until the necessary snoops and/or memory accesses are complete. Further, Data Buffer 120E may transport and may hold data between the processor bus, memory interface, and the SP interfaces.
  • In still further embodiments, [0046] SNC 110A and/or 110B may comprise a hot page mechanism to tune memory latency, as described below. Multi-node configurations may feature a shorter latency for local memory accesses. In such embodiments, which may include software that may comprehend processor affinity and may be tuned to favor local memory accesses, performance may be enhanced or potentially optimized. To aid software in optimizing for local accesses, the SNC 110A, for instance, may contain some memory that may track and count the number of accesses to each of more than one address location or range of address locations (the granularity may be programmable). This mechanism, referred to herein as the hot page mechanism, may track local or remote accesses. For example, a software developer may use a hot page mechanism to identify hot spots in the memory that is being accessed by remote nodes and may optimize or enhance the software to move those accesses to the local node. The hot page mechanism may also be used for other forms of software optimizations.
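For illustration only, the following Python sketch shows one plausible shape of such a hot page mechanism: per-region access counters at a programmable granularity that let software locate remotely accessed hot spots. The class name, granularity value, and API are assumptions.

    # Hypothetical sketch of hot page tracking: count remote accesses per address region.

    from collections import Counter

    class HotPageTracker:
        def __init__(self, granularity_bytes=4096):      # programmable granularity (assumed value)
            self.granularity = granularity_bytes
            self.counts = Counter()

        def record_access(self, address, remote):
            if remote:                                    # track accesses from remote nodes
                self.counts[address // self.granularity] += 1

        def hottest(self, n=1):
            return self.counts.most_common(n)

    tracker = HotPageTracker()
    for addr in (0x1000, 0x1200, 0x1F00, 0x9000):
        tracker.record_access(addr, remote=True)
    print(tracker.hottest())   # [(1, 3)] -> region 0x1000-0x1FFF is the hot spot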
  • In some embodiments, [0047] SPS 140A (shown in FIG. 4) may comprise a coherent interconnect switch that connects SNC 110A, SNC 110B, SPS 140B, I/O Hub 120A, and I/O Hub 120B through the Scalability Ports (SP). In one embodiment, some features of the SPS 140A may comprise: six identical Scalability Ports with a total peak bandwidth of 38.4 GB/s; an Integrated snoop filter that may track the state of one or more cache lines in processor and IOH caches which may reduce snoop probes to remote nodes and may support an SP cache consistency protocol; and an Internal interconnect that may comprise a crossbar and network of buses for critical coherent traffic.
  • Optimizations or improvements such as the network of buses in SPS may minimize the latency. For example, a shared crossbar bypass bus structure may be incorporated into SPS to provide an independent path for SP to coherency interleave transactions and vice versa. As a result, cache look up and update operations may not be delayed as a result of, for example, data streaming. In some embodiments, the shared crossbar bypass bus structure may comprise parallel bits, a data valid qualifier, a virtual channel qualifier, and a multi-bit destination qualifier. For example, a coherency interleave or SP with data to send may assert its request-channel arbitration request or its response-channel arbitration request, according to the type of data to be sent. The unit may also transmit its data locally to bus multiplexors and may assert its valid signal. This data may be propagated onto the shared bus by the multiplexors after the arbiter selects this coherency interleave or SP. [0048]
  • Based upon the current set of requests and a least-recently-used, round-robin priority, and possibly on the receivers' ready signals, the arbiter may select one of the requesting units to own the bus in the next clock cycle. (Note: during idle conditions, one of the transmitters may always be “selected” as well). The arbiter may send control signals to the bus multiplexors, and may send a selected signal to the transmitter that has been selected. Various types of arbiters are well-known and are not further described herein to avoid obscuring other aspects of the SPS. [0049]
  • After the transmitting coherency interleave or SP may observe by its selected signal and its targeted destination's ready signal that the data may have been transmitted and absorbed, the transmitting coherency interleave or SP may deassert its valid qualifier or may proceed to send new data. [0050]
  • As shown in FIG. 6, an SP may implement the physical, link, and part of the protocol layers. The SP may comprise a point-to-point cache-consistent interface designed to build shared memory multiprocessor systems that may overcome the limitations of shared bus based architectures. The embodiment depicts four centralized SP protocol (SPPC) and snoop filter (SF) units, which are interleaved for improved throughput and ease of physical design, although they form one logical unit. All the ports and SPPC/SF interleaves may be coupled by a crossbar (X-Bar) and network of buses. In several embodiments, these buses may reduce latency on critical operations. [0051]
  • The physical layer may use pin-efficient simultaneous bi-directional signaling technology, where the same signal pins may be used to send signals in both directions in a full duplex manner. In other embodiments, signal pins may be used to implement half duplex or other signaling technology. In several embodiments, the physical layer may comprise a source synchronous interface where the transmitter may send the clock along with the data and the receiver may use the clock to sample the data. A scalability port interface may be, for example, 40 bits wide, with 32 of those bits used for transmitting data, 2 bits used for link layer control information, and 6 bits used for maintaining data integrity. The interface may operate at various rates, for example 800 million transfers/sec, which may result in a peak bandwidth of 3.2 GB/sec per port in each direction. Further, in many embodiments, the scalability port may comprise a packetized interface, where requests and responses may be multiplexed on the same physical medium and wherein each packet may contain a header to route the packet and to specify the attributes of the packet. The effective bandwidth achieved on the interface, in one embodiment, may depend upon the distribution of packets of various sizes. The SP may also be capable of delivering an effective bandwidth of 80% of the peak. [0052]
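As a quick arithmetic check of the figures quoted above (32 data bits per transfer at 800 million transfers per second, with an effective bandwidth of roughly 80% of peak):

    # Worked check of the peak and effective bandwidth figures quoted above.

    data_bits_per_transfer = 32
    transfers_per_second = 800e6
    peak_bytes_per_second = data_bits_per_transfer / 8 * transfers_per_second
    print(peak_bytes_per_second / 1e9)          # 3.2  -> GB/s per port, per direction
    print(0.80 * peak_bytes_per_second / 1e9)   # 2.56 -> effective bandwidth at 80% of peak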
  • The SP link layer may support virtual channels and may provide flow control and reliable transmission. SP may use two virtual channels to build independent request and response virtual interconnect on a single physical interconnect. In some embodiments, flow control may be done using a credit-based scheme. The unit of flow control may comprise a flit (sub-packet) that is four-transfers long on the interface. The link layer may also be responsible for detecting transmission errors and may rely on a retry scheme using a modified version of “go-back-n” sliding window protocol for recovery. [0053]
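For illustration only, the sketch below models a generic credit-based flow-control scheme of the kind described above, at flit granularity: the transmitter spends one credit per flit and stalls at zero, and credits return as the receiver drains its buffers. The buffer depth and class name are assumptions, and the retry/sliding-window machinery is omitted.

    # Hypothetical sketch of credit-based flow control at flit granularity.

    class CreditedLink:
        def __init__(self, receiver_flit_buffers=8):      # assumed receiver buffer depth
            self.credits = receiver_flit_buffers

        def try_send_flit(self, flit):
            if self.credits == 0:
                return False                               # stall: no credits available
            self.credits -= 1
            return True

        def credit_return(self, n=1):                      # receiver freed n flit buffers
            self.credits += n

    link = CreditedLink(receiver_flit_buffers=2)
    print([link.try_send_flit(f) for f in ("f0", "f1", "f2")])   # [True, True, False]
    link.credit_return()
    print(link.try_send_flit("f2"))                               # True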
  • The SP protocol layer may implement the state machines and may provide resources for functionality such as cache consistency, translation lookaside buffer (TLB) consistency, synchronization, interrupt delivery, etc. In several embodiments, the protocol layer may be designed to support both Itanium™ and Xeon™ processor families. The protocol may allow for high performance and flexible interconnect fabric by not relying on an ordered fabric for performance sensitive operations. [0054]
  • The SP consistency protocol may allow for cache lines to be in Modified, Exclusive, Shared or Invalid state (MESI) at the caching agents and may use an invalidation-based protocol. In some embodiments, the protocol may be built on the concept of sparse directory, called snoop filter, which may keep track of lines present in the caches rather than keeping track of all or many lines in memory. Such a protocol may allow for entire snoop filters to be stored on the same component as the directory state machine for high performance, which may not have been possible with a conventional directory. Separation of snoop filter from the memory agent may be allowed to facilitate the building block philosophy, for example, by allowing the use of node controllers and I/O hubs that may be designed for low cost systems. The building block philosophy may be supported efficiently through transactions that may access memory concurrently or substantially concurrently with coherency resolution. The protocol may also provide coherent transactions that may be optimized for I/O device operations. Conflict resolution on concurrent accesses to same cache line may be done in a relaxed and, in several embodiments, a distributed manner. [0055]
  • The Scalability Port consistency protocol may allow for extensions to large-scale systems through a second level distributed directory that may work in conjunction with a basic snoop filter. [0056]
  • The distributed SP protocol logic (SPPD) may perform address/request decoding to determine how packets may be routed in the [0057] SPS 140A and/or 140B. SPPD may control data transfers between ports including modified data transfers. The SPPC may comprise a programmable protocol engine that may process requests and responses and may spawn transactions. In some embodiments, SPPC may handle global ordering and may contain anti-starvation logic to guarantee fairness between nodes. The combined snoop filter tag array size may be 1 MB and may maintain the state of, for instance, approximately 200K cache lines. The combined snoop filter tag array may support up to 266M snoop filter lookup-and-update operations per second. In many embodiments, an entry may contain an address tag, a presence vector (one bit per node), the cache consistency protocol state (M/E, S, I), and ECC check bits.
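As an informal illustration (not part of the original disclosure), one way to picture the snoop filter entry described above is as a record holding an address tag, a per-node presence vector, a consistency state, and ECC bits; the field widths and names below are assumptions.

    # Hypothetical sketch of a single snoop filter entry.

    from dataclasses import dataclass

    @dataclass
    class SnoopFilterEntry:
        address_tag: int        # tag portion of the cache line address
        presence_vector: int    # one bit per node that may hold the line
        state: str              # "M/E", "S", or "I"
        ecc: int                # ECC check bits covering the entry

        def node_present(self, node_id):
            return bool(self.presence_vector & (1 << node_id))

    entry = SnoopFilterEntry(address_tag=0x3A5, presence_vector=0b0101, state="S", ecc=0)
    print(entry.node_present(0), entry.node_present(1))   # True False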
  • I/O hubs 120A and 120B may operate to bridge transactions between an ordered domain and an unordered domain by routing traffic between I/O devices and scalability ports. Returning to FIG. 4, in some embodiments the I/O hubs 120A and 120B may provide peer-to-peer communication between I/O interfaces. In particular, I/O hub 120A may comprise unordered interface 142, upbound path 144, snoop filter 146, and a hub interface 147. The hub interface 147 may comprise arbitration circuitry 170, ordering queue 171, read bypass queue 172, ownership pre-fetch circuitry 174, address logic and queue 148, read cache and logic 173, and I/O interface 150. [0058]
  • The I/O hubs 120A and 120B may, in one embodiment, comprise a central component of the I/O subsystem of a server. Such an I/O hub may comprise a prefetch engine and read caches to deliver full bandwidth on data return; two SP interfaces to connect to either the SPSs 140A and 140B or the SNCs 110A and 110B; and hub interface 147 with a peak bandwidth of 1 gigabyte per second (GB/s). [0059]
  • The I/O hubs 120A and 120B may support a building block philosophy, which may result in a flexible and configurable I/O subsystem. Many embodiments may also comprise components such as a legacy I/O controller hub (ICH), a PCI/PCI-X bridge, for example bridges 160 and 190, and a host controller adapter. Since I/O hubs 120A and 120B may interface a variety of different I/O bridges, the microarchitecture may be generically optimized for I/O traffic behavior. [0060]
  • Referring now to FIG. 7, there is shown a block diagram of the high-level micro-architecture embodiment of I/O hubs 120A and/or 120B. The I/O hub, generically shown as 100G, may have internal structures that may comprise Read Caches 110G, Write Cache and Data Buffer 120G, Cache Directory 130G, Local Request Buffer 150G and Remote Request Buffer 140G, Read Prefetch Engines 170G, and Ordering Queues 180G. Read Caches 110G may comprise, for example, a 4 KB Read Cache dedicated to a Hub Interface. In some embodiments, fully coherent read caches may allow an aggressive pre-fetching algorithm without exposure to stale data delivery. A 4 KB Read Cache may be sufficient, in many embodiments, to accommodate enough read pre-fetches to hide memory latency. Further, independent read caches may prevent the traffic characteristics of one Hub interface from interfering with the traffic characteristics of the other Hub interfaces. [0061]
  • Write Cache and [0062] Data Buffer 120G may comprise, for example, a write cache implemented in the I/O hub. In some embodiments, coherent write caching may promote combining of write data to a cache line granularity, potentially increasing the efficiency of the SP and decreasing snoop overhead on the system.
  • [0063] Cache Directory 130G may comprise a directory that may track the cache lines held in the multiple read caches 110G and the write cache 120G. The directory may also be responsible for tracking duplicate entries of shared lines.
  • [0064] Local Request Buffer 150G and Remote Request Buffer 140G may comprise buffers to track coherent transactions issued by the I/O hub (Local Request Buffer 150G) and coherent transaction issued by other components (Remote Request Buffer 140G). In some embodiments, the buffers may work together to detect access conflicts and may enforce cache consistency.
  • [0065] Read Prefetch Engines 170G may comprise mechanisms to dynamically pre-fetch memory lines on behalf of the interfacing I/O devices. In several embodiments, Read Prefetch Engines 170G may be optimized for traditional memory latencies so the I/O hub may be designed to prefetch beyond the requests issued by I/O devices for increased read bandwidth.
  • Ordering [0066] Queues 180G may take advantage of the Scalability Port's inherently unordered protocol. As a proxy for I/O devices which follow Producer-Consumer ordering rules, the I/O hub, for example, may increase or maximize performance by prefetching, pipelining, and parallelism.
  • Referring back to FIG. 4, [0067] unordered interface 142 may facilitate communication between I/O hub 120A and a scalable node controller such as 110A or 110B with circuitry for a scalability port protocol layer, a scalability port link layer, and a scalability port physical layer. In some embodiments, unordered interface 142 may comprise simultaneous bi-directional signaling. Unordered interface 142 may couple to scalability port switches 140A and 140B to transmit transactions between scalability node controllers 110A and 110B and agents 162 and 164. Transactions between unordered interface 142 and scalability node controllers 110A and 110B may transmit in no particular order or in an order based upon the availability of resources or the ability for a target to complete a transaction. The transmission order may not be based upon, for instance, a particular transaction order according to ordering rules of an I/O interface, such as a PCI bus. For example, when agent 162 may initiate a transaction to write data to a memory line, agent 162 may transmit four packets to accomplish the write. Bridge 160 may receive the four packets in order and forward the packets in order to I/O interface 150. Ordering queue 171 may maintain the order of the four packets to forward to the unordered interface 142 via the upbound path 144. Scalability port switch 140A may receive the packets from unordered interface 142 and transmit the packets to memory 115A and memory 115B.
  • [0068] Upbound path 144 may comprise a path for hub interface 147 to issue transactions to the unordered interface 142 and to snoop filter 146. For example, upbound path 144 may carry inbound coherent requests to unordered interface 142, as well as ownership requests and read cache entry invalidations from ownership pre-fetch circuitry 174 and read cache and logic 173, respectively, to snoop filter 146. In many embodiments, upbound path 144 may comprise a pending transaction buffer to store a pending transaction on the unordered interface 142 until a scalability port switch 140A or 140B may retrieve or may be available to receive the pending transaction.
  • Further, when an I/O hub such as I/O hub 120A may couple more than one transaction queue, such as ordering queue 171 and read bypass queue 172, to scalability port switches 140A and 140B, hub interface 147 may comprise arbitration circuitry 170 to grant access to upbound path 144. In many embodiments, the arbitration circuitry 170 may provide substantially equivalent access to the unordered interface 142. In other embodiments, the arbitration circuitry 170 may arbitrate between the ordering queue 171 and the read bypass queue 172 based upon a priority associated with, or an agent associated with, an enqueued transaction. [0069]
  • [0070] Snoop filter 146 may issue ownership requests on behalf of transactions in ordering queue 171, return ownership completions, monitor pending transactions on unordered interface 142, and respond to downbound snoop requests from the unordered interface 142 or from a peer hub interface. In addition, snoop filter 146 may perform conflict checks between snoop requests, ownership requests, and ownerships of memory lines in memory 115A or memory 115B. For example, a write transaction waiting at ordering queue 171 to write data to memory line one in memory 115A may reach a top of ordering queue 171. After the write transaction for memory line one may reach the top of ordering queue 171, hub interface 147 may request ownership of memory line one for the write transaction via snoop filter 146. Snoop filter 146 may perform a conflict check with the ownership request and determine that the ownership request may conflict with the ownership of memory line one by a pending write transaction on unordered interface 142. Snoop filter 146 may respond to the ownership request by transmitting an invalidation request to hub interface 147.
  • Subsequently, [0071] hub interface 147 may reissue a request for ownership of memory line one for the write transaction and snoop filter 146 may perform a conflict check and determine that no conflict exists with an ownership by the write transaction. Then, snoop filter 146 may transmit a request for ownership to scalable node controller 110A via scalability port switch 140A. In response, snoop filter 146 may receive an ownership completion for memory line one and may return the ownership completion to hub interface 147. In some embodiments, hub interface 147 may receive an ownership completion for a transaction and may modify the coherency state of the transaction to ‘exclusive’. In several of these embodiments, snoop filter 146 may maintain the coherency state of the transaction in a buffer.
  • [0072] Hub interface 147 may maintain a transaction order for transactions received via I/O interface 150 in accordance with ordering rules associated with bridge 160. Hub interface 147 may also determine the coherency state of transactions received via I/O interface 150. For example, hub interface 147 may receive a write transaction from agent 164 via bridge 160 and place the header for the write transaction in ordering queue 171. Substantially simultaneously, ownership pre-fetch circuitry 174 may request ownership of the memory line associated with the write transaction via snoop filter 146. The ownership request may be referred to as ownership pre-fetching since the write transaction may not satisfy ordering rules associated with I/O interface 150. In alternate embodiments, when the ordering queue 171 is empty and no transactions are pending on the unordered interface 142, the write transaction may bypass ordering queue 171 and transmit to upbound path 144 to transmit across unordered interface 142.
  • [0073] Snoop filter 146 may receive the request for ownership and perform a conflict check. In some instances, snoop filter 146 may determine a conflict with the ownership by the write transaction. Since the coherency state of the write transaction may be pending when received, snoop filter 146 may deny the request for ownership. After the transaction order of the write transaction may satisfy ordering rules, or in some embodiments after the write transaction reaches the top of ordering queue 171, hub interface 147 may reissue a request for ownership. In response to receiving an ownership completion for the write transaction, hub interface 147 may change the coherency state of the write transaction to ‘exclusive’ and then to ‘modified’. In some embodiments, when the transaction may be at the top of ordering queue 171 upon receipt of an ownership completion, hub interface 147 may change the coherency state of the write transaction directly to ‘modified’, making the data of the write transaction globally visible. In several embodiments, hub interface 147 may transmit the transaction header of the write transaction to snoop filter 146 to indicate the change in the coherency state to ‘modified’.
  • On the other hand, when the [0074] hub interface 147 receives the ownership completion in response to pre-fetching the ownership, hub interface 147 may change the coherency state of the write transaction to ‘exclusive’ and maintain the transaction in ‘exclusive’ state until the write transaction may satisfy the corresponding ordering rules, unless the ownership may be invalidated, or stolen. For example, the ordering rules governing transactions received via bridge 160 from agent 162 may be independent or substantially independent from ordering rules governing transactions received from agent 164. As a result, many embodiments allow a second transaction to steal or invalidate the ownership of the memory line by a first transaction to transmit to upbound path 144 when the ordering of the second transaction is independent or substantially independent from the ordering of the first transaction. Ownership stealing may prevent backup, starvation, deadlock, or stalling of the second transaction or the leaf comprising the second transaction as a result of the first transaction. In many of these embodiments, ownership may be stolen when the first transaction may reside in a different leaf from the second transaction and/or in the same leaf.
  • In the present embodiment, read [0075] bypass queue 172 may provide a substantially independent path to the unordered interface 142 for read transactions that may be independent of ordering rules associated with transactions in ordering queue 171. Read bypass queue 172 may receive read transactions from the I/O interface 150 or may receive transactions from ordering queue 171. As a result, the embodiment may take advantage of the unrelated transaction ordering between the agent 162 and agent 164 or between read and write transactions from agent 162 and/or agent 164. For example, agent 162 may request a read of memory line one of memory 115A. Address logic and queue 148 may determine that a transaction, such as a write transaction, associated with memory line one is in the ordering queue 171. Hub interface 147 may forward the read transaction to ordering queue 171 to maintain a transaction order according to an ordering rule associated with agent 162. Afterwards, snoop filter may apply backpressure to read transactions from hub interface 147 until a pending read transaction in the snoop filter 146 may be transmitted across the unordered interface or until the ordering queue 171 may be flushed. The transactions of ordering queue 171 may be processed until the read transaction from agent 162 reaches the top of ordering queue 171. While backpressure may be applied to the read transaction from snoop filter 146, the read transaction may not be forwarded to snoop filter 146. In response, hub interface 147 may forward the read transaction to the bottom of read bypass queue 172. The read transaction may be forwarded to read bypass queue 172 to allow subsequently received write transactions to continue to transmit to the unordered interface 142. In addition, by reaching the top of ordering queue 171, the transaction order of the read transaction may have satisfied the ordering rules associated with agent 162 so the read transaction may proceed in a path independent from the ordering rules associated with ordering queue 171.
  • In many embodiments, I/O caching and adaptive pre-fetching of memory lines for a read cache, such as read cache and logic 173, may be implemented in I/O hub 120A and may comprise an integrated caching and prefetch mechanism to provide high I/O throughput. Pre-fetching cache lines may hide round-trip memory read latency and may save a read request from traversing through, for example, the chipset to memory 115A and back. [0076]
  • Adaptive pre-fetch and throttling, for instance, may utilize an adaptive algorithm with two or more dynamic profiles (conservative and aggressive in one embodiment) to pre-fetch cache lines speculatively. Pre-fetching of cache lines may be initiated after the initial request for a given stream is serviced. A stream may comprise a sequence of contiguous address requests to an I/O hub 120A or 120B. Subsequent read requests from the stream that may hit the read cache (possibly from the pre-fetched data) may be sent back or responded to without incurring upstream latency. A pre-fetch engine of logic circuitry in read cache and logic 173 may have the ability to sense traffic, such as real-time traffic, and modify its pre-fetch cache request generation rate for different I/O modes, and may switch from one profile to another based on the prevailing conditions. In many embodiments, the degree of pre-fetching of cache lines may vary with the number of available streams for a given prefetch profile. For instance, if only one stream exists and the prefetch profile may be set to “aggressive,” then up to eight cache lines may be pre-fetched. If the number of streams increases to two, then each stream may be limited to a maximum of four cache lines. Pre-fetching cache lines may continue as long as the stream may still be allocated and an upper throttle limit may not have been reached. In several embodiments, this adaptive self-regulation may comprise a trade-off between pre-fetching enough data for the cache to stream and not wasting memory bandwidth excessively. [0077]
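The two data points quoted above (up to eight pre-fetched lines for a single aggressive stream, four per stream for two streams) suggest a simple per-stream limit table; the sketch below is illustrative only, and the remaining table values and the conservative profile are assumptions.

    # Hypothetical sketch of a dynamic per-stream pre-fetch limit table.

    PREFETCH_LIMIT = {
        "aggressive":   {1: 8, 2: 4, 3: 2, 4: 2},   # values beyond 1 and 2 streams are assumed
        "conservative": {1: 4, 2: 2, 3: 1, 4: 1},   # entire profile is assumed
    }

    def prefetch_limit(profile, active_streams):
        table = PREFETCH_LIMIT[profile]
        return table.get(active_streams, min(table.values()))

    print(prefetch_limit("aggressive", 1))   # 8
    print(prefetch_limit("aggressive", 2))   # 4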
  • In some embodiments, a cache such as the read cache of read cache and logic 173 may comprise a unified cache, logically and/or physically, to enhance performance across one or more streams by dynamically allocating cache space to streams of reads or writes. In such embodiments, high streaming bandwidth performance may be accomplished with a smaller cache size than conventional cache. For example, if a bus such as a PCI bus may facilitate two kilobytes (KB) of cache for streams via bridge 160 to I/O hub 120A, with up to four streams, hub interface 147 may comprise a unified cache with one or more caches or buffers comprising a total of two KB. After stream one may be initiated, read cache and logic 173 may allocate one KB of the unified cache to stream one for real and/or speculative read requests, wherein a real read request may result from an actual request received from bridge 160 and a speculative read request may be initiated by the pre-fetch cache engine of read cache and logic 173. After a stream two may become active, read cache and logic 173 may allocate 0.75 KB to cache for stream two. Substantially simultaneously, read cache and logic 173 may reduce the cache space available to stream one from one KB to 0.75 KB, leaving 0.5 KB of space for a subsequently active stream. In situations wherein stream one and stream two remain active and stream three becomes active, read cache and logic 173 may allocate 0.5 KB of cache to stream three and de-allocate 0.25 KB of cache from stream one and from stream two. Upon de-allocation of the 0.25 KB from both stream one and stream two, 0.5 KB of cache may remain available for a stream four. [0078]
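The allocation example above (1 KB for one stream, 0.75 KB each for two, 0.5 KB each for three or four, within a 2 KB unified cache) can be summarized by a small table; the sketch below is an illustration only, not the disclosed implementation.

    # Hypothetical sketch of the per-stream share of a 2 KB unified read cache.

    ALLOCATION_KB = {1: 1.0, 2: 0.75, 3: 0.5, 4: 0.5}    # per-stream share by active stream count

    def per_stream_allocation(active_streams, total_kb=2.0):
        share = ALLOCATION_KB[active_streams]
        assert share * active_streams <= total_kb         # never oversubscribe the unified cache
        return share

    for n in (1, 2, 3, 4):
        print(n, "streams ->", per_stream_allocation(n), "KB each")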
  • Depending upon streaming scenarios for I/O hub 120A, the cache sizes and stream allocations may be adjusted. For instance, some embodiments may comprise more than one leaf in I/O hub 120A and streams may be initiated on either leaf. In some of these embodiments, a unified cache may be allocated for each leaf. In other embodiments, a unified cache may be sized for caching of streams from both leaves. In still further embodiments, the unified cache may be partitioned dynamically between leaves or combined into a unified cache dynamically for two or more leaves. [0079]
  • In many embodiments, a timing mechanism such as a programmable timer may enhance the operation of pre-fetching cache by determining the number of active streams. For example, adaptive pre-fetch and throttling may allocate cache space, such as cache space of a unified read cache, based upon a number of streams concurrently or substantially concurrently requesting data, so the efficiency of the cache allocation may be based upon the accuracy of the count of streams. On a bus such as a PCI (peripheral component interconnect) bus, a stream may end without an indication to that effect being transmitted to the I/O hub 120A. Read cache and logic 173 may continue to allocate cache to the stream and may also continue to pre-fetch cache lines for the stream. After the stream may terminate, the speculative cache line pre-fetch requests may unnecessarily use bandwidth upstream in addition to the memory. As a result, many embodiments may comprise the timing mechanism to terminate a stream based upon inactivity. [0080]
  • A timing mechanism may measure the time between a first request and a second request of the stream, and after the time exceeds a certain threshold, the stream may be considered to have terminated. The cost of maintaining the allocation and consuming bandwidth may be balanced against the increased performance resulting from cache allocation and speculative pre-fetching when determining the time selected for such a timing mechanism. The time selection may also be based upon the latency, average or nominal, for receiving a completion from an upbound read request. [0081]
  • In still further embodiments, ordering [0082] queue 171 and read bypass queue 172 may comprise memory interleaving and reordering to increase memory throughput. Coherent I/O caches and pre-fetching may hide I/O read latency, even in large multi-node configurations, and the snoop filter may reduce overall latency and may eliminate unnecessary snoop traffic.
  • Memory request re-ordering may reduce the number of dead cycles on the memory data bus induced by the DDR/DRAM protocol. One of the largest dead cycle penalties may be caused by a page replace, also called a page miss. For example, a page replace may happen after two consecutive requests go to different pages on the same DIMM. In this scenario, the second request may be delayed for the time required to close the previously activated page before activating the page for the next request. With some DIMMs, this duration may be 70 ns. In addition, there may be turnaround penalties of one cycle (e.g., 10 ns) when switching from read to write or vice-versa, or when read data comes from different DIMMs on the same DDR channel. [0083]
  • In embodiments wherein memory requests may be placed in ordering [0084] queue 171, such as a FIFO queue, and processed in order, the protocol-induced inefficiencies may significantly reduce sustained bandwidth for the random streams of requests typical of server workloads. However, when requests may be re-ordered to avoid conflicts, the sustained bandwidth and the average read latency may be improved.
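A conflict-avoiding re-order policy of the kind described here can be sketched in a few lines. The cost figures (70 ns page replace, 10 ns read/write turnaround) follow the preceding paragraph; the request representation and the greedy selection policy are illustrative assumptions, not the arbitration actually used in the embodiment.

```python
from collections import namedtuple

Request = namedtuple("Request", "dimm page is_write")

def dead_cycle_penalty(prev, nxt):
    """Approximate dead time (ns) of issuing nxt right after prev."""
    if prev is None:
        return 0
    penalty = 0
    if prev.dimm == nxt.dimm and prev.page != nxt.page:
        penalty += 70                     # page replace on the same DIMM
    if prev.is_write != nxt.is_write:
        penalty += 10                     # read/write turnaround
    return penalty

def pick_next(queue, prev):
    """Pick the pending request with the least penalty, preferring older requests on ties."""
    best = min(range(len(queue)),
               key=lambda i: (dead_cycle_penalty(prev, queue[i]), i))
    return queue.pop(best)
```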
  • Performance trade-offs for the SNC and memory subsystem may be evaluated by using a detailed micro-architecture simulation model. Simulating many different queue structures, queue assignments, re-ordering and other arbitration policies, and workloads may also facilitate selection of a queue structure and policies. [0085]
  • [0086] Ownership pre-fetch circuitry 174 may pre-fetch ownership of memory contents associated with a memory line after a transaction is received by I/O interface 150, and may prevent ownership from being pre-fetched in response to receiving a signal, or not receiving a signal, from address logic and queue 148. For instance, hub interface 147 may receive two write transactions from agent 162 to write data to the same memory line(s). After the first write transaction is received at I/O interface 150, ownership pre-fetch circuitry 174 may initiate a request for ownership of the memory line(s) associated with the first write transaction. Subsequently, I/O interface 150 may receive the second write transaction. Ownership pre-fetch circuitry 174 may receive a signal, or may not receive a signal in some embodiments, to indicate that ownership of the memory line(s) associated with the second write transaction may not be pre-fetched for the second transaction.
  • Address logic and [0087] queue 148 may maintain a list of pending transactions in hub interface 147 and/or I/O hub 120A, depending upon the embodiment, and may compare an address of an upbound transaction to the list to determine whether ownership may be pre-fetched for the transaction and/or whether the upbound transaction may be subject to ordering rules independent of a transaction in the ordering queue 171. In some embodiments, read transactions may comprise more than one address that may subject the read transaction to more than one ordering rule or set of ordering rules. For example, agent 162 may initiate a first write transaction to write to memory line one, a second write transaction to write to memory line one, and a first read transaction to read from memory line one. Then, agent 164 may initiate a second read transaction to read from memory line one. The I/O interface 150 may receive the first write transaction, and address logic and queue 148 may determine that no address in an address queue of the address logic and queue 148 matches memory line one and may transmit a signal to ownership pre-fetch circuitry 174 to pre-fetch ownership of memory line one for the first write transaction.
  • In response to receiving the second write transaction, address logic and [0088] queue 148 may determine that the address is owned by the first write transaction, which is ahead of the second write transaction with regard to transaction order, and may transmit a signal to ownership pre-fetch circuitry 174 to indicate that ownership may not or should not be pre-fetched for the second write transaction. The I/O interface 150 may receive the first read transaction, and address logic and queue 148 may determine that the first read may follow the first and second write transactions in a transaction order since agent 162 also initiated the first read transaction. Hub interface 147 may forward the first read transaction to the bottom of ordering queue 171. Then, I/O interface 150 may receive the second read transaction. The second read transaction also performs an action on memory line one but, in the present embodiment, address logic and queue 148 maintains an address associated with pending transactions that comprises the address of the source agent or a hub ID representing one or more source agents, such as agent 162 for the first and second write transactions and the first read transaction. Since the hub ID of the second read transaction may be different from the hub IDs associated with the first and second write transactions and the first read transaction, the second read transaction may advance toward the unordered interface 142 along an independent path, e.g. the read bypass queue 172, bypassing the first and second write transactions and the first read transaction. In other situations, however, read cache and logic 173 may attach cache line invalidation data to the second read transaction, and in response to a match between the address associated with the cache line invalidation data and an address of a pending transaction, such as memory line one, the second read transaction may be forwarded to the bottom of the ordering queue 171 rather than the bottom of the read bypass queue 172. In alternative embodiments, address logic and queue 148 may not maintain an address or ID associated with the source agent, so determinations for ownership pre-fetching and/or bypassing may be made based upon the memory line(s) associated with a transaction.
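The routing decision walked through above (bypass path for the second read, ordering queue when an attached invalidation address matches a pending transaction) can be summarized in a short sketch. Everything below is an illustrative assumption: the transaction fields, the conflict test, and the queue names merely mirror the example and are not the actual hub interface logic.

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class Txn:
    hub_id: int
    memory_line: int
    is_read: bool
    invalidation_line: Optional[int] = None   # piggy-backed cache line invalidation, if any

@dataclass
class HubInterfaceQueues:
    ordering_queue: List[Txn] = field(default_factory=list)
    read_bypass_queue: List[Txn] = field(default_factory=list)

    def enqueue(self, txn: Txn):
        for pending in self.ordering_queue:
            same_source_line = (pending.hub_id == txn.hub_id
                                and pending.memory_line == txn.memory_line)
            invalidation_hit = (txn.invalidation_line is not None
                                and pending.memory_line == txn.invalidation_line)
            if same_source_line or invalidation_hit:
                self.ordering_queue.append(txn)        # must respect transaction order
                return
        if txn.is_read:
            self.read_bypass_queue.append(txn)         # independent path upbound
        else:
            self.ordering_queue.append(txn)
```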
  • Read cache and [0089] logic 173 may review a transaction after the transaction is received by I/O interface 150. In some embodiments, read cache and logic 173 may recognize a read transaction for a memory line, may determine whether the read cache and logic 173 stores a valid cache line comprising a copy of the memory line, and may respond to the read transaction after determining that read cache and logic 173 stores a valid cache line comprising a copy of the memory line. In other situations, read cache and logic 173 may not comprise the valid cache line and, in many embodiments, read cache and logic 173 may then attach cache line invalidation data to the read transaction to clear space to store the data received in response to the read transaction.
  • The cache line invalidation data may be forwarded to the snoop [0090] filter 146 to maintain synchronization between the read cache coherency states and the coherency states stored in the snoop filter 146. The cache line invalidation data may comprise or be associated with an entry in the cache of read cache and logic 173 and the address of the memory line associated with the entry. In many embodiments, the cache line invalidation data may be designed to instruct the snoop filter to invalidate an association between an address in the snoop filter 146 and an entry in the read cache. For example, read cache and logic 173 may store a cached version of memory line one and I/O interface 150 may receive a read transaction for memory line two. When read cache and logic 173 does not comprise a copy of the latest version of memory line two, read cache and logic 173 may clear room in the cache for memory line two. In several embodiments, read cache and logic 173 may invalidate the oldest and/or least used data in the cache, such as memory line one, to make room for a copy of memory line two. In many of these embodiments, read cache and logic 173 may insert an invalidation request for the copy of memory line one into the header for the read transaction of memory line two. As a result, the data of the read completion for memory line two may be stored over the entry for memory line one. Snoop filter 146 may receive the invalidation request when the read transaction reaches the snoop filter 146 and may return a copy of the data from the read completion to read cache and logic 173. In some embodiments, read cache and logic 173 may further store data of a write transaction, or other memory lines near the memory line subject to the read transaction, into the cache in anticipation of a read transaction for the same memory line(s).
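Piggy-backing the invalidation of a victim entry on the outgoing read header, as described above, might look roughly like the following. The header layout, the entry bookkeeping, and the eviction choice (oldest entry) are assumptions for illustration; only the overall idea of carrying the invalidation with the read request comes from the text.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReadHeader:
    target_line: int                        # memory line being read
    invalidate_line: Optional[int] = None   # victim line the snoop filter should drop

class ReadCacheSketch:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.lines = {}                     # memory line -> cached data (insertion-ordered)

    def handle_read(self, line: int):
        if line in self.lines:
            return None                     # cache hit: respond locally, no upstream request
        header = ReadHeader(target_line=line)
        if len(self.lines) >= self.capacity:
            victim = next(iter(self.lines)) # oldest entry (assumed replacement policy)
            del self.lines[victim]
            header.invalidate_line = victim # carried in the read header to the snoop filter
        self.lines[line] = None             # completion data will be stored over this entry
        return header
```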
  • In the present embodiment, bridges [0091] 160 and 190 couple one or more agents 162, 164, 192, and 194 to the I/O hubs 120A and 120B from an ordered domain such as a peripheral component interconnect (PCI) bus, a universal serial bus (USB), or an InfiniBand channel. The agents 162, 164, 192, and 194 may transact upbound or peer-to-peer via I/O hubs 120A and 120B. In many of these embodiments, agents 162, 164, 192, and 194 may transact with any processor and any of processors 100A-D may transact with any agent.
  • Redundancy may be provided in the architecture, enabling fast reset and reboot in a degraded mode in the event of a component or interconnect failure. For example, if an SP interface fails, the system is reset and reconfigured to use only one SPS switch. In a degraded mode, system performance may be impacted. [0092]
  • The SPS may be designed to support partitioning of the system into, for example, two domains. A domain may be a “system within a system”, that is, a domain may have its own instance of the operating system. A domain may support independent reset, independent error status and signaling, etc. Any two or more ports may be allocated to a domain (both an SNC and I/O hub may be present in a domain). [0093]
  • In many embodiments, partitioning may be accomplished by configuring the SPS (via firmware setup, using a remote management console, or the like) during system initialization. Once the system is partitioned, processor/memory nodes or I/O nodes may be moved from one partition to the other using the node hot plug capabilities. An example of the user benefits of domain partitioning combined with node hot plug is depicted in the table in FIG. 8, in which RAS is an acronym for Reliability/Availability/Serviceability. [0094]
  • Referring now to FIG. 9, there is shown an embodiment of an apparatus of an I/O hub to maintain ordering for transactions between an ordered domain, I/O [0095] interface 290, and an unordered domain, unordered interface 207. The embodiment may comprise unordered interface 207, downbound snoop path 200, upbound snoop path 205, snoop filter 210, coherency interface 230, hub interface 280, and upbound path 220. The downbound snoop path 200 may comprise circuitry to transmit a snoop request from the unordered interface 207 down to snoop filter 210. The upbound snoop path 205 may provide a path between snoop filter 210 and a controller on the other side of the unordered interface 207 to facilitate snoop requests by snoop filter 210 and/or I/O devices coupled with I/O interface 290. In some embodiments, upbound snoop path 205 may facilitate cache coherency requests. For example, a processor in the unordered domain may comprise cache, and snoop filter 210 may request invalidation of a cache line after hub interface 280 receives a write transaction for memory associated with that cache line.
  • [0096] Snoop filter 210 may comprise conflict circuitry and a buffer. Conflict circuitry may determine conflicts between downbound snoop requests, inbound read transactions, inbound write transactions, and upbound transactions. Further, conflict circuitry may couple with the buffer to store the coherency states and associate the coherency states with entries in the upbound ordering first-in, first-out (FIFO) queue 240.
  • [0097] Coherency interface 230 may relay internal coherency completion and invalidation requests from snoop filter 210 to hub interface 280. These coherency requests may be generated by snoop filter 210 and may be the result of an ownership completion, a downbound snoop request, or an inbound coherent transaction. For example, after snoop filter 210 receives an ownership completion across unordered interface 207, snoop filter 210 may forward the completion across coherency interface 230 to the hub interface 280. The ownership completion may be addressed to the entry in the upbound ordering FIFO queue 240 that has a write transaction header associated with the corresponding ownership request.
  • [0098] Hub interface 280 may receive inbound transactions, such as upbound write and read transactions, and maintain ordering of the upbound transactions in accordance with ordering rules, such as PCI ordering rules and rules associated with coherency and the PCI producer consumer model. Hub interface 280 may comprise arbitration circuitry 222, transaction queues such as upbound ordering FIFO queue 240 and read bypass FIFO queue 250, ownership pre-fetch circuitry 260, address logic 270, address queue 275, read cache and logic 285, and I/O interface 290. Arbitration circuitry 222 may arbitrate access to the upbound path 220 between transaction queues, upbound ordering FIFO queue 240 and read bypass FIFO queue 250. In some embodiments, arbitration circuitry 222 may also arbitrate access between the transaction queues and ownership pre-fetch circuitry 260 to facilitate routing of coherency requests and responses from ownership pre-fetch circuitry 260 to snoop filter 210. For example, arbitration circuitry 222 may arbitrate substantially equivalent access between upbound ordering FIFO queue 240 and read bypass FIFO queue 250 for transmission of transactions from a transaction queue upbound through the upbound path 220 to unordered interface 207.
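One way to picture "substantially equivalent access" between the two transaction queues is a simple alternating grant, sketched below. The round-robin policy and the class and method names are assumptions for illustration; the embodiment does not specify the arbitration algorithm used by arbitration circuitry 222.

```python
from collections import deque

class UpboundArbiter:
    """Alternates grants between the upbound ordering FIFO and the read bypass FIFO."""
    def __init__(self):
        self.ordering_fifo = deque()
        self.bypass_fifo = deque()
        self._prefer_ordering = True

    def grant(self):
        """Return the next transaction allowed onto the upbound path, or None."""
        first, second = ((self.ordering_fifo, self.bypass_fifo)
                         if self._prefer_ordering
                         else (self.bypass_fifo, self.ordering_fifo))
        for queue in (first, second):
            if queue:
                # Prefer the other queue next time to keep access roughly equivalent.
                self._prefer_ordering = queue is self.bypass_fifo
                return queue.popleft()
        return None
```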
  • [0099] Hub interface 280 may comprise one or more transaction queues such as upbound ordering FIFO queue 240 to maintain a transaction order for upbound transactions according to the ordering rules and to store the coherency state and source ID for each upbound transaction. The source ID may associate an agent, or I/O device, with a transaction. Further, upbound ordering FIFO queue 240 may maintain an ordering for transactions received from the same source agent, or same source ID and/or hub ID. For example, upbound ordering FIFO queue 240 may receive transactions from agent number one and transactions from agent number two. The transaction order(s) maintained for agent number one and agent number two may be independent unless the transactions are associated with the same memory line. As a result, transactions from agent number one may satisfy their corresponding ordering rules and be transmitted to the unordered interface 207 without regard to transactions from agent number two, while transactions from agent number two may remain in upbound ordering FIFO queue 240. In some embodiments, an upbound ordering FIFO queue, such as upbound ordering FIFO queue 240, may be dedicated for a particular hub ID or source ID.
  • Read [0100] bypass FIFO queue 250 may facilitate progress of read transactions, wherein a read transaction may be subject to ordering rules independent of or substantially independent of ordering rules associated with transactions in upbound ordering FIFO queue 240. Read bypass FIFO queue 250 may receive read transactions from both the I/O interface 290 and the upbound ordering FIFO queue 240. For instance, I/O interface 290 may receive a first read transaction that may be associated with an address that may not have a matching entry in address queue 275. As a result, the read transaction may be forwarded to the bottom of the read bypass FIFO queue 250. In alternate embodiments, hub interface 280 may comprise more than one read bypass FIFO queue to adjust access to upbound path 220 between targets of transactions or transactions from different sources.
  • An advantage of embodiments that may comprise transaction bypass circuitry, such as circuitry comprising read [0101] bypass FIFO queue 250, may be that transactions may be processed in less time than the nominal snoop latency of the system. For example, when a read transaction may bypass a write transaction for the same memory line(s), the latency of the read transaction may not be penalized by the latency of the write transaction. Further, in embodiments that comprise ownership pre-fetch circuitry, such as ownership pre-fetch circuitry 260, the latency of a write transaction may not be limited to the nominal snoop latency of the system, so the latency of the write transaction may decrease to the latency for the embodiment to process the write transaction.
  • [0102] Ownership pre-fetch circuitry 260 may pre-fetch ownership of a memory line for a transaction received by I/O interface 290 to avoid some of the latency involved with requesting ownership of the memory line after the transaction satisfies its corresponding ordering rules. A determination whether to pre-fetch ownership may be based upon whether ownership of the memory line may reside with a pending transaction in upbound ordering FIFO queue 240. For instance, I/O interface 290 may receive a write transaction to write data to memory line one. Address logic 270 may verify that no entry in the upbound ordering FIFO queue 240 may be associated with memory line one. In response, ownership pre-fetch circuitry 260 may request ownership of memory line one via snoop filter 210. After address logic 270 determines that an entry in the upbound ordering FIFO queue 240 is associated with memory line one, the write transaction may be placed into the bottom of upbound ordering FIFO queue 240 and ownership of memory line one by the write transaction may not be requested again until the write transaction satisfies associated ordering rules or, in some embodiments, after the write transaction reaches or nears the top of upbound ordering FIFO queue 240.
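The decision rule in this paragraph reduces to a short check against the upbound ordering FIFO. The sketch below is an assumption-laden illustration: the entry fields and the request_ownership callback are hypothetical names standing in for the snoop-filter request path.

```python
def maybe_prefetch_ownership(write_line, upbound_ordering_fifo, request_ownership):
    """Pre-fetch ownership unless a pending entry already targets the same memory line."""
    for pending in upbound_ordering_fifo:
        if pending.memory_line == write_line:
            # Ownership stays with the earlier transaction; this write re-requests
            # ownership only when it nears the top of the ordering FIFO.
            return False
    request_ownership(write_line)      # hypothetical snoop-filter ownership request
    return True
```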
  • [0103] Address logic 270 may maintain address queue 275 comprising addresses associated with pending transactions in hub interface 280 and may compare an address of an upbound transaction against addresses in the queue to determine whether ownership may be pre-fetched for the upbound transaction. In embodiments wherein read cache and logic 285 piggy-backs or attaches cache line invalidation data to read transactions to make a cache line available for a new copy of a memory line, read transactions may comprise more than one address so address logic 270 may compare more than one address associated with a read transaction against addresses stored in a queue to determine whether the read transaction should be forwarded to the read bypass FIFO queue 250 or to the upbound ordering FIFO queue 240.
  • Further, [0104] address queue 275 may comprise memory, or a queue, to store an invalidation address of cache line invalidation data. Address logic 270 and address queue 275 may maintain a list of one or more invalidation addresses to prevent a read transaction that reads a memory line(s) from bypassing a transaction with cache line invalidation data, wherein the cache line invalidation data is associated with the same memory line(s). Preventing a read transaction from bypassing the transaction with cache line invalidation data may enhance synchronization between snoop filter 210 and the cache of read cache and logic 285.
  • In many embodiments, [0105] hub interface 280 compares a read transaction to the list of invalidation addresses in address queue 275 before forwarding the read transaction to the snoop filter 210. In some embodiments, the read transaction may be held in a transaction queue, such as upbound ordering FIFO queue 240 or read bypass FIFO queue 250, until the cache line invalidation data reaches snoop filter 210. In several embodiments, the logic involved with checking invalidation addresses may be simplified by placing a read transaction with an address matching an address in a queue, such as a FIFO queue, of address queue 275 into the bottom of upbound ordering FIFO queue 240. The read transaction may be placed into the bottom of read bypass FIFO queue 250 when the address does not match an invalidation address in address queue 275 and/or be allowed to bypass upbound ordering FIFO queue 240 when read bypass FIFO queue 250 is empty and the address does not match an invalidation address. In alternative embodiments, the snoop filter 210 may compare the read transaction against the list of addresses associated with cache line invalidation data pending in the transaction queue(s) and prevent the read transaction from being completed until the corresponding cache line invalidation data reaches snoop filter 210.
  • Read cache and [0106] logic 285 may snoop or monitor transactions as received by I/O interface 290. In some embodiments, read cache and logic 285 may comprise a queue to retrieve read transactions. Read cache and logic 285 may recognize a read transaction for a memory line and may determine when a copy of the memory line associated with the read transaction is stored in read cache and logic 285. In response to a determination that a read transaction is associated with a memory line that is not stored in the read cache, read cache and logic 285 may attach a cache line invalidation, or cache line invalidation data, to the read transaction. The cache line invalidation may inform the snoop filter 210 of the invalidated cache line after the header for the read transaction is received by snoop filter 210, and snoop filter 210 may modify a corresponding entry in a buffer of the snoop filter 210 to maintain cache coherency. Read cache and logic 285 may attach additional cache line invalidations to further read transactions to make room for copies of memory lines near the memory line associated with the read transaction.
  • In some embodiments, read cache and [0107] logic 285 comprises a stream monitor 289 to determine stream activity; cache logic circuitry coupled with said stream monitor to determine a real and speculative pre-fetch cache line schedule based upon the stream activity and to generate pre-fetch requests; and cache coupled with said cache logic circuitry to store pre-fetched cache lines in response to the pre-fetch requests. In several embodiments read cache and logic 285 comprises a stream monitor 289 to determine stream activity; a scheduler 287 coupled with the stream monitor 289 to determine a real and speculative pre-fetch cache line schedule based upon the stream activity; a pre-fetch engine 286 coupled with said scheduler 287 to generate pre-fetch requests; and cache coupled with said scheduler 287 to allocate cache to store pre-fetched cache lines in response to the pre-fetch requests.
  • The [0108] prefetch engine 286 may be responsible for handling read requests and sending data to a peripheral I/O device such as a NIC card, storage controller, or a PCI-PCI bridge via I/O interface 290. The goal of the prefetch engine 286 may be to enhance or optimize high streaming data transfer, as well as to handle simultaneous concurrent I/O streams of varying demanded bandwidth, in some embodiments. DMA protocols, for example, may comprise an application wherein large chunks of memory may be accessed by the I/O device to complete an operation (e.g., a SCSI RAID controller initiates a disk write which translates into PCI reads).
  • [0109] Scheduler 287 may generate prefetch requests through a dynamic lookup table (LUT) 288 based on the number of available, active, or perceived streams. In addition, scheduler 287 may have the ability to sense real time traffic with a real time traffic mechanism and may modify the pre-fetch request generation rate on a cache line granularity for different I/O modes (e.g., PCI/PCI-X). In still further embodiments, scheduler 287 may comprise a built-in adaptive throttling mechanism to prevent or substantially prevent memory subsystem overload and yet may also provide a requested I/O bandwidth.
  • [0110] Hub interface 280 may implement an integrated caching and prefetch mechanism to provide streaming data to high performance applications which occur, for example, in web servers, database processing, data mining and retrieval, network servers, and file servers. The read cache of read cache and logic 285 may maintain coherency with the rest of the system and may eliminate the overhead of implementing invalidation schemes which are less conducive to I/O streaming. Pre-fetching cache lines may hide round trip read latency and may save every read request from traversing the entire chipset to memory and back. The spatial locality of read requests and contiguous address space (such as Memory Read Multiple in the PCI bus protocol) lend themselves very well to pre-fetching cache lines. This may be important to server applications where a large amount of data may transfer at high bandwidth. For example, a SCSI RAID controller may initiate a 4 KB DMA transfer to perform a disk write operation, which translates into inbound reads.
  • Referring now to FIG. 10, there is shown a Hub Interface Cluster in [0111] Hub Interface 280. FIG. 10 illustrates the basic relationships of the prefetch engine 286, read cache 285A, inbound queue 284A and related components in one of the Hub interface clusters. For purposes of better scalability, and reducing unrelated traffic interaction between streams, the I/O Hub may implement distributed read caches, one per Hub interface, such as hub interface 280, although some embodiments may comprise a unified cache. A stream may comprise a sequence of requests, such as read or write requests, from an I/O bridge starting with an initial address and request length and further continued by requests with contiguous addresses in logical order. Note that requests from multiple streams may arrive at the I/O Hub or hub interface 280 in an interleaved fashion. Read requests issued through the I/O Interface 290 by an external I/O bridge may be serviced by the prefetch engine 286 via I/O interface 290. The inbound transaction queue (ITQ) 284A may accept transactions targeted for main memory and peer I/O bridges. The ITQ 284A may accept transactions originating from the I/O Interface 290 and may forward the transaction to the internal interconnect.
  • The Hub Interface cluster may break up coherent read requests into multiple cache line requests and may send them through the Inbound Request Buffer (IRB) [0112] queues 201A to the internal interconnect and through the Scalability Port to the memory subsystem. The Hub ID (encoded in the I/O Interface 290 request packet) may indicate which of the two IRBs to send the request to, using the LSB (least significant bit, e.g. “0” or “1”) to specify IRB0 or IRB1. Transactions may progress through the ITQ in FIFO order unless the ordering rules prevent their issuance.
  • For example, while the Hub Interface master may request a cache-line-aligned 256 bytes of data, the Hub Interface cluster may issue two 128-byte requests (e.g., four 64-byte reads) to the internal interconnect. For an unaligned request, the Hub Interface cluster may issue three 128-byte requests (e.g., six 64-byte reads). A read completion structure may be assigned for each cache line request that was requested from the I/O [0113] interface 290.
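The aligned/unaligned split above is easy to verify with a small worked example. The helper below is purely illustrative; the 128-byte line size is taken from the example, and the function name is an assumption.

```python
LINE = 128

def split_into_line_requests(start_addr: int, length: int):
    """Return the 128-byte line requests covering [start_addr, start_addr + length)."""
    first = (start_addr // LINE) * LINE
    last = ((start_addr + length - 1) // LINE) * LINE
    return [(addr, LINE) for addr in range(first, last + LINE, LINE)]

# A cache-line-aligned 256-byte request maps to two 128-byte requests...
assert len(split_into_line_requests(0x1000, 256)) == 2
# ...while an unaligned 256-byte request straddles three 128-byte lines.
assert len(split_into_line_requests(0x1040, 256)) == 3
```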
  • The read cache module for each I/O [0114] Interface 290 may comprise a fully associative memory of 4 KB that may store addresses and coherent information. This central read cache may be referenced by a number of read streams issued by I/O bridges for that I/O interface 290. A stream may initially be assigned based on the incoming request. After assignment, the Hub Interface cluster may send cache line read requests to the memory controller. After a read completion is returned on the I/O Interface 290, that completion structure may be available for further read requests. The Hub Interface cluster may not wait for enough completion structures for the entire subsequent request before issuing the next cache line read request. For example, when there is one completion structure available and the I/O Hub gets an inbound read request for 256 bytes, the corresponding hub interface 280 may issue the request for the first 128 bytes of the 256-byte request. The second 128-byte request may wait until another completion structure is available. After all the completion structures are pending completion, the ITQ 284A may buffer subsequent inbound read requests (writes may proceed independent of the completion structures' status). After all the ITQ 284A entries are full, the Hub Interface cluster may exert backward pressure and may issue retries to future inbound I/O Interface 290 requests until at least one slot is available in the ITQ 284A.
  • After the read data has returned (perhaps multiple lines) from memory, the lines may be installed in the [0115] read cache 285A with coherence information, and the lines may be sequenced in the read completion unit 283A and may be sent to the I/O Interface 290. Status and book-keeping information for a stream may be stored in a “read_cache_stream” structure, which may comprise a record of the current requested address, length, time last accessed, etc. A timer 280A may be associated with each stream to indicate when the stream becomes active, inactive, and/or may be perceived as active or inactive. If no subsequent requests are received for that stream before the timer expires, then the stream may be inactive or perceived as inactive. To provide for long read bursts, pre-fetching of cache lines may be initiated after the initial real request is sent inbound (e.g., an I/O bridge may send a read request for 256B starting at address X). Subsequent read requests from the I/O Interface 290 that hit the cache for the given stream may be sent to the I/O Interface 290 directly from the cache without incurring upstream latency. A pre-fetch cache depth for sustaining the pipeline may be calculated as a function of the round trip delay for the read data and the time to transfer the data across the I/O Interface 290 (e.g., to tolerate a memory latency of 960 ns from the I/O Hub and burst at 1066 MB/s on the I/O interface 290, approximately 960*1.066 ≈ 1024 bytes, or eight 128B cache lines, may be “in flight”).
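The pre-fetch depth calculation at the end of this paragraph can be worked through directly. The constants below are the example values from the text; the function name is an assumption.

```python
import math

def prefetch_depth(latency_ns: float, bandwidth_bytes_per_ns: float, line_bytes: int = 128) -> int:
    """Cache lines needed in flight to keep the I/O interface pipeline full."""
    bytes_in_flight = latency_ns * bandwidth_bytes_per_ns
    return math.ceil(bytes_in_flight / line_bytes)

# 960 ns memory latency at 1066 MB/s (~1.066 bytes/ns) -> ~1024 bytes -> 8 lines in flight.
print(prefetch_depth(960, 1.066))   # 8
```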
  • The read [0116] cache prefetch engine 286 may dynamically allocate buffer space in the read cache 285A based on incoming streams and may provide a seamless cache line replacement method for continuous streaming and buffer re-use; generate prefetch requests on a cache line granularity through a dynamic lookup table (LUT) 288 based on the number of available concurrent I/O streams; sense real time traffic and modify a prefetch cache request generation rate for different I/O modes (e.g. PCI/PCI-X); and throttle upstream requests to prevent memory subsystem overload.
  • Prefetch modes, such as the modes shown in FIG. 11A, may be based upon the type of bus or agent that has an active stream. An incoming read stream from the I/O [0117] Interface 290 may be considered as having two phases: a Real Request Phase and a Speculative Request Phase. In the real request phase, a read request of fixed length may be made by the I/O interface and the I/O Hub may attempt to deliver the requested data as quickly as possible. The data may hit the read cache or it might miss the read cache, resulting in a fetch from main memory. When a stream enters the real request phase, it may be considered a higher priority than streams in the Speculative Request Phase. The stream may enter the Speculative Request Phase after all requested data has been fetched by the I/O Hub. At this point, the stream may follow an adaptive prefetch mechanism. The assumption may be that if the master requested data at address X, then the master may subsequently request data at address X+1. Pre-fetching may continue as long as the stream is still allocated, e.g. active or perceived active, and, in some embodiments, the throttle limit has not been reached. Based on the number of active streams, the Hub Interface cluster may attempt to prefetch n number of lines ahead of the Real request. Pre-fetching may be disabled when excessive read streams are generated at I/O Interface 290. Speculative requests may be issued, for example, after the real request is greater than 128 bytes.
  • The adaptive prefetch mechanism may use a dynamic LUT to prefetch cache lines in the speculative phase. Two prefetch profiles (conservative and aggressive) may be used to index the appropriate look-up table values as shown in FIG. 11B. Profile selection may be a function of the number of PCI/PCI-X buses attached to the I/O bridge and the nature of the devices (e.g. PCI vs. PCI-X). At a given point in time, the prefetch engine might be utilizing the conservative profile. As soon as any of the “aggressive” conditions are detected, the [0118] Hub Interface 280 may change the pre-fetching to adapt to the change in bandwidth requirements. Likewise, after an aggressive condition no longer exists, the pre-fetch engine may switch back to the “conservative” pre-fetch profile.
  • Once a prefetch profile is chosen, the number of active streams may determine the appropriate LUT entry and control the number of lines to prefetch ahead of the real request for that stream as shown in FIG. 11B. For instance, if only 1 stream exists and the prefetch profile is set to “aggressive”, then up to 8 cache lines may be prefetched. If the number of streams increases to 2, then each stream may be limited to a maximum of 4 cache lines. Thus the degree of pre-fetching may vary with the number of available streams. By having the ability to detect when a stream becomes active or inactive through the timer mechanism, such as [0119] timer 280A in FIG. 10, the number of streams may be automatically computed in real time and pre-fetching may be dynamically controlled. This adaptive self-regulation may comprise a trade-off between pre-fetching enough data for the Hub Interface master and not overshooting the memory, thereby impacting the rest of the system. As a further governor, the I/O Hub may maintain an upper limit of eight cache lines that may be pending delivery to a particular Hub Interface 280 and may minimize the memory overshoot.
  • In many embodiments, the number of cache lines “in flight” or “pending” may be calculated on a 128-byte quantity. For example, the I/O Hub may issue a pair of 64-byte requests for a real request. This pair may be considered as one line “in flight” for purposes of the prefetch algorithm. For example, after a real request is issued to the Hub Interface cluster and it misses the [0120] read cache 285A, the number of real requests “in flight” may be compared against 8. The term “in flight” may refer to reads that have been issued to the Scalability Port but have not yet returned to the Hub Interface cluster (read cache 285A). For instance, if there are already eight lines in flight, then the read request may not issue until at least one line returns to the Hub Interface cluster. In many of these embodiments, a new request may not be issued until another completion structure is available. On the other hand, if the number of lines “in flight” is less than the maximum allowable, then the real request may be issued.
  • In several embodiments, the mechanism for issuing speculative requests may determine the Prefetch profile using the table in FIG. 11B, or the like. Before any speculative requests may issue to the Scalability Port (after a read cache miss), the sum of real requests “in flight” and speculative requests “in flight” may be compared against the maximum cache lines in flight (8). The term “in flight” may refer to reads that have been issued to the Scalability Port but have not yet returned to the Hub Interface cluster (read [0121] cache 285A). For example, if there are already eight lines in flight, then the speculative read request may not issue until at least one line returns to the Hub Interface cluster. If the number of lines “in flight” is less than the maximum allowable, then prefetch parameters may be determined from the table in FIG. 11B. The number of active streams or streams perceived as active and the prefetch profile may determine the upper limit of speculative requests for a particular stream. For example, if only one stream is active and the profile is Aggressive, then the Hub Interface cluster may check if the total number of “pending” cache lines is less than 8. The term “pending” may refer to lines that have been issued for a read stream (real or speculative) but have not yet been delivered to the Hub Interface agent. If so, the Hub Interface cluster may issue up to 8 speculative read requests (the Aggressive profile may not allow more than 8 total requests in flight). If not, a speculative read is not issued until the number of pending lines drops below the value noted in the table (8 for this example).
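The gating described in the last two paragraphs can be condensed into two checks: real requests are compared against the eight-line ceiling alone, while speculative requests count real plus speculative lines in flight and also respect the per-stream LUT limit. The function names and parameter set below are illustrative assumptions.

```python
MAX_LINES_IN_FLIGHT = 8

def can_issue_real(real_in_flight: int) -> bool:
    """A real request may issue only while fewer than eight lines are in flight."""
    return real_in_flight < MAX_LINES_IN_FLIGHT

def can_issue_speculative(real_in_flight: int, spec_in_flight: int,
                          pending_for_stream: int, per_stream_limit: int) -> bool:
    """Speculative requests respect both the global ceiling and the per-stream LUT limit."""
    if real_in_flight + spec_in_flight >= MAX_LINES_IN_FLIGHT:
        return False
    return pending_for_stream < per_stream_limit
```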
  • The Hub Interface cluster may enforce speculative pre-fetching for streams which may have a zero Prefetch Horizon field in the initial real request and, in many embodiments, wherein the initial request may be greater than, e.g., 128 bytes (regardless of cache line size). The number of cache lines pending may be incremented after a line read is issued by the I/O [0122] Interface 290 and may be decremented after the lines return to the I/O Interface. In other embodiments, the number may be incremented after a pair of lines is issued and may be decremented after one or both lines return. Read cache hits may not affect the number of cache lines pending. In several such embodiments, the IOH may prioritize real requests and may maintain pre-fetching up to a high or maximum limit.
  • Referring back to FIG. 9, [0123] hub interface 280 may comprise a unified cache, the read cache of read cache and logic 285, for streams via I/O interface 290. The unified cache may comprise a stream monitor to determine stream activity; cache logic circuitry coupled with said stream monitor to determine a cache structure to allocate to streams based upon the stream activity; and cache coupled with said cache logic circuitry to store pre-fetched cache lines in the cache structure for the streams. In several embodiments, the unified cache may comprise a stream monitor to determine a change in a number of active streams; a scheduler coupled with said stream monitor to determine pre-fetch schedule based upon the number of active streams; and cache coupled with said scheduler to allocate cache to active streams based upon the pre-fetch schedule. In further embodiments, the unified cache may comprise a unified cache for more than one hub interface like hub interface 280.
  • The unified cache may also be implemented in other cache applications wherein data may be pre-fetched, such as in an I/O bridge, network card, or storage controller, or in other I/O bridges that connect to the peripheral I/O devices and connect on their other end to a system interconnection network/north bridge/memory controller that connects to the system memory and processors. In many applications, the unified cache may be logically unified but physically separate. [0124]
  • The I/O hubs/bridges may use read caches or buffers for staging data between the system memory and the peripheral I/O devices or I/O bridges when such devices read from the memory. More often than not, the I/O devices read huge chunks of contiguous data from the memory via DMA operations, for example paging out data to disk from memory. This traffic pattern lends itself very well to pre-fetching. Therefore, the I/O hubs/bridges typically also have pre-fetch engines, like [0125] pre-fetch engine 286, that are responsible for handling the read requests from the peripheral I/O devices or I/O bridges and pre-fetching ahead of these requests (using the read caches or buffers for storing/staging the data) from the system memory to provide high streaming bandwidth.
  • The number of read streams may vary dynamically as an application executes. A high performance I/O hub/bridge may provide high streaming bandwidth both when there is a single read stream as well as when there are many concurrent read streams. Embodiments may comprise a die space efficient unified read cache or buffer architecture in the I/O hub/bridge across more than one stream from an I/O bus with adaptive pre-fetch scheduling via [0126] scheduler 287.
  • Embodiments of the unified cache may comprise adaptive pre-fetch scheduling to use a unified common read cache/buffer of size XYZ KB across more than one stream; restrict the maximum total cache/buffer usage across the more than one stream to the unified cache/buffer size of XYZ KB, wherein XYZ may be larger (e.g. 2×) than the amount of useful pre-fetch data for continuous streaming to smoothly transition between different numbers of streams; track the number of active streams, N, the total cache/buffer space currently in use, TOTUSE, and the cache/buffer space in use by each stream j, USE_j; adapt the pre-fetch scheduling using XYZ, N, TOTUSE, and USE_j by adaptively restricting or substantially restricting the maximum cache/buffer usage per stream, such as by using a pre-set look-up table indexed by N to determine the maximum allowed cache/buffer usage per stream for that N, or by using a formula computed for a given N, etc.; and allocate the same cache/buffer space to different streams using a replacement mechanism such as LRU, LRA, etc. [0127]
  • FIG. 12 illustrates an embodiment for a 4 active streams scenario wherein there may be 0.5 KB per stream and no unused space, the total space being 2 KB. The same example may apply to eight streams with 0.25 KB per stream and no unused space. [0128]
  • In the embodiment shown in FIG. 12, a first stream may become active, using [0129] 1 KB of cache and leaving 1 KB for a subsequent active stream. After each subsequently active stream, an adaptive mechanism may adjust the cache available to each stream to leave 0.5 KB available for a subsequent stream until four streams use 0.5 KB of cache each. This example may illustrate, for instance, a single bus or I/O interface that may limit streams to 2 KB.
  • The actual implementation details may differ depending on many conditions, such as whether the I/O hub/bridge is directly connected to the peripheral devices or to other bridges (which may require different ways of tracking the number of active streams), the schedule/cost/complexity/application trade-offs for a particular chip (which may result in choosing different replacement algorithms for the cache/buffer, or different throttling mechanisms, fine or coarse grained with the number of streams), etc. Implementation details aside, the unified read cache/buffer architecture with adaptive pre-fetch scheduling may provide high streaming bandwidth performance in I/O hubs/bridges by efficiently using smaller die space. For example, an embodiment may comprise the I/O Hub chip of the chipset, which may use a look-up table adaptive scheduling mechanism with timer-based active/inactive stream detection and an LRA cache replacement algorithm using a fully associative read cache per Hublink bus. [0130]
  • Referring back to FIG. 9, in many embodiments, a stream timing system may be implemented by [0131] scheduler 287 to improve the determination, or the timing of a determination, of active and/or inactive streams. The stream timing system may comprise a timing mechanism to determine an occurrence of an event and comprising a reset mechanism to change the event; cache logic circuitry coupled with said timing mechanism to change allocation of a cache structure for a stream based upon the occurrence of the event; and cache coupled with said cache logic circuitry to store data in the cache structure. In several embodiments, the stream timing system may comprise an event that is heuristically determined.
  • The stream timing system may provide an ability to enhance cache allocation for streams by detecting when a stream may become active or inactive, so the number of streams may be automatically computed in real time and pre-fetching may be dynamically controlled. For example, after a prefetch profile is chosen, the number of active streams may determine the [0132] appropriate LUT 288 entry and control the number of lines to prefetch. If only one stream exists and the mode is aggressive, then up to 8 cache lines, for instance, may be pre-fetched. If the number of streams increases to 2, then a stream may be limited to 4 cache lines. When there may be many streams active, the cost of excess pre-fetching may be high since the memory subsystem may be overloaded with requests that may be wasted, increasing the startup latency of the peripherals.
  • One embodiment of a timing mechanism may comprise an inactivity timer. The I/O Hub may implement, for example, a 10-bit inactivity timer for each of the active streams involved with speculative pre-fetching. The timer may facilitate de-allocation of a stream after the stream becomes inactive. An embodiment of a timing mechanism that maintains cache allocation for inactive streams may suppress pre-fetching for requests from other useful streams, since the LUT is a function of the number of perceived streams in the IOH and will use a less than ideal value. Conversely, a timing mechanism that de-allocates an active stream may result in early stream destruction and increase memory overshoot through excess prefetch. Hence, some embodiments that implement timers may heuristically choose time periods to determine when a stream may be inactive or may still be active. [0133]
  • In one embodiment, each 10-bit timer runs at 200 MHz, providing a programmable value of 1.28 microseconds to 5.12 microseconds. The timer may begin counting after the data requested by the Hub Interface master is delivered on the I/O [0134] Interface 290. Whenever a new request “hits” an allocated stream, the timer may be cleared. After the timer reaches a value, such as the values described in FIG. 13 or programmed in the I/O Hub register, the stream may be deemed to have expired and may be de-allocated from the stream structure. FIG. 13 shows an example of how the inactivity timeout may be programmed for each Hub Interface depending on, for instance, the corresponding PCI subsystem based on performance analysis.
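The arithmetic behind the programmable range follows from the 10-bit counter and the 200 MHz clock: one tick is 5 ns, so 256 ticks give 1.28 microseconds and the full 1024 ticks give 5.12 microseconds. The sketch below is an illustrative model only; the class and method names are assumptions.

```python
TICK_NS = 5                                   # 200 MHz clock -> 5 ns per tick

class StreamTimer:
    """Per-stream inactivity timer: cleared on each hit, expires after timeout_ticks."""
    def __init__(self, timeout_ticks: int):
        assert 0 < timeout_ticks <= 1 << 10   # 10-bit counter
        self.timeout_ticks = timeout_ticks
        self.count = 0

    def on_request_hit(self):
        self.count = 0                        # a new request to the stream clears the timer

    def tick(self) -> bool:
        """Advance one clock; return True when the stream should be de-allocated."""
        self.count += 1
        return self.count >= self.timeout_ticks

# 256 and 1024 ticks bound the programmable range described above.
print(256 * TICK_NS, 1024 * TICK_NS)          # prints 1280 5120 (ns)
```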
  • Referring back to FIG. 9, in some [0135] embodiments hub interface 280 may also provide circuitry to determine a coherency state for an inbound transaction and respond to coherency requests issued across coherency interface 230 from snoop filter 210. For example, when snoop filter 210 sends an ownership completion, hub interface 280 may accept the completion and update the status of the targeted inbound transaction as owning the memory line, or change the coherency state of the targeted inbound transaction from a pending state to ‘exclusive’. On the other hand, when snoop filter 210 sends an invalidation request targeting an inbound write transaction that has a coherency state of pending (e.g., that may not own the memory line), hub interface 280 may accept the invalidation and reissue a request for ownership after the inbound write transaction reaches or nears the top of upbound ordering FIFO queue 240.
  • After a transaction reaches the top of a transaction queue, such as upbound [0136] ordering FIFO queue 240 or read bypass FIFO queue 250, arbitration circuitry 222 may grant access to the transaction queue and the corresponding transaction may transmit to upbound path 220. Upbound path 220 may comprise pending data buffer 224 and pending transaction buffer 226. Pending data buffer 224 may receive and store data associated with an upbound transaction awaiting transmission across unordered interface 207. Pending transaction buffer 226 may store a transaction header for a transaction pending on the unordered interface 207. For example, when I/O interface 290 receives an upbound transaction, hub interface 280 may place the header of the transaction in upbound ordering FIFO queue 240 and transmit the data associated with the header to pending data buffer 224. At some point after satisfying ordering rules, the header may be forwarded to the pending transaction buffer 226 to await transmission across unordered interface 207. Then, the data may transmit across unordered interface 207.
  • In some embodiments, pending [0137] data buffer 224 may comprise a separate buffer for one or more I/O devices coupled with I/O interface 290 based upon one or more hub ID's. In other embodiments, pending data buffer 224 may comprise mechanisms such as pointers to associate a section of a buffer with a hub ID.
  • In many embodiments, [0138] hub interface 280 may also comprise starvation circuitry to prevent starvation of a transaction, or leaf of transactions, as a result of ownership stealing. For example, starvation circuitry may monitor the number of invalidations transmitted to and/or accepted by hub interface 280 for a transaction, or a leaf of transactions, and once a count of invalidations reaches a starvation number, the starvation circuitry may stall the I/O interface 290 to flush the upbound ordering FIFO queue 240 and/or read bypass FIFO queue 250. The starvation number may be based upon statistical and/or heuristic data and/or a formula derived therefrom. Thus, the transactions associated with upbound ordering FIFO queue 240 and/or read bypass FIFO queue 250 may clear before additional write and/or read transactions may be received via I/O interface 290. In some embodiments, starvation circuitry may couple with arbitration circuitry 222 to modify the level of access arbitrated to upbound ordering FIFO queue 240 and/or read bypass FIFO queue 250.
  • Referring now to FIG. 14, there is shown a flow chart of an embodiment to maintain ordering for transactions and to transact between an ordered interface and an unordered interface. The embodiment may comprise receiving a first transaction from an ordered [0139] interface 300; comparing the first address to a cached address associated with a line of a cache, wherein the first transaction comprises a read transaction 310; comparing a first address associated with the first transaction against a second address in an address queue, wherein the second address is associated with a second transaction 320; prefetching ownership of a memory content associated with the first address, wherein the first address is different from the second address 340; and advancing the first transaction to the unordered interface substantially independent of an advancement of the second transaction to the unordered interface, wherein the second address is different from the first address 360. Receiving a first transaction from an ordered interface 300 may comprise receiving a transaction from an I/O device coupled with the ordered interface in a transaction order according to ordering rules associated with the I/O interface or an I/O device coupled with the I/O interface. In many embodiments, the transactions may be received from more than one I/O device and, in several embodiments, the transactions received may comprise transactions subject to independent ordering rules.
  • Some embodiments may comprise comparing the first address to a cached address associated with a line of a cache, wherein the first transaction comprises a read [0140] transaction 310. Comparing the first address 310 may compare a first memory line address against memory line addresses in the cache to determine whether a valid copy of the memory line may be stored in a line of the cache. Comparing the first address 310 may comprise responding to the first transaction with the line of the cache, wherein the first address substantially matches the cached address 313 and attaching cache line invalidation data to the read transaction to invalidate the line in the cache 315. Responding to the first transaction with the line of the cache, wherein the first address substantially matches the cached address 313 may comprise retrieving a line of the cache from a cache to store data for a leaf and/or a hub ID. In other embodiments, the cache may comprise one or more memory arrays that dedicate an amount or physical division of the cache to store data for a leaf or hub ID. These embodiments may comprise populating the cache with data of a memory line anticipated to be the target of a subsequent read transaction. In many embodiments, a cache pre-fetch algorithm may anticipate a memory line as the target of a subsequent read transaction based upon a read and/or write transaction from an I/O device or leaf. Responding to the first transaction 313 may transmit a response or completion to the requester, or I/O device, without forwarding the read transaction to the unordered interface. In several of these embodiments, such a cache hit may reduce the latency of the read transaction, as well as transactions that may not compete with the read transaction for access to the unordered interface after the hit.
  • Attaching cache line invalidation data to the read transaction to invalidate the line in the [0141] cache 315 may, after a cache miss, attach data to cause the snoop filter to invalidate a line of the cache to make room for an additional entry in the cache. In some embodiments, the invalidation may be attached to or incorporated into a read transaction that may read a memory line to store in the cache and, in some embodiments, the memory line may be stored in the cache line associated with the invalidation. In one embodiment, the cache line invalidation data may be inserted into the header of the read transaction. In several embodiments, the cache line invalidation may be subject to an ordering rule that does not allow the cache line invalidation data to pass a transaction associated with the same memory line, e.g. the ordering of the invalidation is dependent upon the transaction order of another pending transaction. So the advancement of the read transaction toward the unordered interface may be restricted or limited by the ordering rule for the attached cache line invalidation. For example, the ordered interface may receive a read transaction and a comparison of the memory line associated with the read transaction may result in a cache miss. As a result, read cache logic may decide to store the memory contents of the memory line associated with the read transaction into the cache and piggy-back cache line invalidation data in the header of that read transaction. Address logic may determine the memory line subject to the read transaction may have a different address than addresses stored in the address queue, however, the address associated with the cache line invalidation data, or the invalidation address, may match an entry in the address queue. As a result, the read transaction may be placed at the bottom of an upbound ordering queue. Once the read transaction may reach the top of the upbound ordering queue, the read transaction may be eligible to transmit across the unordered interface, or may have satisfied the ordering rule corresponding to the cache line invalidation. In other situations, the read transaction may have to satisfy both an ordering rule associated with the invalidation address and the address of the memory line before becoming eligible to transmit upbound, advancing toward the unordered interface. After the snoop filter receives the cache line invalidation data, the snoop filter may invalidate an entry in the read cache to store the data resulting from the read transaction. After the completion for the read transaction is received, the data of the read completion may be written in the read cache at the entry associated with the cache line invalidation data.
  • Many embodiments may maintain a transaction order for an upbound transaction based upon an ordering of an I/O interface to transmit the upbound transaction to an unordered interface by placing the upbound transaction in an ordering queue. For example, a first write transaction received from an I/O interface may be placed in an upbound ordering queue. Then a second write transaction may be placed in the upbound order queue. After the first transaction may reach the top of the ordering queue, the first write transaction may issue to the unordered interface. In some embodiments, after receiving a completion for the first write transaction, the second write transaction may advance toward the unordered interface. In other embodiments, after the first write transaction may issue to the unordered interface, the second write transaction may advance upbound. [0142]
  • Many embodiments may maintain a transaction order to prevent problems associated with performing transactions out of order. For example, an agent on the ordered interface, such as an I/O device coupled with a bridge, may issue a series of four write transactions and, assuming that the transactions will be performed in order, issue a fourth write transaction that may modify the same memory contents that the first write transaction modifies. When these transactions may be performed in an order other than the order of issuance, the changes to the memory contents may be unpredictable. [0143]
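A minimal Python sketch of the write-ordering behavior just described, assuming a simple model in which writes issue to the unordered interface strictly in arrival order and a younger write advances only after the older one issues or completes (the choice between the two policies reflects the two variants above; names are illustrative):

```python
from collections import deque

class UpboundOrderingQueue:
    """Writes issue to the unordered interface strictly in arrival order."""
    def __init__(self, wait_for_completion: bool = True):
        self.pending = deque()         # writes waiting to issue
        self.outstanding = None        # write issued but not yet completed
        self.wait_for_completion = wait_for_completion

    def enqueue(self, write_id):
        self.pending.append(write_id)

    def try_issue(self):
        # A younger write may only advance once the older write has issued
        # (one embodiment) or completed (another embodiment).
        blocked = self.outstanding is not None and self.wait_for_completion
        if self.pending and not blocked:
            self.outstanding = self.pending.popleft()
            return self.outstanding    # issued to the unordered interface
        return None

    def complete(self, write_id):
        if self.outstanding == write_id:
            self.outstanding = None    # younger writes may now advance
```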
  • Comparing a first address associated with the first transaction against a second address in an address queue, wherein the second address is associated with a [0144] second transaction 320 may determine whether a second transaction may perform an action upon the same memory line as the first transaction, whether the first and the second transaction may be issued from the same I/O device, or whether an invalidation address attached to the first transaction may match an address in the address queue. For instance, a write transaction and a read transaction may have been received prior to receiving the first transaction and the memory line addresses associated with the write and read transactions may be stored in an address queue. After the first transaction is received, the address logic circuitry may compare the memory line address associated with the first transaction against the memory line addresses associated with the read and write transactions to determine that one, both or neither of the transactions may perform an action on the same memory line. In response to a determination that one of or both the read and write transaction may perform an action on the same address, the address logic may transmit a signal to the ownership pre-fetch circuitry. In many of these embodiments, a signal may be transmitted to the ownership pre-fetch circuitry to stop, request, and/or initiate, pre-fetching ownership for the first transaction.
  • Comparing a first address associated with the first transaction against a second address in an address queue, wherein the second address is associated with a [0145] second transaction 320 may comprise comparing a first memory line address associated with the first transaction against a second memory line address associated with the second transaction 325 and comparing a first hub identification of the first address against a second hub identification of the second address 330. Comparing a first memory line address 325 may compare the address of the memory line that the first transaction may perform a read of or write to against the address of the memory line address that the second transaction may perform a read of or write to, to determine whether the first transaction and the second transaction may perform action on the same memory line and, in many embodiments, whether the transactions may perform actions on the same memory contents of the memory line. For example, when the first transaction is a write and the second transaction is a write transaction, comparing a first memory line address 325 may determine that the first transaction may write to the same memory cells as the second transaction. In other situations, comparing a first memory line address 325 may determine that a read may be performed on the same memory cells as a write. In many embodiments, comparing a first memory line address 325 may further determine whether the first transaction may advance toward the unordered interface, or upbound, independent of an advancement of the second transaction upbound by comparing an invalidation address associated with the first transaction against a list of invalidations addresses in an address queue.
  • Comparing a first hub identification of the first address against a second hub identification of the [0146] second address 330 may determine whether the first transaction and the second transaction are transactions from the same I/O device, such as an Ethernet card. For example, two I/O devices may be coupled to a bridge and the bridge may be coupled with an I/O interface to allow the two I/O devices to transact across an unordered domain. The first I/O device may be associated with a hub ID of zero and the second I/O device may be associated with a hub ID of one. When the buses interconnecting the two I/O devices to the I/O interface are peripheral component interconnect (PCI) buses and operate according to PCI ordering rules, the transactions associated with the first I/O device (hub ID zero) may be independent of transactions associated with the second I/O device (hub ID one) with respect to ordering rules. Some embodiments take advantage of the independence by comparing a first hub identification of the first address against a second hub identification of the second address 330. Other embodiments may not track the hub ID associated with a transaction.
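The address comparison of elements 325 and 330 can be pictured with a short Python sketch: ordering is treated as independent only when a newly received transaction matches neither the memory line nor the hub ID of any queued address. The names below are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass(frozen=True)
class QueuedAddress:
    memory_line: int   # target memory line address
    hub_id: int        # identifies the source I/O device

def ordering_independent(first: QueuedAddress,
                         address_queue: Iterable[QueuedAddress]) -> bool:
    for second in address_queue:
        if (first.memory_line == second.memory_line or
                first.hub_id == second.hub_id):
            return False   # ordering rules couple the two transactions
    return True            # candidate for ownership pre-fetch or bypass
```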
  • Referring still to FIG. 14, pre-fetching ownership of a memory content associated with the first address, wherein the first address is different from the [0147] second address 340, may initiate a request for ownership of an address prior to the first transaction satisfying ordering rules associated with that first transaction. Pre-fetching ownership 340 may pre-fetch ownership of the memory content for a transaction so that the transaction may be ready to transmit across an unordered domain as soon as the transaction satisfies its ordering requirements.
  • [0148] Pre-fetching ownership 340 may comprise initiating a request for ownership of the memory content by the first transaction before the second transaction is to satisfy an ordering rule to transmit to the unordered interface 345. Initiating a request for ownership 345 may steal an ownership from the second transaction, or take ownership of the same memory line as the second transaction, wherein the transaction order of the first transaction is independent of the ordering rules associated with the second transaction. After the ownership of the same memory line is taken by the first transaction, or stolen, the snoop filter may invalidate the ownership of the memory line by the second transaction. In other situations, the second transaction may not have an ownership of the memory line so the first transaction may gain ownership of the memory line before the second transaction may receive ownership. In many of these cases, the first transaction and the second transaction may race to satisfy ordering rules and after the second transaction may satisfy its ordering rules first, the second transaction may steal the ownership from the first transaction. In other situations, after the first transaction may transmit across the unordered domain and/or a completion may be received for the first transaction, the second transaction may request and receive ownership for the memory line.
  • In some embodiments, initiating a request for ownership of the memory content by the [0149] first transaction 345 may pre-fetch ownership for the first transaction after a determination that the ordering requirements of, or ordering rules for, the first transaction may be independent of the ordering rules for the second transaction. In many embodiments, determining that the ordering rules may be independent may comprise determining that the first address and the second address are different, such as a different target address or a different source address. The different target address may comprise a different memory line and the different source address may comprise a different hub ID. The hub ID may be a part of a number that identifies the source I/O device.
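A hedged sketch of ownership pre-fetching and "stealing" as described in the preceding paragraphs: ownership of a memory line is requested early only when ordering is independent, and whichever transaction requests the line last takes ownership while the snoop filter invalidates the previous holder. Class and method names are illustrative assumptions.

```python
class SnoopFilterModel:
    def __init__(self):
        self.owner = {}   # memory_line -> id of the transaction holding ownership

    def prefetch_ownership(self, memory_line, txn_id):
        previous = self.owner.get(memory_line)
        self.owner[memory_line] = txn_id   # txn_id takes, or steals, the line
        return previous                    # prior owner, now invalidated (may be None)

def maybe_prefetch(snoop: SnoopFilterModel, txn_id, memory_line, independent: bool):
    # Ownership is pre-fetched only when the transaction's ordering rules are
    # independent of pending transactions (different memory line and hub ID).
    if independent:
        return snoop.prefetch_ownership(memory_line, txn_id)
    return None
```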
  • Many embodiments may comprise advancing the first transaction to an unordered interface substantially independent of an advancement of the second transaction to the unordered interface, wherein the second address is different from the [0150] first address 360. Advancing the first transaction 360 may allow a read or write transaction to bypass the upbound ordering queue when the memory line, the invalidation address, and/or the hub ID associated with the read or write transaction differs from the memory lines, invalidation addresses, and/or hub ID's stored in the address queue.
  • Advancing the first transaction to an unordered interface substantially independent of an advancement of the second transaction to the unordered interface, wherein the second address is different from the [0151] first address 360 may comprise advancing a read transaction 365 and advancing the first transaction to the unordered interface substantially independent of the advancement of the second transaction, wherein a hub identification associated with the second transaction is different from a hub identification associated with the first transaction 375. Advancing a read transaction 365 may place the read transaction in a read bypass queue when the read transaction was initiated by a source device associated with a hub ID that is different from the hub ID's associated with transactions in an upbound ordering queue. For instance, a read transaction having a hub ID of zero may be placed in the read bypass queue when the upbound ordering queue has no entries associated with hub ID zero.
  • Advancing a [0152] read transaction 365 may comprise advancing the read transaction to the unordered interface substantially independent of the advancement of the second transaction unless a memory line address associated with the read transaction is substantially equivalent to a memory line address associated with the second transaction 370. Advancing the read transaction 370 may forward the read transaction to a read bypass queue when the memory line to be read is different from the memory lines stored in the address queue or memory lines of transactions awaiting transmission across the unordered interface. In embodiments where the address queue may also store hub ID's associated with pending transactions, advancing the read transaction 370 may also forward the read transaction to the read bypass queue when the hub ID associated with the read transaction is different from the hub ID's in the address queue.
  • Advancing the first transaction to the unordered interface substantially independent of the advancement of the second transaction, wherein a hub identification associated with the second transaction is different from a hub identification associated with the [0153] first transaction 375 may allow a write and/or read transaction to bypass another write or read transaction in an upbound ordering queue since the ordering rules for the transactions are independent. For example, a write transaction initiated by a first I/O device may write to memory line one of a system memory in an unordered domain via an unordered interface. A read transaction may read from memory line one and may be initiated by a second I/O device after the write transaction was stored in the upbound ordering queue. However, after comparing the address of the read transaction against the address of the write transaction, the read transaction may bypass the write transaction since the ordering rules associated with the read transaction are independent of the ordering rules associated with the write transaction.
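The bypass decision sketched over the last few paragraphs might be modeled as follows: a read goes to a read bypass queue when neither its memory line nor its hub ID matches a pending transaction, and otherwise falls into the upbound ordering queue. Queue and field names are assumptions.

```python
from collections import deque

class UpboundReadPath:
    def __init__(self):
        self.ordering_queue = deque()  # reads ordered behind older transactions
        self.bypass_queue = deque()    # independent reads advance here
        self.pending = []              # (memory_line, hub_id) of queued transactions

    def route_read(self, memory_line: int, hub_id: int) -> str:
        conflict = any(line == memory_line or hub == hub_id
                       for line, hub in self.pending)
        target = self.ordering_queue if conflict else self.bypass_queue
        target.append((memory_line, hub_id))
        self.pending.append((memory_line, hub_id))
        return "ordering" if conflict else "bypass"
```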
  • Referring now to FIG. 15, a machine-readable medium embodiment of the present invention is shown. A machine-readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine (e.g., a computer) that, when executed by the machine, may perform the functions described herein. For example, a machine-readable medium may include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other forms of propagated signals (e.g. carrier waves, infrared signals, digital signals, etc.); and the like. Several embodiments of the present invention may comprise more than one machine-readable medium depending on the design of the machine. [0154]
  • In particular, FIG. 15 shows an embodiment of a machine-[0155] readable medium 400 comprising instructions for receiving a first transaction from an ordered interface 410; comparing a first address associated with the first transaction against a second address in an address queue, wherein the second address is associated with a second transaction 420; and pre-fetching ownership of a memory content associated with the first address, wherein the first address is different from the second address 430. Receiving a first transaction from an ordered interface 410 may comprise receiving a read or write transaction from an I/O device coupled with the ordered interface to transmit across an unordered interface.
  • Instructions for comparing a first address associated with the first transaction against a second address in an address queue, wherein the second address is associated with a [0156] second transaction 420, may comprise instructions for comparing an address associated with a write transaction against one or more addresses stored in an address queue to determine whether a pending transaction in an upbound ordering queue or pending on the unordered interface may be associated with the same or substantially the same address. For example, a transaction, after having satisfied ordering rules, may be pending on an unordered interface. A subsequent transaction may be received and the address associated with the transaction may match or substantially match the address of the transaction pending on the unordered interface. As a result, instructions may prevent the subsequent transaction from obtaining ownership of the memory line, wherein the subsequent transaction may comprise a write transaction. On the other hand, the instructions may cause the subsequent transaction to be forwarded to an upbound ordering queue when the subsequent transaction comprises a read transaction.
  • Instructions for pre-fetching ownership of a memory content associated with the first address, wherein the first address is different from the [0157] second address 430 may comprise instructions for pre-fetching ownership for a write transaction wherein the address, such as a memory line address and/or hub ID, associated with the write transaction is different from one or more addresses stored in an address queue. The instructions to determine that the address is different from one or more addresses stored in an address queue may comprise instructions to determine whether the write transaction is subject to ordering rules that are not independent of ordering rules of transactions awaiting transmission across the unordered interface.
  • Referring now to FIG. 16, there is shown an example embodiment of an [0158] SPS 500 comprising a shared bypass bus structure 510. The shared bypass bus structure 510 may facilitate low-latency cache coherency operations by providing practical, fast connectivity from the Scalability Ports to the component's coherency interleaves which bypasses the port-to-port crossbar interconnect.
  • The shared bypass bus structure may comprise a [0159] first scalability port 535; bypass bus structure 510 coupled with said first scalability port 535; a coherency interleave, e.g. 520 or 525, coupled with said bypass structure 510 to transact with said first scalability port 535; a crossbar structure 530 coupled with said first scalability port 535; and a second scalability port 540 coupled with said crossbar structure 530 to transact with said first scalability port 535 and coupled with said bypass bus structure 510 to transact with said coherency interleave 520 or 525 substantially independent from a transaction with said first scalability port 535. The shared bypass bus structure 510 may couple between the Scalability Ports 535 and 540 and the coherency interleaves 520 and 525 of the switch component.
  • The shared bypass bus structure may comprise an incoming shared crossbar bypass data bus to transmit data from a localized group of [0160] Scalability Ports 535 to a coherency interleave 525 or to a localized group of coherency interleaves 525; an outgoing shared crossbar bypass data bus to transmit data from a coherency interleave 520 or from a localized group of coherency interleaves 520 to a localized group of Scalability Ports 540; a data bus multiplexing structure for each incoming or outgoing shared crossbar bypass data bus; and an arbitration controller with handshake signals. The data bus multiplexing structure may be an ordinary single-point data multiplexer which drives the shared bus, or may comprise distributed three-state driver buffers controlled such that exactly one set of buffers drives the shared bus at a given time.
  • [0161] Scalability Ports 535, for instance, located within the same region of the component may form a localized group of Scalability Ports which may share one or more incoming shared crossbar bypass data buses. Likewise, coherency interleaves 520, for instance, located within the same region of the component may form a group which may share one or more outgoing shared crossbar bypass data buses.
  • For a [0162] Scalability Port group 535 and a coherency interleave group 520 that may both reside in the same region of the component, the Shared crossbar bypass bus structure which connects these groups is referred to as a Local shared crossbar bypass bus structure. FIG. 17 depicts a Local shared crossbar bypass bus structure comprising the incoming local bypass bus 606 and outgoing local bypass bus 601. On the other hand, when the Scalability Port group 535 and the coherency interleave group 540 may be located in different regions of the component 500, the interconnect structure is termed a Remote shared crossbar bypass bus structure. FIG. 18 shows a Remote shared crossbar bypass bus structure, with an outgoing remote bypass bus 603 and an incoming remote bypass bus 608.
  • In several embodiments, a shared bypass bus structure may be used exclusively or substantially exclusively for communicating between a Scalability Port and a coherency interleave. In many of these embodiments, shared bypass bus structure may not be used for communicating between one Scalability Port and another Scalability Port, nor for communicating between two coherency interleaves. [0163]
  • Shared [0164] bypass bus structure 510 may provide, in some embodiments, complete or substantially complete connectivity between Scalability Ports 535 and 540 and coherency interleaves 520 and 525 with m×n bus structures, where m may be the number of Scalability Port groups and n may be the number of coherency interleave groups. For example, in a component such as SPS 500 which has two Scalability Port groups 535 and 540 and two coherency interleave groups 520 and 525, with one Scalability Port group 535 and one coherency interleave group 520 in Region A, and the other Scalability Port group 540 and coherency interleave group 525 in Region B, four Shared bypass bus structures may be used: two local bus structures (one within each region) and two remote bus structures (one to connect the Scalability Ports in Region A to the coherency interleaves in Region B, and one to connect the Scalability Ports in Region B to the coherency interleaves in Region A). FIGS. 19 and 20 show the Shared bypass bus structures 600, 605, 602, and 607 in such a component. Each Shared bypass bus structure has an incoming data bus and an outgoing data bus, for a total of eight buses in the embodiment of the component shown.
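The bus count in this example works out as follows (a small worked check, not part of the embodiments):

```python
def bypass_bus_count(port_groups: int, interleave_groups: int):
    structures = port_groups * interleave_groups   # m x n shared bypass bus structures
    buses = structures * 2                         # one incoming + one outgoing bus each
    return structures, buses

# Two Scalability Port groups and two coherency interleave groups:
assert bypass_bus_count(2, 2) == (4, 8)            # four structures, eight buses total
```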
  • The Shared bypass bus structure may comprise an arbitration controller to coordinate the use of the buses and, in many embodiments, to provide for fair access to the buses such that no coherency interleave or SP may be indefinitely blocked from access to a bus by the activities of another unit. [0165]
  • In some embodiments, the Shared bypass bus structure may provide for communication of request and response information for memory cache coherency operations based on the Intel Scalability Port Protocol or a similar protocol. In the Scalability Port Protocol, to avoid or attempt to avoid deadlocks, request and response items may be transmissible independent of each other and may comprise independent flow control. For example, indefinite flow control against request information may not be permitted to block response information indefinitely. The arbitration controller of the Shared bypass bus structure may treat request and response information or data as two separate “virtual channels,” and may provide for access to the buses for each virtual channel regardless of the status of the other virtual channel. [0166]
  • Shared [0167] bypass bus structure 510 may be shared both among multiple transmitters and among multiple receivers and may comprise parallel data bits, a data valid qualifier to identify valid data on a bus, a virtual channel qualifier to select a channel, and a multi-bit destination qualifier to select an address. A receiver may consider data on the bus to be valid after it recognizes its identification code in the destination field after the data valid signal is asserted. In addition, the buses may be accompanied by arbitration and handshaking signals to facilitate bus arbitration and flow control such as a request-channel arbitration request from each transmitting unit to the arbiter; a response-channel arbitration request from each transmitting unit to the arbiter; a selected signal from the arbiter to each transmitting unit, indicating that that transmitter owns the bus and its data can be observed by the receivers; a request-channel ready signal from each receiver for flow control, observed by all transmitters and by the arbiter; and a response-channel ready signal from each receiver for flow control, observed by all transmitters and by the arbiter.
  • In some embodiments, arbitration and multiplexing may be accomplished as close physically to the transmitting units as possible, to limit data bus congestion and silicon area consumption. Operations may comprise a unit, such as a coherency interleave or Scalability Port, having data to transmit asserting a request-channel arbitration request or a response-channel arbitration request. The unit may also transmit data to a local bus and may assert a valid signal. Then, based upon a selection mechanism and a fairness mechanism, such as a round-robin determiner, the arbiter may select one of the requesting units to own the bus in a subsequent clock cycle. [0168]
  • The arbiter may transmit a control signal to the bus multiplexor and may transmit a selected signal to a transmitter. After the transmitting unit observes that the data has been received, the valid qualifier may be de-asserted or new data may transmit. [0169]
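The arbitration and handshaking just described might be modeled roughly as below, with request and response traffic treated as separate virtual channels with independent flow control and a round-robin pointer providing fairness. The signal names and the method interface are assumptions, not the component's actual design.

```python
class SharedBusArbiter:
    def __init__(self, transmitters):
        self.transmitters = list(transmitters)  # coherency interleaves or Scalability Ports
        self.rr_index = 0                       # round-robin fairness pointer

    def select(self, requests, receiver_ready):
        """requests: {tx_id: set of channels requested, 'req' and/or 'rsp'}.
        receiver_ready: {channel: bool} flow-control signals from the receivers."""
        n = len(self.transmitters)
        for offset in range(n):                 # scan transmitters in round-robin order
            tx = self.transmitters[(self.rr_index + offset) % n]
            for channel in ("req", "rsp"):      # virtual channels arbitrated independently
                wants = channel in requests.get(tx, set())
                if wants and receiver_ready.get(channel, False):
                    self.rr_index = (self.rr_index + offset + 1) % n
                    return tx, channel          # corresponds to the 'selected' handshake
        return None                             # no grant this clock cycle
```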
  • Advantages of these embodiments may, for instance, comprise an approach to a connectivity problem that neither a crossbar switch structure, nor a collection of point-to-point buses, nor a component-wide multiply driven bus may feasibly or advantageously solve. Further, the Shared bypass bus structure may yield performance, cost, and architectural advantages over these other approaches. For example, with regard to idle latency, the performance presented here may combine advantages of a crossbar switch design with those of a direct-connect bus. The crossbar switch may provide high throughput and connectivity for the streaming of memory data between ports. Meanwhile, operations to initiate memory transfers and to perform cache state lookups and updates may be allowed to bypass the crossbar, potentially yielding latency savings in the idle to light activity case. More specifically, in an embodiment comprising the chipset Scalability Port Switch component, the Scalability Port Switch component's latency contribution may be reduced by an estimated 20% to 30% for many common operations. [0170]
  • In some embodiments, the structure may also provide similar advantages over an alternative approach and even alternative embodiments, such as that of a component-wide multiply driven bus. The number of and distance between design units driving such a bus result in extra transmission times for control signals and bus driver turn-on and turn-off times, as well as possible frequency limitations as compared to the Shared crossbar bypass bus structure. [0171]
  • With regard to high-activity latency, such as under heavier loading, given a mechanism to bypass the crossbar, cache state lookup and update operations may not compete with data streaming for resources. The cache coherency operations may thus be processed immediately or substantially immediately, thereby providing performance gains under high activity in many embodiments. [0172]
  • Another advantage may include area and cost improvements for some embodiments. Other embodiments, providing complete or substantially complete bipartite connectivity between all or most ports and interleaves via a coherency crossbar switch may comprise significantly more silicon area, which may raise the cost of those components. In further embodiments, the bypass structure may be expanded for data transfer between Scalability Ports and dedicated access ports may be added to the crossbar for the coherency interleaves, although this may be more costly in silicon area. [0173]
  • Alternatively, addressing connectivity and latency requirements with a collection of point-to-point buses, without the partitioning and sharing applied in the Shared bypass bus structure, may likewise be more costly. The metal routing about the component to connect six ports to four interleaves and four interleaves to six ports may consume large amounts of silicon area. Similarly, the routing congestion immediately surrounding each unit may be costly. [0174]
  • The Shared crossbar bypass bus structure design lowers development time and cost, and limits risk to the development schedule. In a conventional system, the logic and signal timing to share crossbar access ports between two distinct physical and logical design units on both the sending and receiving ends represent very difficult obstacles, to which some embodiments of the Shared bypass bus structure may provide a simple alternative. [0175]
  • A further advantage may comprise partitioning. The Shared bypass bus structure lends itself well to the clean partitioning of the component into separate domains as a result of separating interleaves, regions, and/or SP's into distinct address ranges, for example. This aspect may facilitate development of chipsets or the like with desirable Reliability, Availability, and Serviceability (RAS) features. [0176]
  • Referring to FIG. 21, there is shown an embodiment of a block diagram for logic to re-order memory. The memory re-ordering mechanism in FIG. 21 may comprise [0177] memory write queue 700; write re-order queue 705; memory read queue 710; read re-order queue 715; arbitration unit and conflict checker 720; refresh unit 730; DDR protocol state machines 740; and multiplexer 750. Write queue 700 may hold, for example, 64 entries of requests and data to write to an address in memory. Similarly, read queue 710, for example, may hold 32 entries of read requests for memory. If a read accesses the same address as a write that is present in the write queue 700, data may be forwarded from the pending data buffer to the requestor or agent without accessing physical memory. Writes may be flushed to memory in the absence of reads and reads may fetch data from memory if they do not have a hit in write queue 700.
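A small Python sketch of the read-forwarding behavior described above, assuming illustrative names and the example sizes (a 64-entry write queue and a 32-entry read queue); a read that hits a pending write is satisfied from the write data rather than from physical memory.

```python
from collections import OrderedDict, deque

class MemoryQueues:
    def __init__(self):
        self.write_queue = OrderedDict()  # address -> pending write data (up to 64 entries)
        self.read_queue = deque()         # pending read addresses (up to 32 entries)

    def post_write(self, address, data):
        self.write_queue[address] = data  # flushed to memory in the absence of reads

    def post_read(self, address, read_memory):
        if address in self.write_queue:
            return self.write_queue[address]  # forward from the pending data buffer
        self.read_queue.append(address)
        return read_memory(address)           # otherwise fetch from physical memory
```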
  • Reads and writes may comprise inbound and/or upbound requests for memory. In some embodiments, the reads and writes that remain in the read and write [0178] queues 700 and 710 may be forwarded or transmitted to the re-ordering queues 705 and 715. The present embodiment may comprise four write re-order queues 705 and four read re-order queues 715 for write requests and read requests, respectively, and a re-order queue may be two entries deep so that 8 reads and 8 writes may be stored in the re-order queues 705 and 715. Re-order queues 705 and 715 may become filled after a request belonging to that re-order queue arrives (for write requests, after data has been received). Read/write requests may be distributed to reordering queues 705 and 715 depending on which DDR channel (if there are independent DDR channels) and/or bank is targeted. In some embodiments, for instance, two channels of DDR I and bank address bit B[0] may also be used.
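One plausible mapping of requests onto the four re-order queues, using the DDR channel and bank address bit B[0] as in the example above (the exact bit assignment is an assumption made for illustration):

```python
def reorder_queue_index(channel: int, bank_address: int) -> int:
    b0 = bank_address & 1            # bank address bit B[0]
    return (channel & 1) * 2 + b0    # 2 channels x 2 bank groups -> queues 0..3

# Each of the four queues is two entries deep, so 8 reads (or 8 writes) may be held.
assert [reorder_queue_index(c, b) for c in (0, 1) for b in (0, 1)] == [0, 1, 2, 3]
```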
  • [0179] Arbitration unit 720 may check timing conflicts and may schedule a request to one of the 8 protocol state machines. The arbitration unit may look at 4 read requests and 4 write requests at a time or substantially simultaneously. If there is no read or write request in the queue that may belong to a particular re-order queue based upon the channel number and/or the bank number, then that reorder queue may be empty in many embodiments. The arbitration unit may keep track of memory addresses that are currently accessing memory and compare those addresses with 4 read addresses and 4 write addresses from the re-order queues 705 and 715. A read or write request may be picked up in such a way that it may be scheduled to access memory immediately. Among reads/writes and refreshes, such as a refresh of DRAM or dynamic random access memory, refresh may have the highest or nearly the highest priority, in part because, in many embodiments, refreshes may not occur often. Read requests may comprise a second priority and then writes may be at a third priority level, unless, for instance, the write queue 700 is full. When the write queue 700 is full, the write requests may be at a second priority level.
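The priority scheme can be sketched as follows; treating a full write queue as promoting writes ahead of reads is one reading of the description and is marked as such in the comments.

```python
def pick_request(refresh_due, read_candidates, write_candidates, write_queue_full):
    if refresh_due:
        return ("refresh", None)               # refresh has (nearly) the highest priority
    if write_queue_full and write_candidates:
        return ("write", write_candidates[0])  # writes promoted to second priority when full
    if read_candidates:
        return ("read", read_candidates[0])    # reads are normally second priority
    if write_candidates:
        return ("write", write_candidates[0])  # writes are third priority otherwise
    return (None, None)
```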
  • A round-robin priority determiner may facilitate selection of one of the four read requests and one of the four write requests from [0180] re-order queues 705 and 715, unless the queue entry has a conflict with an ongoing transaction. Further, in several embodiments, when a re-order queue is skipped, it is marked and receives a high or the highest priority after some time. After a request has been scheduled by the arbitration unit, the request may go to one of the 8 DDR state machines for access to memory.
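The round-robin selection with skip promotion might look roughly like the sketch below; the promotion threshold and the exact bookkeeping are assumptions made for illustration.

```python
class ReorderQueuePicker:
    def __init__(self, num_queues: int = 4, promote_after: int = 4):
        self.num_queues = num_queues
        self.rr = 0                          # round-robin starting point
        self.skipped = [0] * num_queues      # consecutive skips per queue
        self.promote_after = promote_after   # skips before a queue is promoted

    def pick(self, heads, conflicts):
        """heads[i]: request at the head of queue i, or None.
        conflicts(request): True if it collides with an ongoing transaction."""
        order = sorted(range(self.num_queues),
                       key=lambda i: (-(self.skipped[i] >= self.promote_after),
                                      (i - self.rr) % self.num_queues))
        for i in order:
            if heads[i] is None:
                continue
            if not conflicts(heads[i]):
                self.skipped[i] = 0
                self.rr = (i + 1) % self.num_queues
                return i                     # this queue's head is scheduled
            self.skipped[i] += 1             # skipped because of a conflict
        return None                          # nothing schedulable this cycle
```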
  • The arbitration unit and [0181] conflict check logic 720 or state machine may check for page replace conflicts and DIMM conflicts. Page replace conflicts may involve a greater penalty than a DIMM conflict in terms of turnaround time. So if all re-order queue entries involve a conflict, an entry with a page replace conflict gets a lower priority. The present embodiment may show a great performance gain in memory bandwidth. For example, memory reads/writes may be distributed in 4 re-order queues each. The arbitration unit may review up to 8 transactions that are pending to be scheduled and also the transactions that are currently scheduled on a DRAM channel by one of the 8 state machines. The arbitration unit may first look for transactions with page empty or page hit cases to be scheduled. Then, read/write requests with a page replace conflict with an existing transaction or a DIMM turnaround conflict may be pushed out until the timing conflict is eliminated.
  • State machines, such as DDR [0182] protocol state machines 740, may schedule one read or write transaction to a DDR channel and hold that entry until the transaction is complete. The present embodiment may comprise 8 DDR protocol state machines.
  • Embodiments may provide better feedback from DRAM protocol state machines so the arbitration unit does not have to wait for a data phase to complete before the next transaction to the same resource is scheduled. Some embodiments may not have one reordering queue per resource; for example, embodiments may comprise 4 read and 4 [0183] write reorder queues 705 and 715 and may provide re-ordering, and feedback from the 8 state machines may provide more information. Further, embodiments may comprise no or infrequent timing dependency between different re-order queues, and within the same queue there may or may not be a timing dependency. Top entries of a re-order queue may not be checked against one another, and a state machine may schedule one transaction, as opposed to one state machine per bank or per resource.
  • The foregoing description is intended to be illustrative and not limiting. Variations will occur to those of skill in the art. Those variations are intended to be included in the various embodiments of the invention, which are limited only by the spirit and scope of the appended claims. [0184]

Claims (35)

What is claimed is:
1. An apparatus, comprising:
a stream monitor to determine stream activity;
a scheduler coupled with the stream monitor to determine a maximum number of cache lines per stream to pre-fetch based upon the stream activity;
a pre-fetch engine coupled with the scheduler to generate pre-fetch requests; and
cache coupled with the scheduler to store pre-fetched cache lines in response to the pre-fetch requests.
2. The apparatus of claim 1, further comprising a pending transaction monitor coupled with the scheduler to limit a number of pending requests.
3. The apparatus of claim 1, further comprising an in-flight transaction monitor coupled with the scheduler to limit a number of in-flight requests.
4. The apparatus of claim 1, further comprising a throttle mechanism coupled with the scheduler to change a rate of generation of the pre-fetch requests.
5. The apparatus of claim 4, wherein the throttle mechanism is to change the rate of the generation of the pre-fetch requests based upon stream traffic.
6. The apparatus of claim 1, further comprising cache logic circuitry coupled with the stream monitor to limit a number of pre-fetched cache lines per stream based upon the stream activity.
7. An apparatus, comprising:
a stream monitor to determine stream activity;
cache logic circuitry coupled with the stream monitor to generate pre-fetch requests and to limit a number of pre-fetched cache lines per stream based upon the stream activity; and
cache coupled with the cache logic circuitry to store pre-fetched cache lines in response to the pre-fetch requests.
8. The apparatus of claim 7, wherein the stream monitor comprises inactivity circuitry to determine that a stream is inactive.
9. The apparatus of claim 7, wherein the stream monitor comprises inactivity circuitry to de-allocate a stream structure for an inactive stream.
10. The apparatus of claim 7, wherein the cache logic circuitry comprises a throttle mechanism to change a rate of generation of pre-fetch requests based upon the stream activity.
11. The apparatus of claim 7, wherein the cache logic circuitry comprises a scheduler to schedule a pre-fetch request for a stream based upon an amount of the cache allocated to the stream.
12. An apparatus, comprising:
a stream monitor to determine stream activity;
cache to store pre-fetched cache lines for active streams in a cache structure of the cache; and
cache logic circuitry coupled with the stream monitor and with the cache to determine the cache structure and to allocate the cache structure to the active streams based upon the stream activity.
13. The apparatus of claim 12, wherein the stream monitor comprises stream type circuitry to determine a type of active stream.
14. The apparatus of claim 12, wherein the stream monitor comprises stream count circuitry to determine a number of active streams.
15. The apparatus of claim 14, wherein the cache logic circuitry comprises allocation circuitry to allocate the cache structure to the active streams.
16. The apparatus of claim 12, wherein the stream monitor comprises circuitry to determine a change in a number of active streams.
17. The apparatus of claim 16, wherein the cache logic circuitry comprises allocation circuitry to change the cache structure for the change in the number of active streams.
18. A system, comprising:
a memory;
a plurality of devices to request data from the memory;
input-output circuitry coupled between the memory and the plurality of devices and comprising
a stream monitor to determine stream activity;
cache to store pre-fetched cache lines for active streams; and
cache logic circuitry coupled with the stream monitor and with the cache to allocate portions of the cache to the active streams based upon the stream activity.
19. The system of claim 18, wherein the stream monitor comprises stream count circuitry to determine a number of active streams.
20. The system of claim 19, wherein the cache logic circuitry comprises allocation circuitry to allocate the portions of the cache to the active streams.
21. The system of claim 18, wherein the stream monitor comprises circuitry to determine a change in a number of active streams.
22. The system of claim 21, wherein the cache logic circuitry comprises allocation circuitry to change the portions of the cache responsive to the change in the number of active streams.
23. An apparatus, comprising:
a timer to determine a stream is inactive;
cache logic circuitry coupled with the timer to de-allocate a cache structure for the stream based upon the determination that the stream is inactive; and
cache coupled with said cache logic circuitry to store data in the cache structure.
24. The apparatus of claim 23, wherein the timer is to determine that a stream is inactive based upon a heuristically determined time between requests associated with an active stream.
25. The apparatus of claim 23, wherein the timer comprises a reset circuit to restart the timer based upon receipt of a request associated with the stream.
26. The apparatus of claim 23, wherein the cache logic circuitry comprises a table to store a value indicating a time between requests of an active stream based upon a number of active streams.
27. The apparatus of claim 23 wherein the cache logic circuitry comprises a table to store a value indicating a time between requests of an active stream based upon a type of the active stream.
28. An apparatus, comprising:
a pre-fetch engine to pre-fetch data from a memory for multiple concurrent streams;
a timer coupled to the pre-fetch engine to determine a particular stream of the multiple concurrent streams is inactive;
cache logic circuitry coupled with the timer to de-allocate a cache structure for the particular stream based upon the determination that the particular stream is inactive; and
cache coupled with said cache logic circuitry to store the data in the cache structure.
29. The apparatus of claim 28, wherein a timeout value for the timer is a heuristically determined timeout value.
30. The apparatus of claim 28, further comprising circuitry to reset the timer upon receipt of a new request associated with the particular stream.
31. The apparatus of claim 28, wherein the cache logic circuitry comprises a table to store a timeout value based upon a number of active concurrent streams.
32. The apparatus of claim 28 wherein the cache logic circuitry comprises a table to store a timeout value based upon a type of the particular stream.
33. An apparatus, comprising:
a stream monitor to determine stream activity,
a scheduler coupled with the stream monitor to determine a maximum number of cache lines per stream to pre-fetch based upon the stream activity;
cache coupled with the scheduler to store pre-fetched cache lines;
cache logic circuitry coupled with the stream monitor to generate pre-fetch requests, to limit a number of pre-fetched cache lines per stream based upon the maximum number, and to allocate the cache based upon the stream activity; and
a timer coupled with the cache logic circuitry to de-allocate a portion of the cache for a particular stream responsive to a determination that the particular stream is inactive.
34. The apparatus of claim 33, further comprising a pre-fetch engine coupled with the scheduler to adjust a generation rate of the pre-fetch requests based upon the stream activity.
35. The apparatus of claim 33, wherein the stream monitor comprises circuitry to determine a change in a number of active streams.
US10/358,618 2002-02-25 2003-02-05 Cache usage for concurrent multiple streams Abandoned US20040022094A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/358,618 US20040022094A1 (en) 2002-02-25 2003-02-05 Cache usage for concurrent multiple streams

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US35931602P 2002-02-25 2002-02-25
US10/358,618 US20040022094A1 (en) 2002-02-25 2003-02-05 Cache usage for concurrent multiple streams

Publications (1)

Publication Number Publication Date
US20040022094A1 true US20040022094A1 (en) 2004-02-05

Family

ID=28045183

Family Applications (3)

Application Number Title Priority Date Filing Date
US10/358,745 Expired - Fee Related US7047374B2 (en) 2002-02-25 2003-02-05 Memory read/write reordering
US10/358,618 Abandoned US20040022094A1 (en) 2002-02-25 2003-02-05 Cache usage for concurrent multiple streams
US10/358,568 Expired - Fee Related US6912612B2 (en) 2002-02-25 2003-02-05 Shared bypass bus structure

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/358,745 Expired - Fee Related US7047374B2 (en) 2002-02-25 2003-02-05 Memory read/write reordering

Family Applications After (1)

Application Number Title Priority Date Filing Date
US10/358,568 Expired - Fee Related US6912612B2 (en) 2002-02-25 2003-02-05 Shared bypass bus structure

Country Status (1)

Country Link
US (3) US7047374B2 (en)

Cited By (129)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030101312A1 (en) * 2001-11-26 2003-05-29 Doan Trung T. Machine state storage apparatus and method
US20030229770A1 (en) * 2002-06-07 2003-12-11 Jeddeloh Joseph M. Memory hub with internal cache and/or memory access prediction
US20040024978A1 (en) * 2002-08-05 2004-02-05 Jeddeloh Joseph M. Memory hub and access method having internal row caching
US20040024959A1 (en) * 2002-08-02 2004-02-05 Taylor George R. System and method for optically interconnecting memory devices
US20040028412A1 (en) * 2002-08-09 2004-02-12 Tim Murphy System and method for multiple bit optical data transmission in memory systems
US20040034753A1 (en) * 2002-08-16 2004-02-19 Jeddeloh Joseph M. Memory hub bypass circuit and method
US20040044833A1 (en) * 2002-08-29 2004-03-04 Ryan Kevin J. System and method for optimizing interconnections of memory devices in a multichip module
US20040047169A1 (en) * 2002-09-09 2004-03-11 Lee Terry R. Wavelength division multiplexed memory module, memory system and method
US20040193771A1 (en) * 2003-03-31 2004-09-30 Ebner Sharon M. Method, apparatus, and system for processing a plurality of outstanding data requests
US20040210722A1 (en) * 2003-04-21 2004-10-21 Sharma Debendra Das Directory-based coherency scheme for reducing memory bandwidth loss
US20040243743A1 (en) * 2003-05-30 2004-12-02 Brian Smith History FIFO with bypass
US20040251929A1 (en) * 2003-06-11 2004-12-16 Pax George E. Memory module and method having improved signal routing topology
US20040260864A1 (en) * 2003-06-19 2004-12-23 Lee Terry R. Reconfigurable memory module and method
US20040260909A1 (en) * 2003-06-20 2004-12-23 Lee Terry R. Memory hub and access method having internal prefetch buffers
US20040260891A1 (en) * 2003-06-20 2004-12-23 Jeddeloh Joseph M. Posted write buffers and methods of posting write requests in memory modules
US20040260957A1 (en) * 2003-06-20 2004-12-23 Jeddeloh Joseph M. System and method for selective memory module power management
US6842827B2 (en) 2002-01-02 2005-01-11 Intel Corporation Cache coherency arrangement to enhance inbound bandwidth
US20050021884A1 (en) * 2003-07-22 2005-01-27 Jeddeloh Joseph M. Apparatus and method for direct memory access in a hub-based memory system
US20050030313A1 (en) * 2000-06-23 2005-02-10 William Radke Apparatus and method for distributed memory control in a graphics processing system
US20050044304A1 (en) * 2003-08-20 2005-02-24 Ralph James Method and system for capturing and bypassing memory transactions in a hub-based memory system
US20050050255A1 (en) * 2003-08-28 2005-03-03 Jeddeloh Joseph M. Multiple processor system and method including multiple memory hub modules
US20050060600A1 (en) * 2003-09-12 2005-03-17 Jeddeloh Joseph M. System and method for on-board timing margin testing of memory modules
US20050066137A1 (en) * 2002-08-29 2005-03-24 Jeddeloh Joseph M. Method and system for controlling memory accesses to memory modules having a memory hub architecture
US20050086441A1 (en) * 2003-10-20 2005-04-21 Meyer James W. Arbitration system and method for memory responses in a hub-based memory system
US20050108450A1 (en) * 2003-11-18 2005-05-19 Hirofumi Sahara Information processing system and method
US20050149774A1 (en) * 2003-12-29 2005-07-07 Jeddeloh Joseph M. System and method for read synchronization of memory modules
US20050146943A1 (en) * 2003-08-28 2005-07-07 Jeddeloh Joseph M. Memory module and method having on-board data search capabilities and processor-based system using such memory modules
US20050172084A1 (en) * 2004-01-30 2005-08-04 Jeddeloh Joseph M. Buffer control system and method for a memory system having memory request buffers
US20050177690A1 (en) * 2004-02-05 2005-08-11 Laberge Paul A. Dynamic command and/or address mirroring system and method for memory modules
US20050216648A1 (en) * 2004-03-25 2005-09-29 Jeddeloh Joseph M System and method for memory hub-based expansion bus
US20050213571A1 (en) * 2004-03-29 2005-09-29 Zarlink Semiconductor Inc. Compact packet switching node storage architecture employing double data rate synchronous dynamic RAM
US20050216677A1 (en) * 2004-03-24 2005-09-29 Jeddeloh Joseph M Memory arbitration system and method having an arbitration packet protocol
US20050213611A1 (en) * 2004-03-29 2005-09-29 Ralph James Method and system for synchronizing communications links in a hub-based memory system
US20050218956A1 (en) * 2004-04-05 2005-10-06 Laberge Paul A Delay line synchronizer apparatus and method
US20050228939A1 (en) * 2004-04-08 2005-10-13 Janzen Jeffery W System and method for optimizing interconnections of components in a multichip memory module
US20050240736A1 (en) * 2004-04-23 2005-10-27 Mark Shaw System and method for coherency filtering
US20050243829A1 (en) * 2002-11-11 2005-11-03 Clearspeed Technology Pic Traffic management architecture
US20050257005A1 (en) * 2004-05-14 2005-11-17 Jeddeloh Joseph M Memory hub and method for memory sequencing
US20050257021A1 (en) * 2004-05-17 2005-11-17 Ralph James System and method for communicating the synchronization status of memory modules during initialization of the memory modules
US20050268060A1 (en) * 2004-05-28 2005-12-01 Cronin Jeffrey J Method and system for terminating write commands in a hub-based memory system
US20050283681A1 (en) * 2004-06-04 2005-12-22 Jeddeloh Joseph M Memory hub tester interface and method for use thereof
US20050286506A1 (en) * 2004-06-04 2005-12-29 Laberge Paul A System and method for an asynchronous data buffer having buffer write and read pointers
US20060047891A1 (en) * 2004-08-31 2006-03-02 Ralph James System and method for transmitting data packets in a computer system having a memory hub architecture
US20060146864A1 (en) * 2004-12-30 2006-07-06 Rosenbluth Mark B Flexible use of compute allocation in a multi-threaded compute engines
US20060168407A1 (en) * 2005-01-26 2006-07-27 Micron Technology, Inc. Memory hub system and method having large virtual page size
US20060179238A1 (en) * 2005-02-10 2006-08-10 Griswell John B Jr Store stream prefetching in a microprocessor
US20060179239A1 (en) * 2005-02-10 2006-08-10 Fluhr Eric J Data stream prefetching in a microprocessor
US20060200620A1 (en) * 2003-09-18 2006-09-07 Schnepper Randy L Memory hub with integrated non-volatile memory
US20060206766A1 (en) * 2003-08-19 2006-09-14 Jeddeloh Joseph M System and method for on-board diagnostics of memory modules
US20060221961A1 (en) * 2005-04-01 2006-10-05 International Business Machines Corporation Network communications for operating system partitions
US20060221977A1 (en) * 2005-04-01 2006-10-05 International Business Machines Corporation Method and apparatus for providing a network connection table
US20070033369A1 (en) * 2005-08-02 2007-02-08 Fujitsu Limited Reconfigurable integrated circuit device
US20070047584A1 (en) * 2005-08-24 2007-03-01 Spink Aaron T Interleaving data packets in a packet-based communication system
US20070110088A1 (en) * 2005-11-12 2007-05-17 Liquid Computing Corporation Methods and systems for scalable interconnect
US20070168536A1 (en) * 2006-01-17 2007-07-19 International Business Machines Corporation Network protocol stack isolation
US20070248111A1 (en) * 2006-04-24 2007-10-25 Shaw Mark E System and method for clearing information in a stalled output queue of a crossbar
US20070283100A1 (en) * 2006-05-30 2007-12-06 Kabushiki Kaisha Toshiba Cache memory device and caching method
US20080089358A1 (en) * 2005-04-01 2008-04-17 International Business Machines Corporation Configurable ports for a host ethernet adapter
US20080109637A1 (en) * 2006-11-03 2008-05-08 Cornell Research Foundation, Inc. Systems and methods for reconfigurably multiprocessing
US20080162769A1 (en) * 2006-12-31 2008-07-03 Texas Instrument Incorporated Systems and Methods for Improving Data Transfer between Devices
US20080294862A1 (en) * 2004-02-05 2008-11-27 Micron Technology, Inc. Arbitration system having a packet memory and method for memory responses in a hub-based memory system
US20080291824A1 (en) * 2007-05-21 2008-11-27 Kendall Kris M Reassigning Virtual Lane Buffer Allocation During Initialization to Maximize IO Performance
US20080317027A1 (en) * 2005-04-01 2008-12-25 International Business Machines Corporation System for reducing latency in a host ethernet adapter (hea)
US7492771B2 (en) 2005-04-01 2009-02-17 International Business Machines Corporation Method for performing a packet header lookup
US7586936B2 (en) 2005-04-01 2009-09-08 International Business Machines Corporation Host Ethernet adapter for networking offload in server environment
US7606166B2 (en) 2005-04-01 2009-10-20 International Business Machines Corporation System and method for computing a blind checksum in a host ethernet adapter (HEA)
US20090300293A1 (en) * 2008-05-30 2009-12-03 Advanced Micro Devices, Inc. Dynamically Partitionable Cache
WO2009145888A1 (en) * 2008-05-29 2009-12-03 Advanced Micro Devices, Inc. Dynamically partitionable cache
US7706409B2 (en) 2005-04-01 2010-04-27 International Business Machines Corporation System and method for parsing, filtering, and computing the checksum in a host Ethernet adapter (HEA)
US7788451B2 (en) 2004-02-05 2010-08-31 Micron Technology, Inc. Apparatus and method for data bypass for a bi-directional data bus in a hub-based memory sub-system
US7809883B1 (en) * 2007-10-16 2010-10-05 Netapp, Inc. Cached reads for a storage system
US20100257320A1 (en) * 2009-04-07 2010-10-07 International Business Machines Corporation Cache Replacement Policy
US20110035530A1 (en) * 2009-08-10 2011-02-10 Fujitsu Limited Network system, information processing apparatus, and control method for network system
US7903687B2 (en) 2005-04-01 2011-03-08 International Business Machines Corporation Method for scheduling, writing, and reading data inside the partitioned buffer of a switch, router or packet processing device
US20110099333A1 (en) * 2007-12-31 2011-04-28 Eric Sprangle Mechanism for effectively caching streaming and non-streaming data patterns
US20110113199A1 (en) * 2009-11-09 2011-05-12 Tang Puqi P Prefetch optimization in shared resource multi-core systems
US8225188B2 (en) 2005-04-01 2012-07-17 International Business Machines Corporation Apparatus for blind checksum and correction for network transmissions
US20120259953A1 (en) * 1999-01-22 2012-10-11 Network Disk, Inc. Data Storage and Data Sharing in a Network of Heterogeneous Computers
US20120331106A1 (en) * 2011-06-24 2012-12-27 General Instrument Corporation Intelligent buffering of media streams delivered over internet
US20130111095A1 (en) * 2004-02-13 2013-05-02 Sharad Mehrotra Multi-chassis fabric-backplane enterprise servers
US20130205092A1 (en) * 2012-02-06 2013-08-08 Empire Technology Development Llc Multicore computer system with cache use based adaptive scheduling
US8621157B2 (en) 2011-06-13 2013-12-31 Advanced Micro Devices, Inc. Cache prefetching from non-uniform memories
US8713295B2 (en) 2004-07-12 2014-04-29 Oracle International Corporation Fabric-backplane enterprise servers with pluggable I/O sub-system
US8743872B2 (en) 2004-02-13 2014-06-03 Oracle International Corporation Storage traffic communication via a switch fabric in accordance with a VLAN
US8775764B2 (en) 2004-03-08 2014-07-08 Micron Technology, Inc. Memory hub architecture having programmable lane widths
US8848727B2 (en) 2004-02-13 2014-09-30 Oracle International Corporation Hierarchical transport protocol stack for data transfer between enterprise servers
US8856452B2 (en) 2011-05-31 2014-10-07 Illinois Institute Of Technology Timing-aware data prefetching for microprocessors
US20140310477A1 (en) * 2013-04-12 2014-10-16 International Business Machines Corporation Modification of prefetch depth based on high latency event
US8868790B2 (en) 2004-02-13 2014-10-21 Oracle International Corporation Processor-memory module performance acceleration in fabric-backplane enterprise servers
TWI502357B (en) * 2009-08-11 2015-10-01 Via Tech Inc Method and apparatus for pre-fetching data, and computer system
US20170123880A1 (en) * 2014-02-26 2017-05-04 Microsoft Technology Licensing, Llc Service metric analysis from structured logging schema of usage data
WO2017172235A1 (en) 2016-04-01 2017-10-05 Intel Corporation Technologies for quality of service based throttling in fabric architectures
US20180018267A1 (en) * 2014-12-23 2018-01-18 Intel Corporation Speculative reads in buffered memory
US10331583B2 (en) * 2013-09-26 2019-06-25 Intel Corporation Executing distributed memory operations using processing elements connected by distributed channels
US10380063B2 (en) 2017-09-30 2019-08-13 Intel Corporation Processors, methods, and systems with a configurable spatial accelerator having a sequencer dataflow operator
US10387319B2 (en) 2017-07-01 2019-08-20 Intel Corporation Processors, methods, and systems for a configurable spatial accelerator with memory system performance, power reduction, and atomics support features
US10402168B2 (en) 2016-10-01 2019-09-03 Intel Corporation Low energy consumption mantissa multiplication for floating point multiply-add operations
US10416999B2 (en) 2016-12-30 2019-09-17 Intel Corporation Processors, methods, and systems with a configurable spatial accelerator
US10417175B2 (en) 2017-12-30 2019-09-17 Intel Corporation Apparatus, methods, and systems for memory consistency in a configurable spatial accelerator
US10445250B2 (en) 2017-12-30 2019-10-15 Intel Corporation Apparatus, methods, and systems with a configurable spatial accelerator
US10445451B2 (en) 2017-07-01 2019-10-15 Intel Corporation Processors, methods, and systems for a configurable spatial accelerator with performance, correctness, and power reduction features
US10445234B2 (en) 2017-07-01 2019-10-15 Intel Corporation Processors, methods, and systems for a configurable spatial accelerator with transactional and replay features
US10445098B2 (en) 2017-09-30 2019-10-15 Intel Corporation Processors and methods for privileged configuration in a spatial array
US10459866B1 (en) 2018-06-30 2019-10-29 Intel Corporation Apparatuses, methods, and systems for integrated control and data processing in a configurable spatial accelerator
US10467183B2 (en) 2017-07-01 2019-11-05 Intel Corporation Processors and methods for pipelined runtime services in a spatial array
US10469397B2 (en) 2017-07-01 2019-11-05 Intel Corporation Processors and methods with configurable network-based dataflow operator circuits
US10474375B2 (en) 2016-12-30 2019-11-12 Intel Corporation Runtime address disambiguation in acceleration hardware
US10496574B2 (en) 2017-09-28 2019-12-03 Intel Corporation Processors, methods, and systems for a memory fence in a configurable spatial accelerator
US10515046B2 (en) 2017-07-01 2019-12-24 Intel Corporation Processors, methods, and systems with a configurable spatial accelerator
US10515049B1 (en) 2017-07-01 2019-12-24 Intel Corporation Memory circuits and methods for distributed memory hazard detection and error recovery
US10558575B2 (en) 2016-12-30 2020-02-11 Intel Corporation Processors, methods, and systems with a configurable spatial accelerator
US10565134B2 (en) 2017-12-30 2020-02-18 Intel Corporation Apparatus, methods, and systems for multicast in a configurable spatial accelerator
US10564980B2 (en) 2018-04-03 2020-02-18 Intel Corporation Apparatus, methods, and systems for conditional queues in a configurable spatial accelerator
US10572376B2 (en) 2016-12-30 2020-02-25 Intel Corporation Memory ordering in acceleration hardware
US10613764B2 (en) 2017-11-20 2020-04-07 Advanced Micro Devices, Inc. Speculative hint-triggered activation of pages in memory
US10678724B1 (en) 2018-12-29 2020-06-09 Intel Corporation Apparatuses, methods, and systems for in-network storage in a configurable spatial accelerator
US10817291B2 (en) 2019-03-30 2020-10-27 Intel Corporation Apparatuses, methods, and systems for swizzle operations in a configurable spatial accelerator
US10831661B2 (en) 2019-04-10 2020-11-10 International Business Machines Corporation Coherent cache with simultaneous data requests in same addressable index
US10853073B2 (en) 2018-06-30 2020-12-01 Intel Corporation Apparatuses, methods, and systems for conditional operations in a configurable spatial accelerator
US10891240B2 (en) 2018-06-30 2021-01-12 Intel Corporation Apparatus, methods, and systems for low latency communication in a configurable spatial accelerator
US10915471B2 (en) 2019-03-30 2021-02-09 Intel Corporation Apparatuses, methods, and systems for memory interface circuit allocation in a configurable spatial accelerator
US10942737B2 (en) 2011-12-29 2021-03-09 Intel Corporation Method, device and system for control signalling in a data path module of a data stream processing engine
US10965536B2 (en) 2019-03-30 2021-03-30 Intel Corporation Methods and apparatus to insert buffers in a dataflow graph
US11029927B2 (en) 2019-03-30 2021-06-08 Intel Corporation Methods and apparatus to detect and annotate backedges in a dataflow graph
US11037050B2 (en) 2019-06-29 2021-06-15 Intel Corporation Apparatuses, methods, and systems for memory interface circuit arbitration in a configurable spatial accelerator
US11086816B2 (en) 2017-09-28 2021-08-10 Intel Corporation Processors, methods, and systems for debugging a configurable spatial accelerator
US11200186B2 (en) 2018-06-30 2021-12-14 Intel Corporation Apparatuses, methods, and systems for operations in a configurable spatial accelerator
US11307873B2 (en) 2018-04-03 2022-04-19 Intel Corporation Apparatus, methods, and systems for unstructured data flow in a configurable spatial accelerator with predicate propagation and merging
US11907713B2 (en) 2019-12-28 2024-02-20 Intel Corporation Apparatuses, methods, and systems for fused operations using sign modification in a processing element of a configurable spatial accelerator

Families Citing this family (180)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4093741B2 (en) * 2001-10-03 2008-06-04 Sharp Corporation External memory control device and data driven information processing device including the same
US6829689B1 (en) * 2002-02-12 2004-12-07 Nvidia Corporation Method and system for memory access arbitration for minimizing read/write turnaround penalties
DE60211874T2 (en) * 2002-06-20 2007-05-24 Infineon Technologies Ag Arrangement of two devices connected by a crossover switch
US6970872B1 (en) * 2002-07-23 2005-11-29 Oracle International Corporation Techniques for reducing latency in a multi-node system when obtaining a resource that does not reside in cache
US7526595B2 (en) * 2002-07-25 2009-04-28 International Business Machines Corporation Data path master/slave data processing device apparatus and method
US20040059858A1 (en) * 2002-09-23 2004-03-25 Blankenship Robert G. Methods and arrangements to enhance a downbound path
US7062610B2 (en) * 2002-09-30 2006-06-13 Advanced Micro Devices, Inc. Method and apparatus for reducing overhead in a data processing system with a cache
US7254525B2 (en) * 2002-10-17 2007-08-07 Hitachi Global Storage Technologies Netherlands B.V. Method and apparatus for automated analysis of hard disk drive performance
US8185602B2 (en) 2002-11-05 2012-05-22 Newisys, Inc. Transaction processing using multiple protocol engines in systems having multiple multi-processor clusters
US7093079B2 (en) * 2002-12-17 2006-08-15 Intel Corporation Snoop filter bypass
US7155572B2 (en) * 2003-01-27 2006-12-26 Advanced Micro Devices, Inc. Method and apparatus for injecting write data into a cache
US7024510B2 (en) * 2003-03-17 2006-04-04 Hewlett-Packard Development Company, L.P. Supporting a host-to-input/output (I/O) bridge
US7069387B2 (en) * 2003-03-31 2006-06-27 Sun Microsystems, Inc. Optimized cache structure for multi-texturing
US7284014B2 (en) 2003-04-07 2007-10-16 Hitachi, Ltd. Pre-fetch computer system
US7398359B1 (en) * 2003-04-30 2008-07-08 Silicon Graphics, Inc. System and method for performing memory operations in a computing system
US7334102B1 (en) 2003-05-09 2008-02-19 Advanced Micro Devices, Inc. Apparatus and method for balanced spinlock support in NUMA systems
JP3973597B2 (en) * 2003-05-14 2007-09-12 Sony Computer Entertainment Inc. Prefetch instruction control method, prefetch instruction control device, cache memory control device, object code generation method and device
US20040267919A1 (en) * 2003-06-30 2004-12-30 International Business Machines Corporation Method and system for providing server management peripheral caching using a shared bus
US7028130B2 (en) * 2003-08-14 2006-04-11 Texas Instruments Incorporated Generating multiple traffic classes on a PCI Express fabric from PCI devices
US8463996B2 (en) * 2003-08-19 2013-06-11 Oracle America, Inc. Multi-core multi-thread processor crossbar architecture
US7822105B2 (en) * 2003-09-02 2010-10-26 Sirf Technology, Inc. Cross-correlation removal of carrier wave jamming signals
KR20070019940A (en) 2003-09-02 2007-02-16 Sirf Technology, Inc. Control and features for satellite positioning system receivers
US7574341B1 (en) * 2003-11-12 2009-08-11 Hewlett-Packard Development Company, L.P. Speculative expectation based event verification
US7904663B2 (en) * 2003-12-18 2011-03-08 International Business Machines Corporation Secondary path for coherency controller to interconnection network(s)
US7363427B2 (en) * 2004-01-12 2008-04-22 Hewlett-Packard Development Company, L.P. Memory controller connection to RAM using buffer interface
US7165131B2 (en) * 2004-04-27 2007-01-16 Intel Corporation Separating transactions into different virtual channels
US7143221B2 (en) * 2004-06-08 2006-11-28 Arm Limited Method of arbitrating between a plurality of transfers to be routed over a corresponding plurality of paths provided by an interconnect circuit of a data processing apparatus
CA2509001A1 (en) * 2004-06-22 2005-12-22 Textron Inc. Blind bolt installation tool
US8904458B2 (en) * 2004-07-29 2014-12-02 At&T Intellectual Property I, L.P. System and method for pre-caching a first portion of a video file on a set-top box
JP2006079394A (en) * 2004-09-10 2006-03-23 Renesas Technology Corp Data processor
KR100671234B1 (en) * 2004-10-07 2007-01-18 Electronics and Telecommunications Research Institute Communication apparatus using the transmission medium and a method for the same
US7380052B2 (en) * 2004-11-18 2008-05-27 International Business Machines Corporation Reuse of functional data buffers for pattern buffers in XDR DRAM
US7360008B2 (en) * 2004-12-30 2008-04-15 Intel Corporation Enforcing global ordering through a caching bridge in a multicore multiprocessor system
US7370133B2 (en) * 2005-01-20 2008-05-06 International Business Machines Corporation Storage controller and methods for using the same
US7243194B2 (en) * 2005-02-09 2007-07-10 International Business Machines Corporation Method to preserve ordering of read and write operations in a DMA system by delaying read access
US7404046B2 (en) * 2005-02-10 2008-07-22 International Business Machines Corporation Cache memory, processing unit, data processing system and method for filtering snooped operations
US7415030B2 (en) * 2005-02-10 2008-08-19 International Business Machines Corporation Data processing system, method and interconnect fabric having an address-based launch governor
US20060200597A1 (en) * 2005-03-03 2006-09-07 Christenson Bruce A Method, system, and apparatus for memory controller utilization of an AMB write FIFO to improve FBD memory channel efficiency
JP2006251923A (en) * 2005-03-08 2006-09-21 Oki Electric Ind Co Ltd Look-ahead control method
US20060236039A1 (en) * 2005-04-19 2006-10-19 International Business Machines Corporation Method and apparatus for synchronizing shared data between components in a group
US7721011B1 (en) * 2005-05-09 2010-05-18 Oracle America, Inc. Method and apparatus for reordering memory accesses to reduce power consumption in computer systems
US7716388B2 (en) * 2005-05-13 2010-05-11 Texas Instruments Incorporated Command re-ordering in hub interface unit based on priority
WO2007002717A2 (en) * 2005-06-27 2007-01-04 Arithmosys, Inc. Specifying stateful, transaction-oriented systems and apparatus for flexible mapping
US7624221B1 (en) * 2005-08-01 2009-11-24 Nvidia Corporation Control device for data stream optimizations in a link interface
US8307147B2 (en) * 2005-09-09 2012-11-06 Freescale Semiconductor, Inc. Interconnect and a method for designing an interconnect
US20070073979A1 (en) * 2005-09-29 2007-03-29 Benjamin Tsien Snoop processing for multi-processor computing system
US20070094432A1 (en) * 2005-10-24 2007-04-26 Silicon Integrated Systems Corp. Request transmission mechanism and method thereof
CA2562634A1 (en) * 2005-11-28 2007-05-28 Tundra Semiconductor Corporation Method and switch for broadcasting packets
US8341360B2 (en) 2005-12-30 2012-12-25 Intel Corporation Method and apparatus for memory write performance optimization in architectures with out-of-order read/request-for-ownership response
JP4297969B2 (en) * 2006-02-24 2009-07-15 Fujitsu Limited Recording control apparatus and recording control method
US7617368B2 (en) * 2006-06-14 2009-11-10 Nvidia Corporation Memory interface with independent arbitration of precharge, activate, and read/write
JP4829038B2 (en) * 2006-08-17 2011-11-30 Fujitsu Limited Multiprocessor system
US8285893B2 (en) * 2006-10-13 2012-10-09 Dell Products L.P. System and method for adaptively setting connections to input/output hubs within an information handling system
KR100773445B1 (en) * 2006-11-22 2007-11-05 Samsung Electronics Co., Ltd. Apparatus for transmitting and receiving data of wireless local network and method using the same
US20080168161A1 (en) * 2007-01-10 2008-07-10 International Business Machines Corporation Systems and methods for managing faults within a high speed network employing wide ports
US20080168302A1 (en) * 2007-01-10 2008-07-10 International Business Machines Corporation Systems and methods for diagnosing faults in a multiple domain storage system
US20080270638A1 (en) * 2007-04-30 2008-10-30 International Business Machines Corporation Systems and methods for monitoring high speed network traffic via simultaneously multiplexed data streams
US7936767B2 (en) * 2007-04-30 2011-05-03 International Business Machines Corporation Systems and methods for monitoring high speed network traffic via sequentially multiplexed data streams
US7827391B2 (en) * 2007-06-26 2010-11-02 International Business Machines Corporation Method and apparatus for single-stepping coherence events in a multiprocessor system under software control
US8509255B2 (en) 2007-06-26 2013-08-13 International Business Machines Corporation Hardware packet pacing using a DMA in a parallel computer
US8230433B2 (en) 2007-06-26 2012-07-24 International Business Machines Corporation Shared performance monitor in a multiprocessor system
US8103832B2 (en) * 2007-06-26 2012-01-24 International Business Machines Corporation Method and apparatus of prefetching streams of varying prefetch depth
US8458282B2 (en) 2007-06-26 2013-06-04 International Business Machines Corporation Extended write combining using a write continuation hint flag
US8010875B2 (en) 2007-06-26 2011-08-30 International Business Machines Corporation Error correcting code with chip kill capability and power saving enhancement
US8468416B2 (en) 2007-06-26 2013-06-18 International Business Machines Corporation Combined group ECC protection and subgroup parity protection
US7793038B2 (en) 2007-06-26 2010-09-07 International Business Machines Corporation System and method for programmable bank selection for banked memory subsystems
US7886084B2 (en) 2007-06-26 2011-02-08 International Business Machines Corporation Optimized collectives using a DMA on a parallel computer
US8108738B2 (en) 2007-06-26 2012-01-31 International Business Machines Corporation Data eye monitor method and apparatus
US8140925B2 (en) 2007-06-26 2012-03-20 International Business Machines Corporation Method and apparatus to debug an integrated circuit chip via synchronous clock stop and scan
US8032892B2 (en) * 2007-06-26 2011-10-04 International Business Machines Corporation Message passing with a limited number of DMA byte counters
US7877551B2 (en) * 2007-06-26 2011-01-25 International Business Machines Corporation Programmable partitioning for high-performance coherence domains in a multiprocessor system
US7802025B2 (en) 2007-06-26 2010-09-21 International Business Machines Corporation DMA engine for repeating communication patterns
US8756350B2 (en) 2007-06-26 2014-06-17 International Business Machines Corporation Method and apparatus for efficiently tracking queue entries relative to a timestamp
US7984448B2 (en) * 2007-06-26 2011-07-19 International Business Machines Corporation Mechanism to support generic collective communication across a variety of programming models
CN101335883B (en) * 2007-06-29 2011-01-12 International Business Machines Corporation Method and apparatus for processing video stream in digital video broadcast system
US20090077325A1 (en) * 2007-09-19 2009-03-19 On Demand Microelectronics Method and arrangements for memory access
US20100042751A1 (en) * 2007-11-09 2010-02-18 Kouichi Ishino Data transfer control device, data transfer device, data transfer control method, and semiconductor integrated circuit using reconfigured circuit
US7870351B2 (en) * 2007-11-15 2011-01-11 Micron Technology, Inc. System, apparatus, and method for modifying the order of memory accesses
US8255635B2 (en) * 2008-02-01 2012-08-28 International Business Machines Corporation Claiming coherency ownership of a partial cache line of data
US8250307B2 (en) * 2008-02-01 2012-08-21 International Business Machines Corporation Sourcing differing amounts of prefetch data in response to data prefetch requests
US8140771B2 (en) * 2008-02-01 2012-03-20 International Business Machines Corporation Partial cache line storage-modifying operation based upon a hint
US8117401B2 (en) * 2008-02-01 2012-02-14 International Business Machines Corporation Interconnect operation indicating acceptability of partial data delivery
US8266381B2 (en) * 2008-02-01 2012-09-11 International Business Machines Corporation Varying an amount of data retrieved from memory based upon an instruction hint
US20090198910A1 (en) * 2008-02-01 2009-08-06 Arimilli Ravi K Data processing system, processor and method that support a touch of a partial cache line of data
US8108619B2 (en) * 2008-02-01 2012-01-31 International Business Machines Corporation Cache management for partial cache line operations
US9032113B2 (en) 2008-03-27 2015-05-12 Apple Inc. Clock control for DMA busses
JP5414209B2 (en) * 2008-06-30 2014-02-12 Canon Inc. Memory controller and control method thereof
US8108584B2 (en) * 2008-10-15 2012-01-31 Intel Corporation Use of completer knowledge of memory region ordering requirements to modify transaction attributes
TWI385672B (en) * 2008-11-05 2013-02-11 Lite On It Corp Adaptive multi-channel controller and method for storage device
US8601205B1 (en) * 2008-12-31 2013-12-03 Synopsys, Inc. Dynamic random access memory controller
KR101250666B1 (en) * 2009-01-30 2013-04-03 Fujitsu Limited Information processing system, information processing device, control method for information processing device, and computer-readable recording medium
US8117390B2 (en) * 2009-04-15 2012-02-14 International Business Machines Corporation Updating partial cache lines in a data processing system
US8140759B2 (en) * 2009-04-16 2012-03-20 International Business Machines Corporation Specifying an access hint for prefetching partial cache block data in a cache hierarchy
US8199759B2 (en) * 2009-05-29 2012-06-12 Intel Corporation Method and apparatus for enabling ID based streams over PCI express
US20100332762A1 (en) * 2009-06-30 2010-12-30 Moga Adrian C Directory cache allocation based on snoop response information
US8665601B1 (en) 2009-09-04 2014-03-04 Bitmicro Networks, Inc. Solid state drive with improved enclosure assembly
US8447908B2 (en) 2009-09-07 2013-05-21 Bitmicro Networks, Inc. Multilevel memory bus system for solid-state mass storage
US8560804B2 (en) 2009-09-14 2013-10-15 Bitmicro Networks, Inc. Reducing erase cycles in an electronic storage device that uses at least one erase-limited memory device
US8392665B2 (en) 2010-09-25 2013-03-05 Intel Corporation Allocation and write policy for a glueless area-efficient directory cache for hotly contested cache lines
EP2442231A1 (en) * 2010-09-29 2012-04-18 STMicroelectronics (Grenoble 2) SAS Reordering arrangement
EP2444903A1 (en) 2010-09-29 2012-04-25 STMicroelectronics (Grenoble 2) SAS A transaction reordering arrangement
US8649609B1 (en) 2011-03-24 2014-02-11 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Field programmable gate array apparatus, method, and computer program
US8544029B2 (en) * 2011-05-24 2013-09-24 International Business Machines Corporation Implementing storage adapter performance optimization with chained hardware operations minimizing hardware/firmware interactions
US8930641B1 (en) 2011-06-14 2015-01-06 Altera Corporation Systems and methods for providing memory controllers with scheduler bypassing capabilities
US8635411B2 (en) 2011-07-18 2014-01-21 Arm Limited Data processing apparatus and method for managing coherency of cached data
US9176913B2 (en) * 2011-09-07 2015-11-03 Apple Inc. Coherence switch for I/O traffic
US9372755B1 (en) 2011-10-05 2016-06-21 Bitmicro Networks, Inc. Adaptive power cycle sequences for data recovery
US9632954B2 (en) 2011-11-07 2017-04-25 International Business Machines Corporation Memory queue handling techniques for reducing impact of high-latency memory operations
US9256564B2 (en) 2012-01-17 2016-02-09 Qualcomm Incorporated Techniques for improving throughput and performance of a distributed interconnect peripheral bus
US8909874B2 (en) 2012-02-13 2014-12-09 International Business Machines Corporation Memory reorder queue biasing preceding high latency operations
US9043669B1 (en) 2012-05-18 2015-05-26 Bitmicro Networks, Inc. Distributed ECC engine for storage media
ITTO20120470A1 (en) 2012-05-30 2013-12-01 St Microelectronics Srl Method for managing access transactions and related system
US9262318B1 (en) * 2013-03-13 2016-02-16 Marvell International Ltd. Serial flash XIP with caching mechanism for fast program execution in embedded systems
US9423457B2 (en) 2013-03-14 2016-08-23 Bitmicro Networks, Inc. Self-test solution for delay locked loops
US9734067B1 (en) 2013-03-15 2017-08-15 Bitmicro Networks, Inc. Write buffering
US9971524B1 (en) 2013-03-15 2018-05-15 Bitmicro Networks, Inc. Scatter-gather approach for parallel data transfer in a mass storage system
US10489318B1 (en) 2013-03-15 2019-11-26 Bitmicro Networks, Inc. Scatter-gather approach for parallel data transfer in a mass storage system
US9875205B1 (en) 2013-03-15 2018-01-23 Bitmicro Networks, Inc. Network of memory systems
US9916213B1 (en) 2013-03-15 2018-03-13 Bitmicro Networks, Inc. Bus arbitration with routing and failover mechanism
US10120694B2 (en) 2013-03-15 2018-11-06 Bitmicro Networks, Inc. Embedded system boot from a storage device
US9720603B1 (en) * 2013-03-15 2017-08-01 Bitmicro Networks, Inc. IOC to IOC distributed caching architecture
US9430386B2 (en) 2013-03-15 2016-08-30 Bitmicro Networks, Inc. Multi-leveled cache management in a hybrid storage system
US9934045B1 (en) 2013-03-15 2018-04-03 Bitmicro Networks, Inc. Embedded system boot from a storage device
US9842024B1 (en) 2013-03-15 2017-12-12 Bitmicro Networks, Inc. Flash electronic disk with RAID controller
US9501436B1 (en) 2013-03-15 2016-11-22 Bitmicro Networks, Inc. Multi-level message passing descriptor
US9672178B1 (en) 2013-03-15 2017-06-06 Bitmicro Networks, Inc. Bit-mapped DMA transfer with dependency table configured to monitor status so that a processor is not rendered as a bottleneck in a system
US9400617B2 (en) 2013-03-15 2016-07-26 Bitmicro Networks, Inc. Hardware-assisted DMA transfer with dependency table configured to permit-in parallel-data drain from cache without processor intervention when filled or drained
US9798688B1 (en) 2013-03-15 2017-10-24 Bitmicro Networks, Inc. Bus arbitration with routing and failover mechanism
KR102214511B1 (en) 2014-02-17 2021-02-09 Samsung Electronics Co., Ltd. Data storage device for filtering page using 2-steps, system having the same, and operation method thereof
US9600277B2 (en) 2014-02-21 2017-03-21 International Business Machines Corporation Asynchronous cleanup after a peer-to-peer remote copy (PPRC) terminate relationship operation
US9535610B2 (en) * 2014-02-21 2017-01-03 International Business Machines Corporation Optimizing peer-to-peer remote copy (PPRC) transfers for partial write operations using a modified sectors bitmap
US9507527B2 (en) 2014-02-21 2016-11-29 International Business Machines Corporation Efficient cache management of multi-target peer-to-peer remote copy (PPRC) modified sectors bitmap
CN103870400B (en) * 2014-03-06 2018-11-30 Huawei Technologies Co., Ltd. Voltage adjustment method, apparatus and system for a supercapacitor
US10078604B1 (en) 2014-04-17 2018-09-18 Bitmicro Networks, Inc. Interrupt coalescing
US10025736B1 (en) 2014-04-17 2018-07-17 Bitmicro Networks, Inc. Exchange message protocol message transmission between two devices
US9952991B1 (en) 2014-04-17 2018-04-24 Bitmicro Networks, Inc. Systematic method on queuing of descriptors for multiple flash intelligent DMA engine operation
US10042792B1 (en) 2014-04-17 2018-08-07 Bitmicro Networks, Inc. Method for transferring and receiving frames across PCI express bus for SSD device
US9811461B1 (en) 2014-04-17 2017-11-07 Bitmicro Networks, Inc. Data storage system
US10055150B1 (en) 2014-04-17 2018-08-21 Bitmicro Networks, Inc. Writing volatile scattered memory metadata to flash device
US20150331608A1 (en) * 2014-05-16 2015-11-19 Samsung Electronics Co., Ltd. Electronic system with transactions and method of operation thereof
JP5936152B2 (en) * 2014-05-17 2016-06-15 International Business Machines Corporation Memory access trace method
CN109376123B (en) * 2014-08-12 2022-08-19 Huawei Technologies Co., Ltd. Method for managing files, distributed storage system and management node
US9448937B1 (en) * 2014-08-18 2016-09-20 Xilinx, Inc. Cache coherency
US9665280B2 (en) * 2014-09-30 2017-05-30 International Business Machines Corporation Cache coherency verification using ordered lists
US10528253B2 (en) 2014-11-05 2020-01-07 International Business Machines Corporation Increased bandwidth of ordered stores in a non-uniform memory subsystem
US10013385B2 (en) 2014-11-13 2018-07-03 Cavium, Inc. Programmable validation of transaction requests
US9569362B2 (en) * 2014-11-13 2017-02-14 Cavium, Inc. Programmable ordering and prefetch
US9977750B2 (en) * 2014-12-12 2018-05-22 Nxp Usa, Inc. Coherent memory interleaving with uniform latency
US9974176B2 (en) 2015-07-10 2018-05-15 Cisco Technology, Inc. Mass storage integration over central processing unit interfaces
US20170024344A1 (en) * 2015-07-22 2017-01-26 Microchip Technology Incorporated Method and System for USB 2.0 Bandwidth Reservation
US10042749B2 (en) 2015-11-10 2018-08-07 International Business Machines Corporation Prefetch insensitive transactional memory
US9900260B2 (en) 2015-12-10 2018-02-20 Arm Limited Efficient support for variable width data channels in an interconnect network
US10157133B2 (en) 2015-12-10 2018-12-18 Arm Limited Snoop filter for cache coherency in a data processing system
US20170185516A1 (en) * 2015-12-28 2017-06-29 Arm Limited Snoop optimization for multi-ported nodes of a data processing system
US9990292B2 (en) 2016-06-29 2018-06-05 Arm Limited Progressive fine to coarse grain snoop filter
WO2018034682A1 (en) * 2016-08-13 2018-02-22 Intel Corporation Apparatuses, methods, and systems for neural networks
KR102532645B1 (en) * 2016-09-20 2023-05-15 Samsung Electronics Co., Ltd. Method and apparatus for providing data to streaming application in adaptive streaming service
US10042766B1 (en) 2017-02-02 2018-08-07 Arm Limited Data processing apparatus with snoop request address alignment and snoop response time alignment
US10331582B2 (en) * 2017-02-13 2019-06-25 Intel Corporation Write congestion aware bypass for non-volatile memory, last level cache (LLC) dropping from write queue responsive to write queue being full and read queue threshold wherein the threshold is derived from latency of write to LLC and main memory retrieval time
US10552050B1 (en) 2017-04-07 2020-02-04 Bitmicro Llc Multi-dimensional computer storage system
NO344681B1 (en) * 2017-09-05 2020-03-02 Numascale As Coherent Node Controller
US10417215B2 (en) * 2017-09-29 2019-09-17 Hewlett Packard Enterprise Development Lp Data storage over immutable and mutable data stages
US10976966B2 (en) * 2018-06-29 2021-04-13 Weka.IO Ltd. Implementing coherency and page cache support in a distributed way for files
US10901848B2 (en) 2018-08-03 2021-01-26 Western Digital Technologies, Inc. Storage systems with peer data recovery
US10877906B2 (en) 2018-09-17 2020-12-29 Micron Technology, Inc. Scheduling of read operations and write operations based on a data bus mode
US11100040B2 (en) 2018-10-17 2021-08-24 Cisco Technology, Inc. Modular remote direct memory access interfaces
US10776276B2 (en) 2018-11-30 2020-09-15 Hewlett Packard Enterprise Development Lp Bypass storage class memory read cache based on a queue depth threshold
US11182258B2 (en) * 2019-01-04 2021-11-23 Western Digital Technologies, Inc. Data rebuild using dynamic peer work allocation
US11030107B2 (en) 2019-04-19 2021-06-08 Hewlett Packard Enterprise Development Lp Storage class memory queue depth threshold adjustment
US11137914B2 (en) 2019-05-07 2021-10-05 Western Digital Technologies, Inc. Non-volatile storage system with hybrid command
US11593281B2 (en) * 2019-05-08 2023-02-28 Hewlett Packard Enterprise Development Lp Device supporting ordered and unordered transaction classes
US11301386B2 (en) 2019-08-01 2022-04-12 International Business Machines Corporation Dynamically adjusting prefetch depth
US11163683B2 (en) 2019-08-01 2021-11-02 International Business Machines Corporation Dynamically adjusting prefetch depth
US11137941B2 (en) 2019-12-30 2021-10-05 Advanced Micro Devices, Inc. Command replay for non-volatile dual inline memory modules
US11531601B2 (en) * 2019-12-30 2022-12-20 Advanced Micro Devices, Inc. Error recovery for non-volatile memory modules
US11099786B2 (en) * 2019-12-30 2021-08-24 Advanced Micro Devices, Inc. Signaling for heterogeneous memory systems
US11126369B1 (en) 2020-02-28 2021-09-21 Western Digital Technologies, Inc. Data storage with improved suspend resume performance
US11928472B2 (en) 2020-09-26 2024-03-12 Intel Corporation Branch prefetch mechanisms for mitigating frontend branch resteers
US20220197506A1 (en) * 2020-12-17 2022-06-23 Advanced Micro Devices, Inc. Data placement with packet metadata
US11630723B2 (en) * 2021-01-12 2023-04-18 Qualcomm Incorporated Protected data streaming between memories

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5737565A (en) * 1995-08-24 1998-04-07 International Business Machines Corporation System and method for diallocating stream from a stream buffer
US6412046B1 (en) * 2000-05-01 2002-06-25 Hewlett Packard Company Verification of cache prefetch mechanism
US6487627B1 (en) * 1999-12-22 2002-11-26 Intel Corporation Method and apparatus to manage digital bus traffic
US6571318B1 (en) * 2001-03-02 2003-05-27 Advanced Micro Devices, Inc. Stride based prefetcher with confidence counter and dynamic prefetch-ahead mechanism

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06314264A (en) * 1993-05-06 1994-11-08 Nec Corp Self-routing cross bar switch
US5577204A (en) * 1993-12-15 1996-11-19 Convex Computer Corporation Parallel processing computer system interconnections utilizing unidirectional communication links with separate request and response lines for direct communication or using a crossbar switching device
JPH0981508A (en) * 1995-08-31 1997-03-28 International Business Machines Corporation Method and apparatus for communication
US6125429A (en) * 1998-03-12 2000-09-26 Compaq Computer Corporation Cache memory exchange optimized memory organization for a computer system
DE19815097C2 (en) * 1998-04-03 2002-03-14 Siemens Ag bus master
JP3721283B2 (en) * 1999-06-03 2005-11-30 Hitachi, Ltd. Main memory shared multiprocessor system
US6629220B1 (en) * 1999-08-20 2003-09-30 Intel Corporation Method and apparatus for dynamic arbitration between a first queue and a second queue based on a high priority transaction type
US6807172B1 (en) * 1999-12-21 2004-10-19 Cisco Technology, Inc. Method and apparatus for learning and switching frames in a distributed network switch
US6571332B1 (en) * 2000-04-11 2003-05-27 Advanced Micro Devices, Inc. Method and apparatus for combined transaction reordering and buffer management
US6654860B1 (en) * 2000-07-27 2003-11-25 Advanced Micro Devices, Inc. Method and apparatus for removing speculative memory accesses from a memory access queue for issuance to memory or discarding
US6564304B1 (en) * 2000-09-01 2003-05-13 Ati Technologies Inc. Memory processing system and method for accessing memory including reordering memory requests to reduce mode switching
US6801976B2 (en) * 2001-08-27 2004-10-05 Intel Corporation Mechanism for preserving producer-consumer ordering across an unordered interface
US6839794B1 (en) * 2001-10-12 2005-01-04 Agilent Technologies, Inc. Method and system to map a service level associated with a packet to one of a number of data streams at an interconnect device
US6715046B1 (en) * 2001-11-29 2004-03-30 Cisco Technology, Inc. Method and apparatus for reading from and writing to storage using acknowledged phases of sets of data

Cited By (281)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160134702A1 (en) * 1999-01-22 2016-05-12 Ilya Gertner Data sharing using distributed cache in a network of heterogeneous computers
US20120259953A1 (en) * 1999-01-22 2012-10-11 Network Disk, Inc. Data Storage and Data Sharing in a Network of Heterogeneous Computers
US10154092B2 (en) * 1999-01-22 2018-12-11 Ls Cloud Storage Technologies, Llc Data sharing using distributed cache in a network of heterogeneous computers
US9811463B2 (en) * 1999-01-22 2017-11-07 Ls Cloud Storage Technologies, Llc Apparatus including an I/O interface and a network interface and related method of use
US20170161189A1 (en) * 1999-01-22 2017-06-08 Ls Cloud Storage Technologies, Llc Apparatus including an i/o interface and a network interface and related method of use
US20050030313A1 (en) * 2000-06-23 2005-02-10 William Radke Apparatus and method for distributed memory control in a graphics processing system
US20030101312A1 (en) * 2001-11-26 2003-05-29 Doan Trung T. Machine state storage apparatus and method
US6842827B2 (en) 2002-01-02 2005-01-11 Intel Corporation Cache coherency arrangement to enhance inbound bandwidth
US20070055817A1 (en) * 2002-06-07 2007-03-08 Jeddeloh Joseph M Memory hub with internal cache and/or memory access prediction
US7945737B2 (en) 2002-06-07 2011-05-17 Round Rock Research, Llc Memory hub with internal cache and/or memory access prediction
US8499127B2 (en) 2002-06-07 2013-07-30 Round Rock Research, Llc Memory hub with internal cache and/or memory access prediction
US8195918B2 (en) 2002-06-07 2012-06-05 Round Rock Research, Llc Memory hub with internal cache and/or memory access prediction
US20090125688A1 (en) * 2002-06-07 2009-05-14 Jeddeloh Joseph M Memory hub with internal cache and/or memory access prediction
US20030229770A1 (en) * 2002-06-07 2003-12-11 Jeddeloh Joseph M. Memory hub with internal cache and/or memory access prediction
US20070035980A1 (en) * 2002-08-02 2007-02-15 Taylor George R System and method for optically interconnecting memory devices
US20070025133A1 (en) * 2002-08-02 2007-02-01 Taylor George R System and method for optical interconnecting memory devices
US20040024959A1 (en) * 2002-08-02 2004-02-05 Taylor George R. System and method for optically interconnecting memory devices
US8954687B2 (en) 2002-08-05 2015-02-10 Micron Technology, Inc. Memory hub and access method having a sequencer and internal row caching
US20040024978A1 (en) * 2002-08-05 2004-02-05 Jeddeloh Joseph M. Memory hub and access method having internal row caching
US20050223161A1 (en) * 2002-08-05 2005-10-06 Jeddeloh Joseph M Memory hub and access method having internal row caching
US20040028412A1 (en) * 2002-08-09 2004-02-12 Tim Murphy System and method for multiple bit optical data transmission in memory systems
US20060204247A1 (en) * 2002-08-09 2006-09-14 Tim Murphy System and method for multiple bit optical data transmission in memory systems
US20040034753A1 (en) * 2002-08-16 2004-02-19 Jeddeloh Joseph M. Memory hub bypass circuit and method
US20060174070A1 (en) * 2002-08-16 2006-08-03 Jeddeloh Joseph M Memory hub bypass circuit and method
US7836252B2 (en) 2002-08-29 2010-11-16 Micron Technology, Inc. System and method for optimizing interconnections of memory devices in a multichip module
US20070271435A1 (en) * 2002-08-29 2007-11-22 Jeddeloh Joseph M Method and system for controlling memory accesses to memory modules having a memory hub architecture
US8086815B2 (en) 2002-08-29 2011-12-27 Round Rock Research, Llc System for controlling memory accesses to memory modules having a memory hub architecture
US8190819B2 (en) 2002-08-29 2012-05-29 Micron Technology, Inc. System and method for optimizing interconnections of memory devices in a multichip module
US20050066137A1 (en) * 2002-08-29 2005-03-24 Jeddeloh Joseph M. Method and system for controlling memory accesses to memory modules having a memory hub architecture
US7716444B2 (en) 2002-08-29 2010-05-11 Round Rock Research, Llc Method and system for controlling memory accesses to memory modules having a memory hub architecture
US8234479B2 (en) 2002-08-29 2012-07-31 Round Rock Research, Llc System for controlling memory accesses to memory modules having a memory hub architecture
US20040044833A1 (en) * 2002-08-29 2004-03-04 Ryan Kevin J. System and method for optimizing interconnections of memory devices in a multichip module
US7908452B2 (en) 2002-08-29 2011-03-15 Round Rock Research, Llc Method and system for controlling memory accesses to memory modules having a memory hub architecture
US7805586B2 (en) 2002-08-29 2010-09-28 Micron Technology, Inc. System and method for optimizing interconnections of memory devices in a multichip module
US20060206667A1 (en) * 2002-08-29 2006-09-14 Ryan Kevin J System and method for optimizing interconnections of memory devices in a multichip module
US20040047169A1 (en) * 2002-09-09 2004-03-11 Lee Terry R. Wavelength division multiplexed memory module, memory system and method
US20040257890A1 (en) * 2002-09-09 2004-12-23 Lee Terry R. Wavelength division multiplexed memory module, memory system and method
US7843951B2 (en) * 2002-11-11 2010-11-30 Rambus Inc. Packet storage system for traffic handling
US20050265368A1 (en) * 2002-11-11 2005-12-01 Anthony Spencer Packet storage system for traffic handling
US20110069716A1 (en) * 2002-11-11 2011-03-24 Anthony Spencer Method and apparatus for queuing variable size data packets in a communication system
US8472457B2 (en) * 2002-11-11 2013-06-25 Rambus Inc. Method and apparatus for queuing variable size data packets in a communication system
US20050243829A1 (en) * 2002-11-11 2005-11-03 Clearspeed Technology Plc Traffic management architecture
US20040193771A1 (en) * 2003-03-31 2004-09-30 Ebner Sharon M. Method, apparatus, and system for processing a plurality of outstanding data requests
US20040210722A1 (en) * 2003-04-21 2004-10-21 Sharma Debendra Das Directory-based coherency scheme for reducing memory bandwidth loss
US7051166B2 (en) * 2003-04-21 2006-05-23 Hewlett-Packard Development Company, L.P. Directory-based cache coherency scheme for reducing memory bandwidth loss
US20040243743A1 (en) * 2003-05-30 2004-12-02 Brian Smith History FIFO with bypass
US7117287B2 (en) * 2003-05-30 2006-10-03 Sun Microsystems, Inc. History FIFO with bypass wherein an order through queue is maintained irrespective of retrieval of data
US20040251929A1 (en) * 2003-06-11 2004-12-16 Pax George E. Memory module and method having improved signal routing topology
US7746095B2 (en) 2003-06-11 2010-06-29 Round Rock Research, Llc Memory module and method having improved signal routing topology
US20060023528A1 (en) * 2003-06-11 2006-02-02 Pax George E Memory module and method having improved signal routing topology
US20090243649A1 (en) * 2003-06-11 2009-10-01 Pax George E Memory module and method having improved signal routing topology
US20050030797A1 (en) * 2003-06-11 2005-02-10 Pax George E. Memory module and method having improved signal routing topology
US20080036492A1 (en) * 2003-06-11 2008-02-14 Micron Technology, Inc. Memory module and method having improved signal routing topology
US8200884B2 (en) 2003-06-19 2012-06-12 Round Rock Research, Llc Reconfigurable memory module and method
US20080140952A1 (en) * 2003-06-19 2008-06-12 Micron Technology, Inc. Reconfigurable memory module and method
US20070011392A1 (en) * 2003-06-19 2007-01-11 Lee Terry R Reconfigurable memory module and method
US20040260864A1 (en) * 2003-06-19 2004-12-23 Lee Terry R. Reconfigurable memory module and method
US7966444B2 (en) 2003-06-19 2011-06-21 Round Rock Research, Llc Reconfigurable memory module and method
US7818712B2 (en) 2003-06-19 2010-10-19 Round Rock Research, Llc Reconfigurable memory module and method
US8732383B2 (en) 2003-06-19 2014-05-20 Round Rock Research, Llc Reconfigurable memory module and method
US8127081B2 (en) 2003-06-20 2012-02-28 Round Rock Research, Llc Memory hub and access method having internal prefetch buffers
US20060288172A1 (en) * 2003-06-20 2006-12-21 Lee Terry R Memory hub and access method having internal prefetch buffers
US20060206738A1 (en) * 2003-06-20 2006-09-14 Jeddeloh Joseph M System and method for selective memory module power management
US20040260909A1 (en) * 2003-06-20 2004-12-23 Lee Terry R. Memory hub and access method having internal prefetch buffers
US20040260891A1 (en) * 2003-06-20 2004-12-23 Jeddeloh Joseph M. Posted write buffers and methods of posting write requests in memory modules
US20040260957A1 (en) * 2003-06-20 2004-12-23 Jeddeloh Joseph M. System and method for selective memory module power management
US20060212655A1 (en) * 2003-06-20 2006-09-21 Jeddeloh Joseph M Posted write buffers and method of posting write requests in memory modules
US20090327532A1 (en) * 2003-07-22 2009-12-31 Micron Technology, Inc. Apparatus and method for direct memory access in a hub-based memory system
US7966430B2 (en) 2003-07-22 2011-06-21 Round Rock Research, Llc Apparatus and method for direct memory access in a hub-based memory system
US8209445B2 (en) 2003-07-22 2012-06-26 Round Rock Research, Llc Apparatus and method for direct memory access in a hub-based memory system
US20050021884A1 (en) * 2003-07-22 2005-01-27 Jeddeloh Joseph M. Apparatus and method for direct memory access in a hub-based memory system
US20050160201A1 (en) * 2003-07-22 2005-07-21 Jeddeloh Joseph M. Apparatus and method for direct memory access in a hub-based memory system
US7913122B2 (en) 2003-08-19 2011-03-22 Round Rock Research, Llc System and method for on-board diagnostics of memory modules
US20080016401A1 (en) * 2003-08-19 2008-01-17 Micron Technology, Inc. System and method for on-board diagnostics of memory modules
US20090106591A1 (en) * 2003-08-19 2009-04-23 Micron Technology, Inc. System and method for on-board diagnostics of memory modules
US20060206766A1 (en) * 2003-08-19 2006-09-14 Jeddeloh Joseph M System and method for on-board diagnostics of memory modules
US20050044304A1 (en) * 2003-08-20 2005-02-24 Ralph James Method and system for capturing and bypassing memory transactions in a hub-based memory system
US20060200602A1 (en) * 2003-08-20 2006-09-07 Ralph James Method and system for capturing and bypassing memory transactions in a hub-based memory system
US9082461B2 (en) 2003-08-28 2015-07-14 Round Rock Research, Llc Multiple processor system and method including multiple memory hub modules
US20050146944A1 (en) * 2003-08-28 2005-07-07 Jeddeloh Joseph M. Memory module and method having on-board data search capabilities and processor-based system using such memory modules
US7873775B2 (en) 2003-08-28 2011-01-18 Round Rock Research, Llc Multiple processor system and method including multiple memory hub modules
US20050050255A1 (en) * 2003-08-28 2005-03-03 Jeddeloh Joseph M. Multiple processor system and method including multiple memory hub modules
US20070033317A1 (en) * 2003-08-28 2007-02-08 Jeddeloh Joseph M Multiple processor system and method including multiple memory hub modules
US8244952B2 (en) 2003-08-28 2012-08-14 Round Rock Research, Llc Multiple processor system and method including multiple memory hub modules
US20050146943A1 (en) * 2003-08-28 2005-07-07 Jeddeloh Joseph M. Memory module and method having on-board data search capabilities and processor-based system using such memory modules
US20050060600A1 (en) * 2003-09-12 2005-03-17 Jeddeloh Joseph M. System and method for on-board timing margin testing of memory modules
US7958412B2 (en) 2003-09-12 2011-06-07 Round Rock Research, Llc System and method for on-board timing margin testing of memory modules
US7689879B2 (en) 2003-09-12 2010-03-30 Micron Technology, Inc. System and method for on-board timing margin testing of memory modules
US20060200620A1 (en) * 2003-09-18 2006-09-07 Schnepper Randy L Memory hub with integrated non-volatile memory
US7975122B2 (en) 2003-09-18 2011-07-05 Round Rock Research, Llc Memory hub with integrated non-volatile memory
US8832404B2 (en) 2003-09-18 2014-09-09 Round Rock Research, Llc Memory hub with integrated non-volatile memory
US20090132781A1 (en) * 2003-09-18 2009-05-21 Schnepper Randy L Memory hub with integrated non-volatile memory
US7120743B2 (en) * 2003-10-20 2006-10-10 Micron Technology, Inc. Arbitration system and method for memory responses in a hub-based memory system
US20060271746A1 (en) * 2003-10-20 2006-11-30 Meyer James W Arbitration system and method for memory responses in a hub-based memory system
US8589643B2 (en) 2003-10-20 2013-11-19 Round Rock Research, Llc Arbitration system and method for memory responses in a hub-based memory system
US20050086441A1 (en) * 2003-10-20 2005-04-21 Meyer James W. Arbitration system and method for memory responses in a hub-based memory system
US7032041B2 (en) 2003-11-18 2006-04-18 Hitachi, Ltd. Information processing performing prefetch with load balancing
US20050108450A1 (en) * 2003-11-18 2005-05-19 Hirofumi Sahara Information processing system and method
US20060206679A1 (en) * 2003-12-29 2006-09-14 Jeddeloh Joseph M System and method for read synchronization of memory modules
US8880833B2 (en) 2003-12-29 2014-11-04 Micron Technology, Inc. System and method for read synchronization of memory modules
US8392686B2 (en) 2003-12-29 2013-03-05 Micron Technology, Inc. System and method for read synchronization of memory modules
US20050149774A1 (en) * 2003-12-29 2005-07-07 Jeddeloh Joseph M. System and method for read synchronization of memory modules
US8788765B2 (en) 2004-01-30 2014-07-22 Micron Technology, Inc. Buffer control system and method for a memory system having outstanding read and write request buffers
US20050172084A1 (en) * 2004-01-30 2005-08-04 Jeddeloh Joseph M. Buffer control system and method for a memory system having memory request buffers
US8504782B2 (en) 2004-01-30 2013-08-06 Micron Technology, Inc. Buffer control system and method for a memory system having outstanding read and write request buffers
US8694735B2 (en) 2004-02-05 2014-04-08 Micron Technology, Inc. Apparatus and method for data bypass for a bi-directional data bus in a hub-based memory sub-system
US20050177690A1 (en) * 2004-02-05 2005-08-11 Laberge Paul A. Dynamic command and/or address mirroring system and method for memory modules
US20080294862A1 (en) * 2004-02-05 2008-11-27 Micron Technology, Inc. Arbitration system having a packet memory and method for memory responses in a hub-based memory system
US8291173B2 (en) 2004-02-05 2012-10-16 Micron Technology, Inc. Apparatus and method for data bypass for a bi-directional data bus in a hub-based memory sub-system
US20070143553A1 (en) * 2004-02-05 2007-06-21 Micron Technology, Inc. Dynamic command and/or address mirroring system and method for memory modules
US7788451B2 (en) 2004-02-05 2010-08-31 Micron Technology, Inc. Apparatus and method for data bypass for a bi-directional data bus in a hub-based memory sub-system
US9164937B2 (en) 2004-02-05 2015-10-20 Micron Technology, Inc. Apparatus and method for data bypass for a bi-directional data bus in a hub-based memory sub-system
US20130111095A1 (en) * 2004-02-13 2013-05-02 Sharad Mehrotra Multi-chassis fabric-backplane enterprise servers
US8868790B2 (en) 2004-02-13 2014-10-21 Oracle International Corporation Processor-memory module performance acceleration in fabric-backplane enterprise servers
US8743872B2 (en) 2004-02-13 2014-06-03 Oracle International Corporation Storage traffic communication via a switch fabric in accordance with a VLAN
US8848727B2 (en) 2004-02-13 2014-09-30 Oracle International Corporation Hierarchical transport protocol stack for data transfer between enterprise servers
US8601053B2 (en) * 2004-02-13 2013-12-03 Oracle International Corporation Multi-chassis fabric-backplane enterprise servers
US9274991B2 (en) 2004-03-08 2016-03-01 Micron Technology, Inc. Memory hub architecture having programmable lane widths
US8775764B2 (en) 2004-03-08 2014-07-08 Micron Technology, Inc. Memory hub architecture having programmable lane widths
US20080294856A1 (en) * 2004-03-24 2008-11-27 Micron Technology, Inc. Memory arbitration system and method having an arbitration packet protocol
US20050216677A1 (en) * 2004-03-24 2005-09-29 Jeddeloh Joseph M Memory arbitration system and method having an arbitration packet protocol
US9032166B2 (en) 2004-03-24 2015-05-12 Micron Technology, Inc. Memory arbitration system and method having an arbitration packet protocol
US20070180171A1 (en) * 2004-03-24 2007-08-02 Micron Technology, Inc. Memory arbitration system and method having an arbitration packet protocol
US8082404B2 (en) 2004-03-24 2011-12-20 Micron Technology, Inc. Memory arbitration system and method having an arbitration packet protocol
US8555006B2 (en) 2004-03-24 2013-10-08 Micron Technology, Inc. Memory arbitration system and method having an arbitration packet protocol
US7899969B2 (en) 2004-03-25 2011-03-01 Round Rock Research, Llc System and method for memory hub-based expansion bus
US8117371B2 (en) 2004-03-25 2012-02-14 Round Rock Research, Llc System and method for memory hub-based expansion bus
US20070168595A1 (en) * 2004-03-25 2007-07-19 Micron Technology, Inc. System and method for memory hub-based expansion bus
US20060195647A1 (en) * 2004-03-25 2006-08-31 Jeddeloh Joseph M System and method for memory hub-based expansion bus
US20100036989A1 (en) * 2004-03-25 2010-02-11 Micron Technology, Inc. System and method for memory hub-based expansion bus
US20050216648A1 (en) * 2004-03-25 2005-09-29 Jeddeloh Joseph M System and method for memory hub-based expansion bus
US20060179203A1 (en) * 2004-03-25 2006-08-10 Jeddeloh Joseph M System and method for memory hub-based expansion bus
US20060218318A1 (en) * 2004-03-29 2006-09-28 Ralph James Method and system for synchronizing communications links in a hub-based memory system
US20050213611A1 (en) * 2004-03-29 2005-09-29 Ralph James Method and system for synchronizing communications links in a hub-based memory system
US20090086733A1 (en) * 2004-03-29 2009-04-02 Conexant Systems, Inc. Compact Packet Switching Node Storage Architecture Employing Double Data Rate Synchronous Dynamic RAM
US7760726B2 (en) 2004-03-29 2010-07-20 Ikanos Communications, Inc. Compact packet switching node storage architecture employing double data rate synchronous dynamic RAM
US7486688B2 (en) * 2004-03-29 2009-02-03 Conexant Systems, Inc. Compact packet switching node storage architecture employing Double Data Rate Synchronous Dynamic RAM
US20050213571A1 (en) * 2004-03-29 2005-09-29 Zarlink Semiconductor Inc. Compact packet switching node storage architecture employing double data rate synchronous dynamic RAM
US8164375B2 (en) 2004-04-05 2012-04-24 Round Rock Research, Llc Delay line synchronizer apparatus and method
US20050218956A1 (en) * 2004-04-05 2005-10-06 Laberge Paul A Delay line synchronizer apparatus and method
US20060066375A1 (en) * 2004-04-05 2006-03-30 Laberge Paul A Delay line synchronizer apparatus and method
US20100019822A1 (en) * 2004-04-05 2010-01-28 Laberge Paul A Delay line synchronizer apparatus and method
US20050228939A1 (en) * 2004-04-08 2005-10-13 Janzen Jeffery W System and method for optimizing interconnections of components in a multichip memory module
US20110103122A1 (en) * 2004-04-08 2011-05-05 Janzen Jeffery W System and method for optimizing interconnections of components in a multichip memory module
US7870329B2 (en) 2004-04-08 2011-01-11 Micron Technology, Inc. System and method for optimizing interconnections of components in a multichip memory module
US8438329B2 (en) 2004-04-08 2013-05-07 Micron Technology, Inc. System and method for optimizing interconnections of components in a multichip memory module
US20050240736A1 (en) * 2004-04-23 2005-10-27 Mark Shaw System and method for coherency filtering
US7434008B2 (en) * 2004-04-23 2008-10-07 Hewlett-Packard Development Company, L.P. System and method for coherency filtering
US20080133853A1 (en) * 2004-05-14 2008-06-05 Jeddeloh Joseph M Memory hub and method for memory sequencing
US20050257005A1 (en) * 2004-05-14 2005-11-17 Jeddeloh Joseph M Memory hub and method for memory sequencing
US20070033353A1 (en) * 2004-05-14 2007-02-08 Jeddeloh Joseph M Memory hub and method for memory sequencing
US20060218331A1 (en) * 2004-05-17 2006-09-28 Ralph James System and method for communicating the synchronization status of memory modules during initialization of the memory modules
US20050257021A1 (en) * 2004-05-17 2005-11-17 Ralph James System and method for communicating the synchronization status of memory modules during initialization of the memory modules
US20050268060A1 (en) * 2004-05-28 2005-12-01 Cronin Jeffrey J Method and system for terminating write commands in a hub-based memory system
US7774559B2 (en) 2004-05-28 2010-08-10 Micron Technology, Inc. Method and system for terminating write commands in a hub-based memory system
US7594088B2 (en) * 2004-06-04 2009-09-22 Micron Technology, Inc. System and method for an asynchronous data buffer having buffer write and read pointers
US8239607B2 (en) * 2004-06-04 2012-08-07 Micron Technology, Inc. System and method for an asynchronous data buffer having buffer write and read pointers
US20060200642A1 (en) * 2004-06-04 2006-09-07 Laberge Paul A System and method for an asynchronous data buffer having buffer write and read pointers
US20050286506A1 (en) * 2004-06-04 2005-12-29 Laberge Paul A System and method for an asynchronous data buffer having buffer write and read pointers
US7823024B2 (en) 2004-06-04 2010-10-26 Micron Technology, Inc. Memory hub tester interface and method for use thereof
US20090319745A1 (en) * 2004-06-04 2009-12-24 Laberge Paul A System and method for an asynchronous data buffer having buffer write and read pointers
US20050283681A1 (en) * 2004-06-04 2005-12-22 Jeddeloh Joseph M Memory hub tester interface and method for use thereof
US8713295B2 (en) 2004-07-12 2014-04-29 Oracle International Corporation Fabric-backplane enterprise servers with pluggable I/O sub-system
US20110191517A1 (en) * 2004-08-31 2011-08-04 Ralph James System and method for transmitting data packets in a computer system having a memory hub architecture
US8346998B2 (en) 2004-08-31 2013-01-01 Micron Technology, Inc. System and method for transmitting data packets in a computer system having a memory hub architecture
US7949803B2 (en) 2004-08-31 2011-05-24 Micron Technology, Inc. System and method for transmitting data packets in a computer system having a memory hub architecture
US20060271720A1 (en) * 2004-08-31 2006-11-30 Ralph James System and method for transmitting data packets in a computer system having a memory hub architecture
US20060047891A1 (en) * 2004-08-31 2006-03-02 Ralph James System and method for transmitting data packets in a computer system having a memory hub architecture
US20060146864A1 (en) * 2004-12-30 2006-07-06 Rosenbluth Mark B Flexible use of compute allocation in a multi-threaded compute engines
US20060168407A1 (en) * 2005-01-26 2006-07-27 Micron Technology, Inc. Memory hub system and method having large virtual page size
US7380066B2 (en) 2005-02-10 2008-05-27 International Business Machines Corporation Store stream prefetching in a microprocessor
US7716427B2 (en) 2005-02-10 2010-05-11 International Business Machines Corporation Store stream prefetching in a microprocessor
US20060179238A1 (en) * 2005-02-10 2006-08-10 Griswell John B Jr Store stream prefetching in a microprocessor
US20090070556A1 (en) * 2005-02-10 2009-03-12 Griswell Jr John Barry Store stream prefetching in a microprocessor
US20060179239A1 (en) * 2005-02-10 2006-08-10 Fluhr Eric J Data stream prefetching in a microprocessor
US7350029B2 (en) * 2005-02-10 2008-03-25 International Business Machines Corporation Data stream prefetching in a microprocessor
US7904661B2 (en) 2005-02-10 2011-03-08 International Business Machines Corporation Data stream prefetching in a microprocessor
US7606166B2 (en) 2005-04-01 2009-10-20 International Business Machines Corporation System and method for computing a blind checksum in a host ethernet adapter (HEA)
US7492771B2 (en) 2005-04-01 2009-02-17 International Business Machines Corporation Method for performing a packet header lookup
US20080089358A1 (en) * 2005-04-01 2008-04-17 International Business Machines Corporation Configurable ports for a host ethernet adapter
US8225188B2 (en) 2005-04-01 2012-07-17 International Business Machines Corporation Apparatus for blind checksum and correction for network transmissions
US7586936B2 (en) 2005-04-01 2009-09-08 International Business Machines Corporation Host Ethernet adapter for networking offload in server environment
US7903687B2 (en) 2005-04-01 2011-03-08 International Business Machines Corporation Method for scheduling, writing, and reading data inside the partitioned buffer of a switch, router or packet processing device
US20060221961A1 (en) * 2005-04-01 2006-10-05 International Business Machines Corporation Network communications for operating system partitions
US7706409B2 (en) 2005-04-01 2010-04-27 International Business Machines Corporation System and method for parsing, filtering, and computing the checksum in a host Ethernet adapter (HEA)
US20060221977A1 (en) * 2005-04-01 2006-10-05 International Business Machines Corporation Method and apparatus for providing a network connection table
US20080317027A1 (en) * 2005-04-01 2008-12-25 International Business Machines Corporation System for reducing latency in a host ethernet adapter (hea)
US7881332B2 (en) 2005-04-01 2011-02-01 International Business Machines Corporation Configurable ports for a host ethernet adapter
US7577151B2 (en) 2005-04-01 2009-08-18 International Business Machines Corporation Method and apparatus for providing a network connection table
US7697536B2 (en) * 2005-04-01 2010-04-13 International Business Machines Corporation Network communications for operating system partitions
US7782888B2 (en) 2005-04-01 2010-08-24 International Business Machines Corporation Configurable ports for a host ethernet adapter
US7508771B2 (en) 2005-04-01 2009-03-24 International Business Machines Corporation Method for reducing latency in a host ethernet adapter (HEA)
US20070033369A1 (en) * 2005-08-02 2007-02-08 Fujitsu Limited Reconfigurable integrated circuit device
US20070047584A1 (en) * 2005-08-24 2007-03-01 Spink Aaron T Interleaving data packets in a packet-based communication system
US8885673B2 (en) 2005-08-24 2014-11-11 Intel Corporation Interleaving data packets in a packet-based communication system
US8325768B2 (en) * 2005-08-24 2012-12-04 Intel Corporation Interleaving data packets in a packet-based communication system
US20070110088A1 (en) * 2005-11-12 2007-05-17 Liquid Computing Corporation Methods and systems for scalable interconnect
WO2007144698A3 (en) * 2005-11-12 2008-07-10 Liquid Computing Corp Methods and systems for scalable interconnect
WO2007144698A2 (en) * 2005-11-12 2007-12-21 Liquid Computing Corporation Methods and systems for scalable interconnect
US20070168536A1 (en) * 2006-01-17 2007-07-19 International Business Machines Corporation Network protocol stack isolation
US20070248111A1 (en) * 2006-04-24 2007-10-25 Shaw Mark E System and method for clearing information in a stalled output queue of a crossbar
US20070283100A1 (en) * 2006-05-30 2007-12-06 Kabushiki Kaisha Toshiba Cache memory device and caching method
US7809926B2 (en) * 2006-11-03 2010-10-05 Cornell Research Foundation, Inc. Systems and methods for reconfiguring on-chip multiprocessors
US20080109637A1 (en) * 2006-11-03 2008-05-08 Cornell Research Foundation, Inc. Systems and methods for reconfigurably multiprocessing
US7711888B2 (en) * 2006-12-31 2010-05-04 Texas Instruments Incorporated Systems and methods for improving data transfer between devices
US20080162769A1 (en) * 2006-12-31 2008-07-03 Texas Instrument Incorporated Systems and Methods for Improving Data Transfer between Devices
US20080291824A1 (en) * 2007-05-21 2008-11-27 Kendall Kris M Reassigning Virtual Lane Buffer Allocation During Initialization to Maximize IO Performance
US7809883B1 (en) * 2007-10-16 2010-10-05 Netapp, Inc. Cached reads for a storage system
US8065488B2 (en) 2007-12-31 2011-11-22 Intel Corporation Mechanism for effectively caching streaming and non-streaming data patterns
US20110099333A1 (en) * 2007-12-31 2011-04-28 Eric Sprangle Mechanism for effectively caching streaming and non-streaming data patterns
WO2009145888A1 (en) * 2008-05-29 2009-12-03 Advanced Micro Devices, Inc. Dynamically partitionable cache
US20090300293A1 (en) * 2008-05-30 2009-12-03 Advanced Micro Devices, Inc. Dynamically Partitionable Cache
US8918588B2 (en) * 2009-04-07 2014-12-23 International Business Machines Corporation Maintaining a cache of blocks from a plurality of data streams
US20100257320A1 (en) * 2009-04-07 2010-10-07 International Business Machines Corporation Cache Replacement Policy
US20110035530A1 (en) * 2009-08-10 2011-02-10 Fujitsu Limited Network system, information processing apparatus, and control method for network system
EP2288084A3 (en) * 2009-08-10 2011-08-03 Fujitsu Limited Network system, information processing apparatus, and control method for network system
US8589614B2 (en) 2009-08-10 2013-11-19 Fujitsu Limited Network system with crossbar switch and bypass route directly coupling crossbar interfaces
JP2011039744A (en) * 2009-08-10 2011-02-24 Fujitsu Ltd Network system, information processing apparatus, and method of controlling network system
TWI502357B (en) * 2009-08-11 2015-10-01 Via Tech Inc Method and apparatus for pre-fetching data, and computer system
US8443151B2 (en) 2009-11-09 2013-05-14 Intel Corporation Prefetch optimization in shared resource multi-core systems
US20110113199A1 (en) * 2009-11-09 2011-05-12 Tang Puqi P Prefetch optimization in shared resource multi-core systems
US8856452B2 (en) 2011-05-31 2014-10-07 Illinois Institute Of Technology Timing-aware data prefetching for microprocessors
US8621157B2 (en) 2011-06-13 2013-12-31 Advanced Micro Devices, Inc. Cache prefetching from non-uniform memories
US9942585B2 (en) * 2011-06-24 2018-04-10 Google Technology Holdings LLC Intelligent buffering of media streams delivered over internet
US20120331106A1 (en) * 2011-06-24 2012-12-27 General Instrument Corporation Intelligent buffering of media streams delivered over internet
US20170111672A1 (en) * 2011-06-24 2017-04-20 Google Technology Holdings LLC Intelligent buffering of media streams delivered over internet
US9615126B2 (en) * 2011-06-24 2017-04-04 Google Technology Holdings LLC Intelligent buffering of media streams delivered over internet
US10942737B2 (en) 2011-12-29 2021-03-09 Intel Corporation Method, device and system for control signalling in a data path module of a data stream processing engine
US9053029B2 (en) * 2012-02-06 2015-06-09 Empire Technology Development Llc Multicore computer system with cache use based adaptive scheduling
US20130205092A1 (en) * 2012-02-06 2013-08-08 Empire Technology Development Llc Multicore computer system with cache use based adaptive scheduling
US9384136B2 (en) * 2013-04-12 2016-07-05 International Business Machines Corporation Modification of prefetch depth based on high latency event
US9378144B2 (en) * 2013-04-12 2016-06-28 International Business Machines Corporation Modification of prefetch depth based on high latency event
US20140310478A1 (en) * 2013-04-12 2014-10-16 International Business Machines Corporation Modification of prefetch depth based on high latency event
US20140310477A1 (en) * 2013-04-12 2014-10-16 International Business Machines Corporation Modification of prefetch depth based on high latency event
US10331583B2 (en) * 2013-09-26 2019-06-25 Intel Corporation Executing distributed memory operations using processing elements connected by distributed channels
US10853276B2 (en) 2013-09-26 2020-12-01 Intel Corporation Executing distributed memory operations using processing elements connected by distributed channels
US20170123880A1 (en) * 2014-02-26 2017-05-04 Microsoft Technology Licensing, Llc Service metric analysis from structured logging schema of usage data
US20180018267A1 (en) * 2014-12-23 2018-01-18 Intel Corporation Speculative reads in buffered memory
EP3437274A4 (en) * 2016-04-01 2019-11-06 Intel Corporation Technologies for quality of service based throttling in fabric architectures
WO2017172235A1 (en) 2016-04-01 2017-10-05 Intel Corporation Technologies for quality of service based throttling in fabric architectures
CN108702339A (en) * 2016-04-01 2018-10-23 英特尔公司 Technology in structure framework for being throttled based on service quality
EP3790240A1 (en) * 2016-04-01 2021-03-10 INTEL Corporation Technologies for quality of service based throttling in fabric architectures
US10951516B2 (en) 2016-04-01 2021-03-16 Intel Corporation Technologies for quality of service based throttling in fabric architectures
US11343177B2 (en) 2016-04-01 2022-05-24 Intel Corporation Technologies for quality of service based throttling in fabric architectures
US10402168B2 (en) 2016-10-01 2019-09-03 Intel Corporation Low energy consumption mantissa multiplication for floating point multiply-add operations
US10416999B2 (en) 2016-12-30 2019-09-17 Intel Corporation Processors, methods, and systems with a configurable spatial accelerator
US10572376B2 (en) 2016-12-30 2020-02-25 Intel Corporation Memory ordering in acceleration hardware
US10558575B2 (en) 2016-12-30 2020-02-11 Intel Corporation Processors, methods, and systems with a configurable spatial accelerator
US10474375B2 (en) 2016-12-30 2019-11-12 Intel Corporation Runtime address disambiguation in acceleration hardware
US10445451B2 (en) 2017-07-01 2019-10-15 Intel Corporation Processors, methods, and systems for a configurable spatial accelerator with performance, correctness, and power reduction features
US10469397B2 (en) 2017-07-01 2019-11-05 Intel Corporation Processors and methods with configurable network-based dataflow operator circuits
US10467183B2 (en) 2017-07-01 2019-11-05 Intel Corporation Processors and methods for pipelined runtime services in a spatial array
US10515046B2 (en) 2017-07-01 2019-12-24 Intel Corporation Processors, methods, and systems with a configurable spatial accelerator
US10515049B1 (en) 2017-07-01 2019-12-24 Intel Corporation Memory circuits and methods for distributed memory hazard detection and error recovery
US10445234B2 (en) 2017-07-01 2019-10-15 Intel Corporation Processors, methods, and systems for a configurable spatial accelerator with transactional and replay features
US10387319B2 (en) 2017-07-01 2019-08-20 Intel Corporation Processors, methods, and systems for a configurable spatial accelerator with memory system performance, power reduction, and atomics support features
US10496574B2 (en) 2017-09-28 2019-12-03 Intel Corporation Processors, methods, and systems for a memory fence in a configurable spatial accelerator
US11086816B2 (en) 2017-09-28 2021-08-10 Intel Corporation Processors, methods, and systems for debugging a configurable spatial accelerator
US10445098B2 (en) 2017-09-30 2019-10-15 Intel Corporation Processors and methods for privileged configuration in a spatial array
US10380063B2 (en) 2017-09-30 2019-08-13 Intel Corporation Processors, methods, and systems with a configurable spatial accelerator having a sequencer dataflow operator
US10613764B2 (en) 2017-11-20 2020-04-07 Advanced Micro Devices, Inc. Speculative hint-triggered activation of pages in memory
US11429281B2 (en) 2017-11-20 2022-08-30 Advanced Micro Devices, Inc. Speculative hint-triggered activation of pages in memory
US10417175B2 (en) 2017-12-30 2019-09-17 Intel Corporation Apparatus, methods, and systems for memory consistency in a configurable spatial accelerator
US10565134B2 (en) 2017-12-30 2020-02-18 Intel Corporation Apparatus, methods, and systems for multicast in a configurable spatial accelerator
US10445250B2 (en) 2017-12-30 2019-10-15 Intel Corporation Apparatus, methods, and systems with a configurable spatial accelerator
US10564980B2 (en) 2018-04-03 2020-02-18 Intel Corporation Apparatus, methods, and systems for conditional queues in a configurable spatial accelerator
US11307873B2 (en) 2018-04-03 2022-04-19 Intel Corporation Apparatus, methods, and systems for unstructured data flow in a configurable spatial accelerator with predicate propagation and merging
US10853073B2 (en) 2018-06-30 2020-12-01 Intel Corporation Apparatuses, methods, and systems for conditional operations in a configurable spatial accelerator
US10891240B2 (en) 2018-06-30 2021-01-12 Intel Corporation Apparatus, methods, and systems for low latency communication in a configurable spatial accelerator
US11593295B2 (en) 2018-06-30 2023-02-28 Intel Corporation Apparatuses, methods, and systems for operations in a configurable spatial accelerator
US10459866B1 (en) 2018-06-30 2019-10-29 Intel Corporation Apparatuses, methods, and systems for integrated control and data processing in a configurable spatial accelerator
US11200186B2 (en) 2018-06-30 2021-12-14 Intel Corporation Apparatuses, methods, and systems for operations in a configurable spatial accelerator
US10678724B1 (en) 2018-12-29 2020-06-09 Intel Corporation Apparatuses, methods, and systems for in-network storage in a configurable spatial accelerator
US10817291B2 (en) 2019-03-30 2020-10-27 Intel Corporation Apparatuses, methods, and systems for swizzle operations in a configurable spatial accelerator
US11029927B2 (en) 2019-03-30 2021-06-08 Intel Corporation Methods and apparatus to detect and annotate backedges in a dataflow graph
US10965536B2 (en) 2019-03-30 2021-03-30 Intel Corporation Methods and apparatus to insert buffers in a dataflow graph
US10915471B2 (en) 2019-03-30 2021-02-09 Intel Corporation Apparatuses, methods, and systems for memory interface circuit allocation in a configurable spatial accelerator
US11693633B2 (en) 2019-03-30 2023-07-04 Intel Corporation Methods and apparatus to detect and annotate backedges in a dataflow graph
US10831661B2 (en) 2019-04-10 2020-11-10 International Business Machines Corporation Coherent cache with simultaneous data requests in same addressable index
US11037050B2 (en) 2019-06-29 2021-06-15 Intel Corporation Apparatuses, methods, and systems for memory interface circuit arbitration in a configurable spatial accelerator
US11907713B2 (en) 2019-12-28 2024-02-20 Intel Corporation Apparatuses, methods, and systems for fused operations using sign modification in a processing element of a configurable spatial accelerator

Also Published As

Publication number Publication date
US20030177320A1 (en) 2003-09-18
US6912612B2 (en) 2005-06-28
US20030163649A1 (en) 2003-08-28
US7047374B2 (en) 2006-05-16

Similar Documents

Publication Title
US7047374B2 (en) Memory read/write reordering
EP2430551B1 (en) Cache coherent support for flash in a memory hierarchy
US9465767B2 (en) Multi-processor, multi-domain, multi-protocol cache coherent speculation aware shared memory controller and interconnect
US5881303A (en) Multiprocessing system configured to perform prefetch coherency activity with separate reissue queue for each processing subnode
EP0817073B1 (en) A multiprocessing system configured to perform efficient write operations
US5848254A (en) Multiprocessing system using an access to a second memory space to initiate software controlled data prefetch into a first address space
US5892970A (en) Multiprocessing system configured to perform efficient block copy operations
US5265235A (en) Consistency protocols for shared memory multiprocessors
CA2051222C (en) Consistent packet switched memory bus for shared memory multiprocessors
US5440698A (en) Arbitration of packet switched busses, including busses for shared memory multiprocessors
US5983326A (en) Multiprocessing system including an enhanced blocking mechanism for read-to-share-transactions in a NUMA mode
US20190138452A1 (en) Autonomous prefetch engine
US20020087614A1 (en) Programmable tuning for flow control and support for CPU hot plug
US8015364B2 (en) Method and apparatus for filtering snoop requests using a scoreboard
CA2051209C (en) Consistency protocols for shared memory multiprocessors
US20200371970A1 (en) Multiple-requestor memory access pipeline and arbiter
US7003628B1 (en) Buffered transfer of data blocks between memory and processors independent of the order of allocation of locations in the buffer
US20010037426A1 (en) Interrupt handling via a proxy processor
Briggs et al. Intel 870: A building block for cost-effective, scalable servers

Legal Events

Code Title Description
AS Assignment

Owner name: INTEL CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:RADHAKRISHNAN, SIVAKUMAR;NATARAJAN, CHITRA;CRETA, KENNETH;AND OTHERS;REEL/FRAME:014076/0865;SIGNING DATES FROM 20030421 TO 20030513

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION