WO2016082169A1 - Memory access method, switch and multiprocessor system - Google Patents

Memory access method, switch and multiprocessor system

Info

Publication number
WO2016082169A1
WO2016082169A1 PCT/CN2014/092421 CN2014092421W
Authority
WO
WIPO (PCT)
Prior art keywords
flow entry
data packet
switch
flow
operation command
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2014/092421
Other languages
English (en)
French (fr)
Inventor
陶怡栋
何睿
董晓文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201480037772.0A priority Critical patent/CN105874758B/zh
Priority to PCT/CN2014/092421 priority patent/WO2016082169A1/zh
Priority to JP2017528517A priority patent/JP6514329B2/ja
Priority to EP14907135.9A priority patent/EP3217616B1/en
Publication of WO2016082169A1 publication Critical patent/WO2016082169A1/zh
Priority to US15/607,200 priority patent/US10282293B2/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0806 Multiuser, multiprocessor or multiprocessing cache systems
    • G06F 12/0813 Multiuser, multiprocessor or multiprocessing cache systems with a network or matrix configuration
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0806 Multiuser, multiprocessor or multiprocessing cache systems
    • G06F 12/0815 Cache consistency protocols
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/14 Handling requests for interconnection or transfer
    • G06F 13/16 Handling requests for interconnection or transfer for access to memory bus
    • G06F 13/1605 Handling requests for interconnection or transfer for access to memory bus based on arbitration
    • G06F 13/1652 Handling requests for interconnection or transfer for access to memory bus based on arbitration in a multiprocessor architecture
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/54 Store-and-forward switching systems
    • H04L 12/56 Packet switching systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/28 Routing or path finding of packets in data switching networks using route fault recovery
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/38 Flow based routing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/56 Provisioning of proxy services
    • H04L 67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2211/00 Indexing scheme relating to details of data-processing equipment not covered by groups G06F3/00 - G06F13/00
    • G06F 2211/005 Network, LAN, Remote Access, Distributed System
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/10 Providing a specific technical effect
    • G06F 2212/1016 Performance improvement
    • G06F 2212/1024 Latency reduction
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/15 Use in a specific computing environment
    • G06F 2212/154 Networked environment

Definitions

  • The present invention relates to the field of computer technologies, and in particular, to a memory access method, a switch, and a multiprocessor system.
  • In a multiprocessor system, an interconnection network typically consists of a number of switches that connect both the processors responsible for computation and the memory responsible for storage.
  • When a processor accesses remote memory, the request is forwarded to the memory through the interconnection network.
  • As the size of the interconnection network increases, the latency of a processor's accesses to remote memory also increases, resulting in a decrease in system performance.
  • One method for reducing access latency when a processor accesses remote memory (i.e., memory connected to a switch port) is to provide a switch in the interconnection network with a cache (Cache) function, so that the switch can cache a part of the memory data.
  • If the data that the processor needs to access exists in the switch, the data can be returned directly by the cache in the switch without accessing the remote memory, thereby reducing the access delay.
  • In this approach, each switch has a Cache, and the data cached in each Cache may include shared data, that is, data used by multiple processors.
  • If the shared data in the Cache of one switch is modified while copies of that shared data exist on other switches, and the copies in the other switches cannot be modified in time, other processors that access the data will read wrong data. Therefore, in order to avoid processor errors, the consistency of the data in the Caches must be guaranteed, and maintaining Cache consistency is usually very complicated.
  • In order to solve the foregoing problem, the embodiments of the present invention provide a memory access method, a switch, and a multiprocessor system.
  • The technical solutions are as follows:
  • According to a first aspect, an embodiment of the present invention provides a memory access method, where the method includes:
  • receiving a data packet, where the data packet includes source node information, destination node information, and a protocol type, and the protocol type is used to indicate the type of the data packet;
  • performing flow table matching on the data packet, where the flow table includes at least one flow entry, each flow entry includes a matching domain and an action domain, the at least one flow entry includes a first flow entry, the matching domain of the first flow entry is used to match the source node information, the destination node information, and the protocol type in the data packet, and the action domain of the first flow entry is used to indicate an operation command for the storage device built in the switch; and
  • when the data packet successfully matches the first flow entry, operating the storage device according to the operation command in the action domain of the successfully matched first flow entry.
  • Optionally, the operating the storage device according to the operation command in the action domain of the successfully matched first flow entry includes:
  • if the operation command in the action domain of the successfully matched first flow entry is a read operation command, reading data from the storage device and returning the read data to the node corresponding to the source node information;
  • if the operation command in the action domain of the successfully matched first flow entry is a write operation command, writing the data in the data packet into the storage device.
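The read/write behaviour described above can be sketched roughly as follows. This is a minimal illustrative sketch, not the patent's implementation: all class, field, and action names (`FlowEntry`, `Switch`, `MEM_RD`, `MEM_WR`, the packet dictionary keys) are assumptions, and a simple exact-match lookup stands in for the switch's flow table matching.

```python
class FlowEntry:
    """A flow entry with a matching domain and an action domain (names assumed)."""
    def __init__(self, src, dst, proto, action):
        self.match = (src, dst, proto)   # matching domain: source, destination, protocol type
        self.action = action             # action domain: "MEM_RD" or "MEM_WR"

class Switch:
    def __init__(self, flow_table):
        self.flow_table = flow_table
        self.storage = {}                # stands in for the storage device built into the switch

    def handle(self, packet):
        """Match the packet against the flow table and operate the built-in storage."""
        key = (packet["src"], packet["dst"], packet["proto"])
        for entry in self.flow_table:
            if entry.match == key:
                if entry.action == "MEM_RD":
                    # read and return the data to the node named by the source node information
                    return {"dst": packet["src"], "data": self.storage.get(packet["addr"])}
                if entry.action == "MEM_WR":
                    # write the packet payload into the built-in storage
                    self.storage[packet["addr"]] = packet["data"]
                    return None
        raise LookupError("no matching flow entry")
```

A write request stores its payload locally in the switch; a subsequent read request returns that payload to the requesting node without ever reaching a remote memory, which is the effect the aspect describes.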
  • Optionally, the at least one flow entry further includes a second flow entry, where the matching domain of the second flow entry is used to match the source node information, the destination node information, and the protocol type in the data packet, and the action domain of the second flow entry is used to indicate an operation command for performing calculation processing on the data in the data packet.
  • Optionally, the method further includes:
  • when the data packet successfully matches the second flow entry, performing calculation processing on the data in the data packet according to the operation command in the action domain of the successfully matched second flow entry, and obtaining a calculation result.
  • Optionally, the method may further include:
  • receiving a flow table configuration message sent by the controller, and configuring the flow entry according to the flow table configuration message.
  • According to a second aspect, an embodiment of the present invention provides a switch, where the switch includes:
  • a first receiving module configured to receive a data packet, where the data packet includes source node information, destination node information, and a protocol type, where the protocol type is used to indicate a type of the data packet;
  • a matching module configured to perform flow table matching on the data packet received by the first receiving module, where the flow table includes at least one flow entry, the flow entry includes a matching domain and an action domain, and the at least one The flow entry includes a first flow entry, and the matching domain of the first flow entry is used to match the source node information, the destination node information, and the protocol type in the data packet, and the action domain of the first flow entry is used to indicate An operation command for a storage device built in the switch;
  • an operation module, configured to operate the storage device according to the operation command in the action domain of the successfully matched first flow entry when the data packet successfully matches the first flow entry.
  • the operation module includes:
  • a reading unit configured to read data from the storage device when the operation command is a read operation command
  • a sending unit configured to return the data read by the reading unit to a node corresponding to the source node information
  • a writing unit configured to write data in the data packet into the storage device when the operation command is a write operation command.
  • Optionally, the at least one flow entry further includes a second flow entry, where the matching domain of the second flow entry is used to match the source node information, the destination node information, and the protocol type in the data packet, and the action domain of the second flow entry is used to indicate an operation command for performing calculation processing on the data in the data packet.
  • Optionally, the switch further includes:
  • a processing module, configured to: when the data packet successfully matches the second flow entry, perform calculation processing on the data in the data packet according to the operation command in the action domain of the successfully matched second flow entry, and obtain a calculation result; and
  • a sending module configured to send the calculation result obtained by the processing module to a node corresponding to the source node information in the data packet.
  • the switch further includes:
  • a second receiving module configured to receive a flow table configuration message sent by the controller, where the flow table configuration message is used to configure the flow entry for the switch;
  • a configuration module, configured to configure the flow entry according to the flow table configuration message received by the second receiving module.
  • According to a third aspect, an embodiment of the present invention provides a switch, where the switch includes a processor, a memory, a bus, and a communication interface; the memory is configured to store computer-executable instructions; the processor and the memory are connected by the bus; and when the switch runs, the processor executes the computer-executable instructions stored in the memory, so that the switch performs the method provided by the foregoing first aspect.
  • According to a fourth aspect, an embodiment of the present invention provides a switch, where the switch includes:
  • An input port configured to receive a data packet, where the data packet includes source node information, destination node information, and a protocol type, where the protocol type is used to indicate a type of the data packet;
  • a memory, configured to store a flow table, where the flow table includes at least one flow entry, each flow entry includes a matching domain and an action domain, the at least one flow entry includes a first flow entry, the matching domain of the first flow entry is used to match the source node information, the destination node information, and the protocol type in the data packet, and the action domain of the first flow entry is used to indicate an operation command for the storage device built in the switch;
  • a storage device, configured to store data;
  • a table lookup logic circuit, configured to perform flow table matching on the data packet received by the input port by using the flow table stored in the memory;
  • an operation logic circuit, configured to operate the storage device according to the operation command in the action domain of the successfully matched first flow entry when the data packet successfully matches the first flow entry;
  • a crossbar switch bus, configured to select an output port for a data packet transmitted by the operation logic circuit; and
  • an output port, configured to transmit the data packet transmitted by the crossbar switch bus.
  • According to a fifth aspect, an embodiment of the present invention provides a multiprocessor system, where the multiprocessor system includes a plurality of processors and an interconnection network, the plurality of processors are communicatively connected through the interconnection network, and
  • the interconnection network includes a plurality of switches, at least one of which is the switch provided by the foregoing second aspect, third aspect, or fourth aspect.
  • Optionally, the multiprocessor system further includes a plurality of external storage devices, where the plurality of external storage devices are communicatively connected to the plurality of processors through the interconnection network.
  • the multiprocessor system is a system on a chip.
  • The technical solutions provided by the embodiments of the present invention have the following beneficial effects: a storage device is set in the switch, and flow table matching is performed on the received data packet;
  • when the data packet successfully matches the first flow entry, the storage device built in the switch is operated directly according to the operation command in the action domain of the first flow entry, thereby reducing or even avoiding the processor's accesses to remote memory and reducing the delay of memory access.
  • In addition, since the storage device in each switch stores data separately, no copy of the data exists in the storage devices of other switches, so there is no Cache consistency to maintain, and the implementation is simple.
  • FIG. 1 is a schematic structural diagram of a multiprocessor system
  • FIG. 2 is a flowchart of a memory access method according to Embodiment 1 of the present invention.
  • FIG. 3 is a flowchart of a memory access method according to Embodiment 2 of the present invention.
  • FIG. 4 is a schematic diagram of an access example of a memory access method according to Embodiment 2 of the present invention.
  • FIG. 5 is a structural block diagram of a switch according to Embodiment 3 of the present invention.
  • FIG. 6 is a structural block diagram of a switch according to Embodiment 4 of the present invention.
  • FIG. 7 is a structural block diagram of a switch according to Embodiment 5 of the present invention.
  • FIG. 9 is a hardware structural diagram of a switch according to Embodiment 6 of the present invention.
  • FIG. 10 is a structural block diagram of a multiprocessor system according to Embodiment 7 of the present invention.
  • Embodiments of the present invention provide a memory access method, a switch, and a multiprocessor system.
  • The network architecture of the multiprocessor system will be described below with reference to FIG. 1.
  • As shown in FIG. 1, a plurality of processors 2 are interconnected by an interconnection network 1, which includes a plurality of switches responsible for forwarding communication data between the processors 2.
  • The multiprocessor system may further include a plurality of independent storage devices 3, and the storage devices 3 are connected to the processors 2 through the interconnection network 1; therefore, the switches in the interconnection network 1 are also responsible for forwarding the access requests of the processors 2 to the storage devices 3 and the response messages returned by the storage devices 3 to the processors 2.
  • It should be noted that the network architecture of the above multiprocessor system is only an example and does not limit the embodiments of the present invention; for example, the storage devices 3 may not be included in the multiprocessor system.
  • Embodiment 1 of the present invention provides a memory access method, which is applicable to the foregoing multiprocessor system.
  • The method can be performed by a switch; the switch can be an OpenFlow switch (OFS) or another switch with matching capabilities.
  • the method includes:
  • Step 101 The switch receives a data packet, where the data packet includes source node information, destination node information, and protocol type.
  • The source node information may be the source node identifier, the MAC address of the source node, or the like; the destination node information may be the destination node identifier, the MAC address of the destination node, or the like; the protocol type is used to indicate the type of the data packet, such as a read request packet, a write request packet, or a calculation request packet.
  • A storage device is built into the switch; the storage device may be a static random access memory (SRAM), a dynamic random access memory (DRAM), or the like.
  • Specifically, the OFS has a flow table, and the flow table includes at least one flow entry; each flow entry includes a matching domain and an action domain.
  • The flow entries in the flow table usually include a forwarding flow entry, which is used to determine the forwarding exit of a data packet.
  • The flow entries in the flow table may further include a packet modification flow entry, which is used to modify information in the data packet, for example, a header field of the data packet.
  • Step 102 Perform flow table matching on the data packet.
  • The flow table in the switch may include multiple flow entries. In the embodiment of the present invention, the flow entries further include a first flow entry; the matching domain of the first flow entry is used to match the source node information, the destination node information, and the protocol type in the data packet, and the action domain of the first flow entry is used to indicate an operation command for the storage device built in the switch.
  • The operation commands include, but are not limited to, a read operation command and a write operation command.
  • The operation command in the action domain of the first flow entry usually corresponds to the protocol type in the matching domain; for example, if the protocol type in the matching domain of the first flow entry indicates a read request packet, the operation command in the action domain of the first flow entry is a read operation command for the storage device built in the switch.
  • When implemented, the various flow entries may be configured in one flow table, or different flow entries may be configured in different flow tables according to their functions.
  • The various flow tables are usually placed in a ternary content addressable memory (TCAM) or a reduced latency dynamic random access memory (RLDRAM).
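TCAM-style lookup matches each field as either a concrete value or a "don't care". The sketch below is a hypothetical software model of that ternary matching (the wildcard token `"*"` and the function names are assumptions for illustration); the source-node placeholder "xx" that appears in the example flow entries later in the text behaves like such a don't-care field.

```python
WILDCARD = "*"  # assumed token for a don't-care field in the matching domain

def ternary_match(match_fields, packet_fields):
    """True if every matching-domain field equals the packet field or is a wildcard."""
    return all(m == WILDCARD or m == p for m, p in zip(match_fields, packet_fields))

def lookup(flow_table, packet_fields):
    """Return the action domain of the first flow entry that matches, else None."""
    for match_fields, action in flow_table:
        if ternary_match(match_fields, packet_fields):
            return action
    return None
```

In hardware the TCAM evaluates all entries in parallel; the sequential loop here only models the matching semantics, not the timing.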
  • Step 103: When the data packet successfully matches the first flow entry, operate the storage device according to the operation command in the action domain of the successfully matched first flow entry.
  • Specifically, step 103 may include:
  • if the operation command in the action domain of the successfully matched first flow entry is a read operation command, reading data from the storage device and returning the read data to the node corresponding to the source node information;
  • if the operation command in the action domain of the successfully matched first flow entry is a write operation command, writing the data in the data packet into the storage device.
  • In summary, in the embodiment of the present invention, a storage device is built into the switch, and flow table matching is performed on the received data packet; when the first flow entry is successfully matched, the storage device built in the switch is operated directly according to the operation command, thereby reducing or even avoiding the processor's accesses to remote memory and reducing the delay of memory access.
  • In addition, since the storage device in each switch stores data separately, no copy of the data exists in the storage device of another switch, so Cache consistency does not need to be maintained, and the implementation is simple.
  • Furthermore, when the first flow entry is successfully matched, since each switch directly accesses its own internal storage device, one access request requires the data packets to enter and leave the interconnection network only once (i.e., receiving the access request and returning the response message), which saves network resources.
  • Embodiment 2 of the present invention provides a memory access method, which can also be applied to the foregoing multiprocessor system.
  • A flow table is configured in the switch, and the flow table includes at least one flow entry, where each flow entry includes a matching domain and an action domain; the matching domain is used to match information in the data packet received by the switch, and the action domain is used to indicate what to do with the data packet when the flow entry is successfully matched.
  • The flow entries in the flow table usually include a forwarding flow entry, which is used to determine the forwarding exit of a data packet.
  • the flow entry in the flow table may further include a packet modification flow entry, where the modification flow entry is used to determine to modify information in the data packet, for example, to modify a header field of the data packet.
  • The switch also has a built-in storage device (also called a memory), such as an SRAM or a DRAM. When implemented, the switch can be an OFS or another switch with matching capabilities.
  • the method includes:
  • Step 201 The switch receives a data packet sent by the processor, where the data packet includes source node information, destination node information, and a protocol type.
  • The source node information may be the source node identifier, the MAC address of the source node, or the like; the destination node information may be the destination node identifier, the MAC address of the destination node, or the like; the protocol type is used to indicate the type of the data packet, such as a read request packet, a write request packet, or a calculation request packet.
  • the packet may also include an operation address, such as a read/write address. It can be understood that when the data packet is a write data packet, the data packet further includes data to be written.
  • Step 202 The switch performs flow table matching on the data packet.
  • If the data packet successfully matches the first flow entry, step 203 is performed.
  • If the data packet successfully matches the second flow entry, step 204 and step 205 are performed.
  • The flow table in the switch may include multiple flow entries. It should be emphasized that, in the embodiment of the present invention, the flow entries further include a first flow entry; the matching domain of the first flow entry is used to match the source node information, the destination node information, and the protocol type in the data packet, and the action domain of the first flow entry is used to indicate an operation command for the storage device built in the switch.
  • The operation commands include, but are not limited to, a read operation command and a write operation command.
  • The operation command in the action domain of the first flow entry usually corresponds to the protocol type in the matching domain; for example, if the protocol type in the matching domain of the first flow entry indicates a read request packet, the operation command in the action domain of the first flow entry is a read operation command for the storage device built in the switch.
  • Optionally, the flow entries may further include a second flow entry, where the matching domain of the second flow entry is used to match the source node information, the destination node information, and the protocol type in the data packet, and the action domain of the second flow entry is used to indicate an operation command for performing calculation processing on the data in the data packet; the calculation processing includes, but is not limited to, a cyclic redundancy check (CRC) or a fast Fourier transform (FFT).
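A second-flow-entry action performs a computation on the packet data instead of a memory operation. The sketch below is an assumed illustration, not the patent's mechanism: the action name `"CRC"` and the function name are made up, and the standard library's `zlib.crc32` stands in for whatever CRC variant the switch hardware would implement (the text names CRC and FFT only as examples of calculation processing).

```python
import zlib

def apply_compute_action(action, data: bytes):
    """Perform the calculation named by the action domain on the packet data."""
    if action == "CRC":
        # CRC-32 over the packet payload, via the standard library
        return zlib.crc32(data)
    raise ValueError("unsupported computation: " + action)
```

The switch would then send the returned value back to the node named by the packet's source node information, as step 205 of the embodiment describes.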
  • When implemented, the various flow entries can be configured in one flow table, or different flow entries can be configured in different flow tables; for example, the first flow entry is configured in one flow table and the forwarding flow entry is configured in another flow table, which is not limited in this embodiment of the present invention.
  • The various flow tables can be placed in the TCAM or RLDRAM of the switch.
  • In step 202, the received data packet is matched against each flow entry.
  • It should be noted that the flow tables in a switch may also be set according to the ports of the switch: one port of the switch may correspond to one set of flow tables, where each set of flow tables may consist of a single flow table (the flow table includes all kinds of flow entries) or of multiple flow tables (the multiple flow tables respectively include different kinds of flow entries), and a data packet received from a port is matched only against the set of flow tables corresponding to that port.
  • Alternatively, only one set of flow tables may be configured in the switch, and the data packets received from all ports are matched against that set of flow tables.
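The per-port arrangement just described can be modelled in one line: each input port maps to its own set of flow tables, with a shared set as the fallback for the single-set configuration. All names here are illustrative assumptions.

```python
def select_flow_tables(port_tables, default_tables, in_port):
    """Return the flow tables a packet received on `in_port` is matched against.

    port_tables: dict mapping port number -> that port's set of flow tables;
    default_tables: the single shared set used when no per-port set exists.
    """
    return port_tables.get(in_port, default_tables)
```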
  • Step 203: The switch operates the storage device according to the operation command in the action domain of the successfully matched first flow entry.
  • Specifically, step 203 may include:
  • if the operation command in the action domain of the successfully matched first flow entry is a read operation command, reading data from the storage device and returning the read data to the node corresponding to the source node information;
  • if the operation command in the action domain of the successfully matched first flow entry is a write operation command, writing the data in the data packet into the storage device.
  • For example, suppose processor 04 and processor 42 jointly run an application, and it is determined through interaction that the data required by processor 42 to run the application needs to be obtained from processor 04.
  • The application requests a shared storage space from the controller (the shared storage space is the storage space of the storage device built in a switch), so that processor 04 can write the data required by processor 42 into the shared storage space for processor 42 to access.
  • The controller learns of the storage devices in all the switches and manages the storage devices in all the switches.
  • Suppose the controller allocates the storage space on switch 11 to the application and notifies processor 04 and processor 42 of the address of the allocated storage space (for example, by sending storage space allocation information, where the storage space allocation information may include the switch identifier and the storage space address), and sends flow table configuration information for configuring the first flow entries to switch 11. At least two first flow entries are configured: the matching domain of one includes the source node identifier (04), the destination node identifier (11), and the protocol type (WR),
  • and the corresponding action domain is a write operation command (MEM WR); the matching domain of the other includes the source node identifier (xx), the destination node identifier (11), and the protocol type (RD), and the corresponding action domain is a read operation command (MEM RD).
  • The controller also sends flow table configuration information for configuring forwarding flow entries to the other switches (e.g., switches 00, 01, etc.), where the forwarding flow entries indicate that the data packets sent by processor 04 and processor 42 to switch 11 are to be forwarded toward switch 11.
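The controller's configuration step described above can be sketched as follows. This is a hedged illustration under assumed message formats: the dictionary shapes, the `"FWD"` action string, and the function signature are inventions for the example; only the two first flow entries (MEM WR for writer 04, MEM RD with placeholder source xx) and the forwarding entries on the other switches come from the text.

```python
def configure(switch_ids, app_switch="11", writer="04"):
    """Build the per-switch flow-table configuration messages the controller sends."""
    config = {sw: [] for sw in switch_ids}
    # Switch 11: write entry for processor 04, plus a read entry whose source
    # field (xx in the text) is treated here as a don't-care placeholder.
    config[app_switch].append({"match": (writer, app_switch, "WR"), "action": "MEM WR"})
    config[app_switch].append({"match": ("*", app_switch, "RD"), "action": "MEM RD"})
    # Every other switch gets a forwarding entry toward switch 11.
    for sw in switch_ids:
        if sw != app_switch:
            config[sw].append({"match": ("*", app_switch, "*"), "action": "FWD " + app_switch})
    return config
```

In an OpenFlow deployment these messages would correspond to flow-mod messages pushed over the control channel; the sketch only shows what is configured where, not the wire protocol.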
  • After processor 04 obtains the storage space allocated by the controller, as shown in FIG. 4, processor 04 sends write request packet 2 to switch 00 in the interconnection network; the write request packet includes the source node identifier (04), the destination node identifier (11), the protocol type (WR, indicating a write request), the write address (00), and the data to be written (xx), where the data to be written (xx) is the data needed by processor 42 to run the application.
  • Switch 00 forwards the write request packet to switch 01, and switch 01 forwards the write request packet to switch 11 (i.e., the destination node). It is easy to see that after receiving the write request packet, switch 00 and switch 01 each perform flow table matching and forward the write request packet according to the action domain in the successfully matched forwarding flow entry.
  • The flow table 1 in switch 11 includes two first flow entries. The matching domain of the first of these entries includes the source node identifier (04), the destination node identifier (11), and the protocol type (WR),
  • and the corresponding action domain is the write operation command (MEM WR); the matching domain of the second of these entries includes the source node identifier (xx), the destination node identifier (11), and the protocol type (RD), and the corresponding action domain is the read operation command (MEM RD).
  • After switch 11 receives the write request packet and performs flow table matching, the write request packet successfully matches the first of these flow entries; therefore, the write operation command is executed on the storage device in switch 11, and the data to be written (xx) is written to address 00 in the storage device in switch 11.
Processor 42 then sends read request packet 3 to switch 02 in the interconnection network. The read request packet includes the source node identifier (42), the destination node identifier (11), the protocol type (RD, indicating a read request), and the read address (00). Switch 02 forwards the read request packet to switch 12, and switch 12 forwards it to switch 11 (i.e., the destination node). It is easy to see that after receiving the read request packet, switch 02 and switch 12 each perform flow table matching and forward the packet according to the action field of the successfully matched forwarding flow entry.
The read request packet matches the second of the two first flow entries, so the read operation command is executed on the storage device in switch 11, and data xx is read from address 00 of that storage device. Switch 11 then constructs a response packet that includes the source node identifier (11), the destination node identifier (42), the protocol type (RD RLY, indicating a read response), the read address (00), and the read data (xx); the response packet is returned to processor 42 via switch 12 and switch 02.
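The write-then-read exchange above can be sketched in a few lines of Python. All class and field names here are illustrative assumptions, not identifiers from the embodiment; the "xx" source identifier of the read entry is treated as a wildcard:

```python
# Minimal sketch of switch 11's behaviour in the example above: two "first
# flow entries" (one for writes, one for reads) acting on built-in storage.
WILDCARD = None  # a wildcarded match field matches any value

class FlowEntry:
    def __init__(self, src, dst, proto, action):
        self.match = (src, dst, proto)
        self.action = action  # "MEM_WR" or "MEM_RD"

    def matches(self, pkt):
        fields = (pkt["src"], pkt["dst"], pkt["proto"])
        return all(m is WILDCARD or m == v for m, v in zip(self.match, fields))

class Switch:
    def __init__(self, node_id, flow_table):
        self.node_id = node_id
        self.flow_table = flow_table
        self.storage = {}  # built-in storage device: address -> data

    def process(self, pkt):
        for entry in self.flow_table:              # flow table matching
            if entry.matches(pkt):
                if entry.action == "MEM_WR":       # write packet data to storage
                    self.storage[pkt["addr"]] = pkt["data"]
                    return None
                if entry.action == "MEM_RD":       # read storage, build response
                    return {"src": self.node_id, "dst": pkt["src"],
                            "proto": "RD_RLY", "addr": pkt["addr"],
                            "data": self.storage[pkt["addr"]]}
        return None  # no match: would fall through to ordinary forwarding

# Flow table 1 of switch 11 as described above
switch11 = Switch("11", [
    FlowEntry("04", "11", "WR", "MEM_WR"),      # first flow entry for writes
    FlowEntry(WILDCARD, "11", "RD", "MEM_RD"),  # first flow entry for reads
])

# Processor 04 writes "xx" to address 00; processor 42 then reads it back
switch11.process({"src": "04", "dst": "11", "proto": "WR", "addr": "00", "data": "xx"})
reply = switch11.process({"src": "42", "dst": "11", "proto": "RD", "addr": "00"})
print(reply)  # response packet addressed back to processor 42
```

Note that the read and write both complete inside switch 11; only the request and the response traverse the interconnection network.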
By contrast, when data is held in an external storage device, a read request packet sent by a processor first enters the interconnection network and is forwarded by it to the external storage device; the external storage device then sends a read response packet back into the interconnection network, which forwards it to the processor, so one access requires the packets to enter and leave the interconnection network twice. The data access method of the embodiment of the present invention therefore saves network resources.
It should be noted that the controller first allocates the shared storage space for the application and then assigns the application to the processors for processing. The application assigned to a processor thus knows in advance that the shared storage space is available and can access it directly.
Step 204: The switch performs calculation processing on the data in the data packet according to the operation command in the action field of the successfully matched second flow entry, and obtains a calculation result.

In an implementation, a computing module may be provided in the switch, or a dedicated computing device may be provided, in which case the switch sends the data to the dedicated computing device and receives the calculation result it returns.

Step 205: The switch sends the calculation result to the processor. Here, the processor is the node corresponding to the source node information in the data packet, i.e., the calculation result is sent to the processor of step 201.
It is easy to see that the flow entries in the switch can be configured by the OFC, so the method may further include: receiving a flow table configuration message sent by the controller, the flow table configuration message being used to configure flow entries for the switch; and configuring the flow entries according to the flow table configuration message.
In the embodiment of the present invention, a storage device is built into the switch, and received data packets are matched against the flow table. When a data packet matches the first flow entry, the operation command directly operates on the storage device built into the switch, thereby reducing or even avoiding access by the processor to remote memory and reducing memory access latency. Moreover, since the storage device in each switch stores data independently, no copy exists in the storage device of any other switch, so there is no Cache coherency to maintain and the implementation is simple. In addition, because each switch directly accesses its own internal storage device, the data packets of one access traverse the interconnection network only once. When the flow entries further include a second flow entry, the data in the data packet can also be subjected to calculation processing, enhancing the hardware computing capability of the network.
The embodiment of the present invention provides a switch in which a flow table is provided. The flow table includes at least one flow entry, and each flow entry includes a matching field and an action field: the matching field is used to match information in a data packet received by the switch, and the action field indicates what operation is performed on the data packet when the flow entry is matched. The flow entries usually include forwarding flow entries, which determine the forwarding port of a data packet, and may further include packet modification flow entries, which determine how information in the data packet is modified, for example modifying a header field. The switch also has a built-in storage device, such as SRAM or DRAM. In a specific implementation, the switch may be an OFS.
The switch includes a first receiving module 301, a matching module 302, and an operation module 303.

The first receiving module 301 is configured to receive a data packet, where the data packet includes source node information, destination node information, and a protocol type, the protocol type indicating the type of the data packet. The source node information may be a source node identifier, the MAC address of the source node, or the like; the destination node information may be a destination node identifier, the MAC address of the destination node, or the like; and the protocol type indicates the type of the data packet, such as a read request packet, a write request packet, or a calculation request packet.
The matching module 302 is configured to perform flow table matching on the data packet received by the first receiving module 301. The flow table includes at least one flow entry, each flow entry includes a matching field and an action field, and the at least one flow entry includes a first flow entry. The matching field of the first flow entry is used to match the source node information, destination node information, and protocol type in the data packet, and the action field of the first flow entry indicates an operation command for the storage device built into the switch. The operation commands include, but are not limited to, a read operation command and a write operation command. The operation command in the action field of the first flow entry usually corresponds to the protocol type in its matching field; for example, if the protocol type in the matching field of the first flow entry indicates a read request packet, the operation command in its action field is a read operation command for the storage device built into the switch.
In an implementation, the various flow entries may be configured in one flow table, or different flow entries may be configured in different flow tables according to their functions. The flow tables are usually held in a Ternary Content Addressable Memory (TCAM) or a Reduced Latency Dynamic Random Access Memory (RLDRAM) of the switch.
The operation module 303 is configured to, when the data packet matches the first flow entry, operate on the storage device according to the operation command in the action field of the successfully matched first flow entry.
In the embodiment of the present invention, a storage device is built into the switch, and received data packets are matched against the flow table. When a data packet matches the first flow entry, the operation command directly operates on the storage device built into the switch, thereby reducing or even avoiding access by the processor to remote memory and reducing memory access latency. Moreover, since the storage device in each switch stores data independently, no copy exists in the storage device of any other switch, so there is no Cache coherency to maintain and the implementation is simple. In addition, when the first flow entry is matched, each switch directly accesses its own internal storage device, so one access request requires only one round trip through the interconnection network (receiving the access request and returning the response message), which saves network resources.
The embodiment of the present invention further provides a switch in which a flow table is provided. The flow table includes at least one flow entry, and each flow entry includes a matching field and an action field: the matching field is used to match information in a data packet received by the switch, and the action field indicates what operation is performed on the data packet when the flow entry is matched. The flow entries usually include forwarding flow entries, which determine the forwarding port of a data packet, and may further include packet modification flow entries, which determine how information in the data packet is modified, for example modifying a header field. The switch also has a built-in storage device, such as SRAM or DRAM. In a specific implementation, the switch may be an OFS or another switch with matching capability. The switch includes a first receiving module 401, a matching module 402, and an operation module 403.
The first receiving module 401 is configured to receive a data packet, where the data packet includes source node information, destination node information, and a protocol type, the protocol type indicating the type of the data packet. The matching module 402 is configured to perform flow table matching on the data packet received by the first receiving module 401: the flow table includes at least one flow entry, each flow entry includes a matching field and an action field, the at least one flow entry includes a first flow entry, the matching field of the first flow entry is used to match the source node information, destination node information, and protocol type in the data packet, and the action field of the first flow entry indicates an operation command for the storage device built into the switch. The operation module 403 is configured to, when the data packet matches the first flow entry, operate on the storage device according to the operation command in the action field of the successfully matched first flow entry.
The source node information may be a source node identifier, the MAC address of the source node, or the like; the destination node information may be a destination node identifier, the MAC address of the destination node, or the like; and the protocol type indicates the type of the data packet, such as a read request packet, a write request packet, or a calculation request packet. The data packet may also include an operation address, such as a read/write address. It can be understood that when the data packet is a write request packet, it further includes the data to be written.
Specifically, the operation module 403 may include: a reading unit, configured to read data from the storage device when the operation command in the first flow entry is a read operation command; a sending unit, configured to return the data read by the reading unit to the node corresponding to the source node information; and a writing unit, configured to write the data in the data packet into the storage device when the operation command in the first flow entry is a write operation command.
The operation commands in the first flow entry include, but are not limited to, a read operation command and a write operation command. It is easy to see that the operation command in the action field of the first flow entry usually corresponds to the protocol type in its matching field; for example, if the protocol type in the matching field of the first flow entry indicates a read request packet, the operation command in its action field is a read operation command for the storage device built into the switch.
Further, the flow entries in the switch may also include a second flow entry. The matching field of the second flow entry is used to match the source node information, destination node information, and protocol type in the data packet, and the action field of the second flow entry indicates an operation command for performing calculation processing on the data in the data packet. The calculation processing includes, but is not limited to, CRC or FFT.
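A second flow entry whose action performs calculation processing can be sketched as follows, using CRC-32 as the example calculation (`zlib.crc32` stands in for whatever the switch hardware would implement; the packet layout and names are assumptions for illustration):

```python
import zlib

# Sketch of a second flow entry: its action field triggers calculation
# processing (here CRC-32) on the packet data instead of a storage operation.
def crc32_action(pkt):
    # calculation result is returned to the node named by the source field
    return {"dst": pkt["src"], "proto": "CALC_RLY",
            "result": zlib.crc32(pkt["data"])}

second_flow_entry = {
    "match": {"dst": "11", "proto": "CALC"},  # source field wildcarded
    "action": crc32_action,
}

def handle(pkt, entry):
    # flow-table matching: every field present in the match must agree
    if all(pkt.get(k) == v for k, v in entry["match"].items()):
        return entry["action"](pkt)
    return None  # no match

reply = handle({"src": "42", "dst": "11", "proto": "CALC", "data": b"payload"},
               second_flow_entry)
print(reply)
```

The same dispatch shape would accommodate an FFT or any other calculation by swapping the action function.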
In an implementation, the various flow entries may be configured in one flow table, or different flow entries may be configured in different flow tables according to their functions; for example, the first flow entry may be configured in one flow table and the forwarding flow entries in another, which is not limited in this embodiment of the present invention. The flow tables can be placed in the TCAM or RLDRAM of the switch.
In an implementation, the matching module 402 may match a data packet received by the switch against every flow entry in the switch. Alternatively, the flow tables in the switch may be organized per port: one port of the switch may correspond to one set of flow tables, where each set may consist of a single flow table containing all kinds of flow entries, or of multiple flow tables each containing a different kind of flow entry, and a data packet received from a port is matched only against that port's set of flow tables. As a further alternative, only one set of flow tables may be configured in the switch, and data packets received from all ports are matched against that set.
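The per-port organization described above can be sketched as follows (the table contents and names are illustrative assumptions): a packet arriving on a port is matched only against that port's flow-table set, so a packet that matches on one port may find no entry on another.

```python
# Per-port flow-table sets: each port number maps to its own list of entries.
port_flow_tables = {
    0: [{"match": ("04", "11", "WR"), "action": "MEM_WR"}],
    1: [{"match": ("42", "11", "RD"), "action": "MEM_RD"}],
}

def lookup(port, pkt):
    # match only against the flow-table set of the receiving port;
    # None in a match field acts as a wildcard
    for entry in port_flow_tables.get(port, []):
        if all(m is None or m == v for m, v in zip(entry["match"], pkt)):
            return entry["action"]
    return None

print(lookup(0, ("04", "11", "WR")))  # found in port 0's table set
print(lookup(1, ("04", "11", "WR")))  # not in port 1's table set
```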
Further, the switch may also include: a processing module 404, configured to, when the data packet matches the second flow entry, perform calculation processing on the data in the data packet according to the operation command in the action field of the successfully matched second flow entry and obtain a calculation result; and a sending module 405, configured to send the calculation result obtained by the processing module 404 to the node corresponding to the source node information in the data packet, such as a processor. In an implementation, a computing module may be provided in the switch, or a dedicated computing device may be provided, in which case the switch sends the data to the dedicated computing device and receives the calculation result it returns.
Further, the switch may also include: a second receiving module 406, configured to receive a flow table configuration message sent by the controller, the flow table configuration message being used to configure flow entries for the switch; and a configuration module 407, configured to configure the flow entries according to the flow table configuration message received by the second receiving module 406.
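The second receiving module / configuration module pair can be sketched as follows (the message layout is an assumption for illustration; a real OFS would receive such configuration over the OpenFlow protocol from the controller):

```python
# Sketch of flow-entry installation: the controller pushes a flow table
# configuration message and the switch turns it into an installed entry.
flow_table = []

def on_config_message(msg):
    # second receiving module: accept the controller's configuration message;
    # configuration module: install the described flow entry (None = wildcard)
    entry = {"match": (msg["src"], msg["dst"], msg["proto"]),
             "action": msg["action"]}
    flow_table.append(entry)
    return entry

on_config_message({"src": "04", "dst": "11", "proto": "WR", "action": "MEM_WR"})
on_config_message({"src": None, "dst": "11", "proto": "RD", "action": "MEM_RD"})
print(len(flow_table))  # -> 2
```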
In the embodiment of the present invention, a storage device is built into the switch, and received data packets are matched against the flow table. When a data packet matches the first flow entry in the flow table, the operation command in the action field of the first flow entry directly operates on the storage device built into the switch, thereby reducing or even avoiding access by the processor to remote memory and reducing memory access latency. Moreover, since the storage device in each switch stores data independently, no copy exists in the storage device of any other switch, so there is no Cache coherency to maintain and the implementation is simple. Further, when the flow entries in the switch include a second flow entry, the data in a data packet that matches the second flow entry can be subjected to calculation processing, enhancing the hardware computing capability of the network.
The switch includes a processor 501, a memory 502, a bus 503, and a communication interface 504. The memory 502 is configured to store computer-executable instructions. The processor 501 is connected to the memory 502 via the bus 503; when the switch runs, the processor 501 executes the computer-executable instructions stored in the memory 502, causing the switch to perform the method performed by the switch in the first embodiment or the second embodiment. The switch also includes a built-in storage device, which may be SRAM or DRAM; the storage device may be the memory 502 itself or a storage device independent of the memory 502.
In a specific implementation, the hardware structure of the switch may be as shown in FIG. 8, including an input port 601, a processor 602, a built-in storage device 603, a memory 604, a crossbar (CrossBar) bus 605, and an output port 606.
The input port 601 is configured to receive a data packet, where the data packet includes source node information, destination node information, and a protocol type. The memory 604 is used to store a flow table; the flow table includes at least one flow entry, and each flow entry includes a matching field and an action field. The flow entries usually include forwarding flow entries, which determine the forwarding port of a data packet, and may further include packet modification flow entries, which determine how information in the data packet is modified, for example modifying a header field.
The flow entries further include a first flow entry, where the matching field of the first flow entry is used to match the source node information, destination node information, and protocol type in the data packet, and the action field of the first flow entry indicates an operation command for the built-in storage device. The operation commands include, but are not limited to, a read operation command and a write operation command, and the operation command in the action field of the first flow entry usually corresponds to the protocol type in its matching field; for example, if the protocol type in the matching field of the first flow entry indicates a read request packet, the operation command in its action field is a read operation command for the storage device built into the switch.
In an implementation, the various flow entries may be configured in one flow table, or different flow entries may be configured in different flow tables according to their functions.
The processor 602 is configured to perform flow table matching on the data packet from the input port using the flow table in the memory 604 and, when the data packet matches the first flow entry in the flow table, to operate on the built-in storage device 603 (for example, a read operation or a write operation) according to the operation command in the action field of the successfully matched first flow entry. When a response packet is needed, it is output to the output port 606 through the CrossBar bus 605 and thereby fed back to the requesting device. The processor 602 is further configured to, when the data packet matches the second flow entry, perform calculation processing on the data in the data packet according to the operation command in the action field of the successfully matched second flow entry, obtain a calculation result, and send the calculation result to the requesting device via the CrossBar bus 605 and the output port 606.
In the embodiment of the present invention, a storage device is built into the switch, and received data packets are matched against the flow table. When a data packet matches the first flow entry in the flow table, the operation command in the action field of the first flow entry directly operates on the storage device built into the switch, thereby reducing or even avoiding access by the processor to remote memory and reducing memory access latency. Moreover, since the storage device in each switch stores data independently, no copy exists in the storage device of any other switch, so there is no Cache coherency to maintain and the implementation is simple. Further, when the flow entries in the switch include a second flow entry, the data in a data packet that matches the second flow entry can be subjected to calculation processing, enhancing the hardware computing capability of the network.
The embodiment of the present invention further provides a switch. The switch includes an input port 701, a memory 702, a lookup table logic circuit 703, a storage device 704, an operation logic circuit 705, a CrossBar bus 706, and an output port 707.
The input port 701 is configured to receive a data packet, where the data packet includes source node information, destination node information, and a protocol type. The memory 702 is used to store a flow table; the flow table includes at least one flow entry, and each flow entry includes a matching field and an action field. The flow entries usually include forwarding flow entries, which determine the forwarding port of a data packet, and may further include packet modification flow entries, which determine how information in the data packet is modified, for example modifying a header field.
The flow entries further include a first flow entry, where the matching field of the first flow entry is used to match the source node information, destination node information, and protocol type in the data packet, and the action field of the first flow entry indicates an operation command for the built-in storage device. The operation commands include, but are not limited to, a read operation command and a write operation command, and the operation command in the action field of the first flow entry usually corresponds to the protocol type in its matching field; for example, if the protocol type in the matching field of the first flow entry indicates a read request packet, the operation command in its action field is a read operation command for the storage device built into the switch.
In an implementation, the various flow entries may be configured in one flow table, or different flow entries may be configured in different flow tables according to their functions.
The lookup table logic circuit 703 is configured to perform flow table matching on the data packet from the input port using the flow table in the memory 702. The operation logic circuit 705 is configured to operate on the built-in storage device 704 (for example, a read operation or a write operation) according to the operation command in the action field of the successfully matched first flow entry, based on the output of the lookup table logic circuit 703. When a response packet is needed, it is output to the output port 707 through the CrossBar bus 706 and thereby fed back to the requesting device. The operation logic circuit 705 is further configured to, when the data packet matches the second flow entry, perform calculation processing on the data in the data packet according to the operation command in the action field of the successfully matched second flow entry, obtain a calculation result, and send the calculation result to the requesting device through the CrossBar bus 706 and the output port 707.
In an implementation, the memory 702 can be TCAM or RLDRAM, and the storage device 704 can be SRAM or DRAM. The specific circuits of the lookup table logic circuit 703 and the operation logic circuit 705 are conventional in the art, so a detailed structural description is omitted here.
In the embodiment of the present invention, a storage device is built into the switch, and received data packets are matched against the flow table. When a data packet matches the first flow entry in the flow table, the operation command in the action field of the first flow entry directly operates on the storage device built into the switch, thereby reducing or even avoiding access by the processor to remote memory and reducing memory access latency. Moreover, since the storage device in each switch stores data independently, no copy exists in the storage device of any other switch, so there is no Cache coherency to maintain and the implementation is simple. Further, when the flow entries in the switch include a second flow entry, the data in a data packet that matches the second flow entry can be subjected to calculation processing, enhancing the hardware computing capability of the network.
The embodiment of the present invention provides a multiprocessor system. The system includes a plurality of processors 801 and an interconnection network 800; the plurality of processors 801 are communicatively connected through the interconnection network 800. The interconnection network 800 includes a plurality of switches 802, where the switch 802 is the switch provided in the fourth, fifth, or sixth embodiment. Further, the multiprocessor system may also include a plurality of external storage devices 803, which are communicatively connected to the plurality of processors 801 through the interconnection network 800. In an implementation, the multiprocessor system may be a system on chip (SoC). It is easy to see that the switch 802 can also be a standalone communication device.
In the embodiment of the present invention, a storage device is built into the switch, and received data packets are matched against the flow table. When a data packet matches the first flow entry in the flow table, the operation command in the action field of the first flow entry directly operates on the storage device built into the switch, thereby reducing or even avoiding access by the processor to remote memory and reducing memory access latency. Moreover, since the storage device in each switch stores data independently, no copy exists in the storage device of any other switch, so there is no Cache coherency to maintain and the implementation is simple. Further, when the flow entries in the switch include a second flow entry, the data in a data packet that matches the second flow entry can be subjected to calculation processing, enhancing the hardware computing capability of the network.
It should be noted that when the switch provided by the foregoing embodiments performs memory access according to a received data packet, the division into the functional modules described above is merely an example; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the switch embodiments and the memory access method embodiments provided above belong to the same concept; for the specific implementation process, refer to the method embodiments, and details are not described here again.

A person skilled in the art can understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware, where the program may be stored in a computer-readable storage medium. The storage medium mentioned may be a read-only memory, a magnetic disk, an optical disc, or the like.


Abstract

Embodiments of the present invention provide a memory access method, a switch, and a multiprocessor system. The memory access method includes: a switch receives a data packet; the switch performs flow table matching on the data packet, where the flow table includes at least one flow entry, the flow entry includes a matching field and an action field, the at least one flow entry includes a first flow entry, the matching field of the first flow entry is used to match the source node information, destination node information, and protocol type in the data packet, and the action field of the first flow entry is used to indicate an operation command for a storage device built into the switch; and when the data packet matches the first flow entry, the switch operates on the storage device according to the operation command in the action field of the successfully matched first flow entry. By building a storage device into the switch for the processors to access, the present invention can reduce or even avoid access by processors in a multiprocessor system to remote memory, thereby reducing memory access latency.

Description

Memory Access Method, Switch, and Multiprocessor System

Technical Field

The present invention relates to the field of computer technologies, and in particular, to a memory access method, a switch, and a multiprocessor system.
Background Art

As the demand for computing speed and scale keeps increasing, multiprocessor systems have emerged. In a multiprocessor system, multiple processors communicate through an interconnection network. The interconnection network is usually composed of multiple switches; it connects the processors responsible for computation and may also connect the memory responsible for storage. When a processor needs to access memory, the request is forwarded to the memory through the interconnection network. However, as the numbers of processors and memories grow, the scale of the interconnection network also grows, and the access latency when a processor accesses remote memory increases accordingly, degrading system performance.

The prior art proposes a method for reducing the access latency when a processor accesses remote memory (i.e., memory connected to a switch port). In this method, every switch in the interconnection network has a cache (Cache) function and can therefore cache part of the memory's data. When the data a processor needs to access is present in a switch, the data can be returned directly from that switch's cache without accessing the remote memory, thereby reducing access latency.
In the process of implementing the present invention, the inventors found that the prior art has at least the following problem:

Every switch has a Cache, and the data cached in each Cache may include shared data, i.e., data used by multiple processors. When shared data in one switch's Cache is modified while copies of it exist in the Caches of other switches, and those copies are not updated in time, errors will occur if the data is then accessed by other processors. Therefore, to avoid processor errors, the consistency of the data in the Caches must be guaranteed, and maintaining Cache coherency is usually very complicated.
Summary of the Invention

To solve the prior-art problem that Cache coherency is difficult to maintain when caches in switches are used to reduce access latency, embodiments of the present invention provide a memory access method, a switch, and a multiprocessor system. The technical solutions are as follows:
According to a first aspect, an embodiment of the present invention provides a memory access method, the method including:

receiving, by a switch, a data packet, where the data packet includes source node information, destination node information, and a protocol type, the protocol type being used to indicate the type of the data packet;

performing flow table matching on the data packet, where the flow table includes at least one flow entry, the flow entry includes a matching field and an action field, the at least one flow entry includes a first flow entry, the matching field of the first flow entry is used to match the source node information, destination node information, and protocol type in the data packet, and the action field of the first flow entry is used to indicate an operation command for a storage device built into the switch; and

when the data packet matches the first flow entry, operating on the storage device according to the operation command in the action field of the successfully matched first flow entry.
In a first possible implementation of the first aspect, the operating on the storage device according to the operation command in the action field of the successfully matched first flow entry includes:

when the operation command in the action field of the successfully matched first flow entry is a read operation command, reading data from the storage device and returning the read data to the node corresponding to the source node information; and

when the operation command in the action field of the successfully matched first flow entry is a write operation command, writing the data in the data packet into the storage device.
In a second possible implementation of the first aspect, the at least one flow entry further includes a second flow entry, where the matching field of the second flow entry is used to match the source node information, destination node information, and protocol type in the data packet, and the action field of the second flow entry is used to indicate an operation command for performing calculation processing on the data in the data packet.
Further, in the second possible implementation, the method also includes:

when the data packet matches the second flow entry, performing calculation processing on the data in the data packet according to the operation command in the action field of the successfully matched second flow entry to obtain a calculation result; and

sending the calculation result to the node corresponding to the source node information in the data packet.
In a third possible implementation of the first aspect, the method may also include:

receiving a flow table configuration message sent by a controller, where the flow table configuration message is used to configure the flow entries for the switch; and

configuring the flow entries according to the flow table configuration message.
According to a second aspect, an embodiment of the present invention provides a switch, the switch including:

a first receiving module, configured to receive a data packet, where the data packet includes source node information, destination node information, and a protocol type, the protocol type being used to indicate the type of the data packet;

a matching module, configured to perform flow table matching on the data packet received by the first receiving module, where the flow table includes at least one flow entry, the flow entry includes a matching field and an action field, the at least one flow entry includes a first flow entry, the matching field of the first flow entry is used to match the source node information, destination node information, and protocol type in the data packet, and the action field of the first flow entry is used to indicate an operation command for a storage device built into the switch; and

an operation module, configured to, when the data packet matches the first flow entry, operate on the storage device according to the operation command in the action field of the successfully matched first flow entry.
In a first possible implementation of the second aspect, the operation module includes:

a reading unit, configured to read data from the storage device when the operation command is a read operation command;

a sending unit, configured to return the data read by the reading unit to the node corresponding to the source node information; and

a writing unit, configured to write the data in the data packet into the storage device when the operation command is a write operation command.
In a second possible implementation of the second aspect, the at least one flow entry further includes a second flow entry, where the matching field of the second flow entry is used to match the source node information, destination node information, and protocol type in the data packet, and the action field of the second flow entry is used to indicate an operation command for performing calculation processing on the data in the data packet.
Further, in the second possible implementation, the switch also includes:

a processing module, configured to, when the data packet matches the second flow entry, perform calculation processing on the data in the data packet according to the operation command in the action field of the successfully matched second flow entry to obtain a calculation result; and

a sending module, configured to send the calculation result obtained by the processing module to the node corresponding to the source node information in the data packet.
In a third possible implementation of the second aspect, the switch also includes:

a second receiving module, configured to receive a flow table configuration message sent by a controller, where the flow table configuration message is used to configure the flow entries for the switch; and

a configuration module, configured to configure the flow entries according to the flow table configuration message received by the second receiving module.
According to a third aspect, an embodiment of the present invention provides a switch, the switch including a processor, a memory, a bus, and a communication interface. The memory is configured to store computer-executable instructions, and the processor is connected to the memory through the bus; when the computer runs, the processor executes the computer-executable instructions stored in the memory, causing the switch to perform the method provided in the first aspect.
According to a fourth aspect, an embodiment of the present invention provides a switch, the switch including:

an input port, configured to receive a data packet, where the data packet includes source node information, destination node information, and a protocol type, the protocol type being used to indicate the type of the data packet;

a memory, configured to store a flow table, where the flow table includes at least one flow entry, the flow entry includes a matching field and an action field, the at least one flow entry includes a first flow entry, the matching field of the first flow entry is used to match the source node information, destination node information, and protocol type in the data packet, and the action field of the first flow entry is used to indicate an operation command for a storage device built into the switch;

a storage device, configured to store data;

a lookup table logic circuit, configured to perform flow table matching, using the flow table stored in the memory, on the data packet received by the input port;

an operation logic circuit, configured to, when the data packet matches the first flow entry, operate on the storage device according to the operation command in the action field of the successfully matched first flow entry;

a crossbar bus, configured to select an output port for the data packet transmitted by the operation logic circuit; and

an output port, configured to send the data packet transmitted by the crossbar bus.
According to a fifth aspect, an embodiment of the present invention provides a multiprocessor system, the multiprocessor system including a plurality of processors and an interconnection network, where the plurality of processors are communicatively connected through the interconnection network, the interconnection network includes a plurality of switches, and the switches include the switch provided in the second, third, or fourth aspect.

In a first possible implementation of the fifth aspect, the multiprocessor system further includes a plurality of external storage devices, and the plurality of external storage devices are communicatively connected to the plurality of processors through the interconnection network.

In a second possible implementation of the fifth aspect, the multiprocessor system is a system on chip.
The beneficial effects of the technical solutions provided by the embodiments of the present invention are as follows: a storage device is built into the switch, and received data packets are matched against the flow table; when a data packet matches the first flow entry in the flow table, the operation command in the action field of the first flow entry directly operates on the storage device built into the switch, thereby reducing or even avoiding access by the processor to remote memory and reducing memory access latency. Moreover, since the storage device in each switch stores data independently, no copy exists in the storage device of any other switch, so there is no Cache coherency to maintain and the implementation is simple.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.

FIG. 1 is a schematic structural diagram of a multiprocessor system;

FIG. 2 is a flowchart of a memory access method provided by Embodiment 1 of the present invention;

FIG. 3 is a flowchart of a memory access method provided by Embodiment 2 of the present invention;

FIG. 4 is a schematic diagram of an access example of the memory access method provided by Embodiment 2 of the present invention;

FIG. 5 is a structural block diagram of a switch provided by Embodiment 3 of the present invention;

FIG. 6 is a structural block diagram of a switch provided by Embodiment 4 of the present invention;

FIG. 7 is a structural block diagram of a switch provided by Embodiment 5 of the present invention;

FIG. 8 is a diagram of a specific implementation structure of the switch provided by Embodiment 5;

FIG. 9 is a hardware structure diagram of a switch provided by Embodiment 6 of the present invention;

FIG. 10 is a structural block diagram of a multiprocessor system provided by Embodiment 7 of the present invention.
Detailed Description

To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.

The embodiments of the present invention provide a memory access method, a switch, and a multiprocessor system. The network architecture of the multiprocessor system is first introduced below with reference to FIG. 1.
参见图1,在多处理器系统中,多个处理器2通过互连网络1相互连接,互连网络包括多个交换机,这些交换机负责转发处理器2之间的通信数据。同时,该多处理器系统还可以包括多个独立的存储设备3,存储设备3通过该互 连网络1与处理器2连接,因此,该互连网络1中的交换机还负责转发处理器2对存储设备3的访问请求以及存储设备3向处理器2返回的响应消息等。
容易知道，以上多处理器系统的网络架构仅为举例，并不作为对本发明实施例的限制，例如，该多处理器系统中也可以不包括存储设备3。
实施例一
本发明实施例提供了一种内存访问方法,适用于前述多处理器系统,该方法可以由交换机执行,在具体实现中,交换机可以为开放流交换机(OpenFlow Switch,简称OFS),也可以为其他具有匹配功能的交换机。如图2所示,该方法包括:
步骤101:交换机接收数据包,该数据包包括源节点信息、目的节点信息和协议类型。
其中,源节点信息可以为源节点标识、源节点的MAC地址等;目的节点信息可以为目的节点标识、目的节点的MAC地址等;协议类型用于指示数据包的类型,例如读请求数据包、写请求数据包、计算请求数据包等。
在本发明实施例中,交换机中内置有存储设备,该存储设备可以为静态随机存取存储器(Static Random Access Memory,简称SRAM)、动态随机存取存储器(Dynamic Random Access Memory,简称DRAM)等。
容易知道,OFS中设有流表,流表包括至少一个流表项,每个流表项都包括匹配域和动作域。流表中的流表项通常包括转发流表项,该转发流表项用于确定数据包的转发出口。流表中的流表项还可以包括数据包修改流表项,该数据包修改流表项用于确定对数据包内的信息进行修改,例如修改数据包的头域等。
步骤102:对该数据包进行流表匹配。
如前所述,交换机中的流表可以包括多种表项,而在本发明实施例中,流表项还包括第一流表项,第一流表项的匹配域用于匹配数据包中的源节点信息、目的节点信息和协议类型,第一流表项的动作域用于指示对交换机内置的存储设备的操作命令。该操作命令包括但不限于读操作命令、写操作命令。
容易知道,第一流表项的动作域中的操作命令与其匹配域中的协议类型通常是相互对应的,例如,如果第一流表项的匹配域中的协议类型用于指示读请求数据包,则该第一流表项的动作域中的操作命令为对交换机内置的存储设备 的读操作命令。
在实现时,各种流表项(例如前述第一流表项、转发流表项、数据包修改流表项)可以配置在一个流表中,也可以按照流表项的功能,将不同的流表项配置在不同的流表中。
各种流表通常设置在交换机的三态内容寻址存储器(Ternary Content Addressable Memory,简称TCAM)或低延迟动态随机存取存储器(Reduced Latency Dynamic Random Access Memory,简称RLDRAM)中。
容易知道,交换机中的流表项可以由开放流控制器(OpenFlow Controller,简称OFC)配置。
步骤103:当数据包与第一流表项匹配成功时,按照匹配成功的第一流表项的动作域中的操作命令,对存储设备进行操作。
具体地,该步骤103可以包括:
当匹配成功的第一流表项的动作域中的操作命令为读操作命令时,从存储设备中读取数据,并将读取的数据返回给源节点信息对应的节点;
当匹配成功的第一流表项的动作域中的操作命令为写操作命令时,将数据包中的数据写入存储设备中。
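为便于理解，上述流表匹配以及按第一流表项动作域中的操作命令对内置存储设备进行读/写操作的过程，可用如下Python代码示意。该代码仅为简化的示意性草图，并非本发明交换机的实际实现；其中FlowEntry、Switch、MEM_RD/MEM_WR等名称以及数据包的字段均为本示例的假设：

```python
from dataclasses import dataclass

@dataclass
class FlowEntry:
    # 第一流表项：匹配域为(源节点, 目的节点, 协议类型)，动作域为操作命令
    src: str
    dst: str
    proto: str    # 协议类型，如 "RD"（读请求）、"WR"（写请求）
    action: str   # 操作命令，如 "MEM_RD"、"MEM_WR"

class Switch:
    def __init__(self, node_id, flow_table):
        self.node_id = node_id
        self.flow_table = flow_table
        self.memory = {}  # 内置存储设备的简化模型：地址 -> 数据

    def handle(self, pkt):
        # 按(源节点, 目的节点, 协议类型)对数据包逐条匹配流表项
        for e in self.flow_table:
            if (e.src, e.dst, e.proto) == (pkt["src"], pkt["dst"], pkt["proto"]):
                if e.action == "MEM_WR":
                    # 写操作命令：将数据包中的数据写入内置存储设备
                    self.memory[pkt["addr"]] = pkt["data"]
                    return None
                if e.action == "MEM_RD":
                    # 读操作命令：读出数据并生成返回给源节点的响应数据包
                    return {"src": self.node_id, "dst": pkt["src"],
                            "proto": "RD_RLY", "addr": pkt["addr"],
                            "data": self.memory.get(pkt["addr"])}
        return None  # 未命中第一流表项：此处省略转发等其他流表项的处理
```

例如，先处理一个写请求数据包将数据写入内置存储设备，再处理一个读请求数据包，即可得到携带该数据的响应数据包。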
本发明实施例通过在交换机中内置存储设备，并对接收到的数据包进行流表匹配，当数据包与流表中的第一流表项匹配成功时，根据第一流表项的动作域中的操作命令直接对交换机内置的存储设备进行操作，从而减少甚至避免了处理器对远端内存的访问，可以减少内存访问的延时；同时，由于采用各交换机内的存储设备单独存储数据，不会在其他交换机的存储设备中存在副本，因此不存在需要维护Cache一致性的问题，实现简单。并且，当第一流表项匹配成功时，由于各个交换机直接对自身内部的存储设备进行访问，所以对于一个访问请求而言，只需要出入互连网络一次（即接收访问请求和返回响应消息），从而可以节省网络资源。
实施例二
本发明实施例提供了一种内存访问方法，可以应用于前述多处理器系统。其中，交换机中设有流表，流表中包括至少一条流表项，每条流表项均包括匹配域和动作域，其中，匹配域用于匹配交换机接收到的数据包中的信息，动作域用于指示当流表项匹配成功时对数据包进行何种操作。通常，流表中的流表项包括转发流表项，该转发流表项用于确定数据包的转发出口。流表中的流表项还可以包括数据包修改流表项，该数据包修改流表项用于确定对数据包内的信息进行修改，例如修改数据包的头域等。同时，交换机中还内置有存储设备（也可称内存），例如SRAM或DRAM等。在实现时，交换机可以为OFS，也可以为其他具有匹配功能的交换机。
下面结合图3对本实施例的方法进行详细说明,如图3所示,该方法包括:
步骤201：交换机接收处理器发送的数据包，该数据包包括源节点信息、目的节点信息和协议类型。
其中,源节点信息可以为源节点标识、源节点的MAC地址等;目的节点信息可以为目的节点标识、目的节点的MAC地址等;协议类型用于指示数据包的类型,例如读请求数据包、写请求数据包、计算请求数据包等。
该数据包中还可以包括操作地址,例如读写地址。可以理解地,当数据包为写数据包时,该数据包中还包括待写入的数据。
步骤202:交换机对该数据包进行流表匹配。当数据包与第一流表项匹配成功时,执行步骤203;当数据包与第二流表项匹配成功时,执行步骤204和步骤205。
如前所述,交换机中的流表可以包括多种表项,需要强调的是,在本发明实施例中,流表项还包括第一流表项,第一流表项的匹配域用于匹配数据包中的源节点信息、目的节点信息和协议类型,第一流表项的动作域用于指示对交换机内置的存储设备的操作命令。该操作命令包括但不限于读操作命令、写操作命令。
容易知道,第一流表项的动作域中的操作命令与其匹配域中的协议类型通常是相互对应的,例如,如果第一流表项的匹配域中的协议类型用于指示读请求数据包,则该第一流表项的动作域中的操作命令为对交换机内置的存储设备的读操作命令。
此外,在本实施例中,流表项还可以包括第二流表项,第二流表项的匹配域用于匹配数据包中的源节点信息、目的节点信息和协议类型,第二流表项的动作域用于指示对数据包中的数据进行计算处理的操作命令,该计算处理包括但不限于循环冗余校验(Cyclic Redundancy Check,简称CRC)或快速傅立叶变换(Fast Fourier Transformation,简称FFT)。
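以CRC为例，第二流表项命中后对数据包中数据的计算处理可用如下代码示意。此处借用Python标准库zlib的CRC32算法，实际实现采用的CRC多项式、位宽以及响应数据包的格式均可能不同，CALC_RLY等字段名为本示例的假设：

```python
import zlib

def compute(pkt, op):
    # 按第二流表项动作域中的操作命令对数据包中的数据做计算处理（示意）
    if op == "CRC":
        # 以CRC32为例；实际交换机采用的CRC多项式/位宽由具体实现决定
        result = zlib.crc32(pkt["data"]) & 0xFFFFFFFF
        # 将计算结果封装为发往源节点的响应数据包
        return {"src": pkt["dst"], "dst": pkt["src"],
                "proto": "CALC_RLY", "data": result}
    raise ValueError("未支持的计算操作命令: " + op)

rsp = compute({"src": "42", "dst": "11", "proto": "CALC",
               "data": b"hello"}, "CRC")
```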
在实现时，各种流表项（例如前述第一流表项、转发流表项、数据包修改流表项）可以配置在一个流表中，也可以按照流表项的功能，将不同的流表项配置在不同的流表中，例如，将第一流表项配置在一个流表中，将转发流表项配置在一个流表中，本发明实施例对此不做限制。
各种流表可以设置在交换机的TCAM或RLDRAM中。
容易知道,该步骤202中,会对收到的数据包与每一条流表项进行匹配。
进一步地，在具体实现中，一个交换机中的流表还可以根据交换机的端口来设置，例如，可以使交换机的一个端口对应一组流表，每组流表可以包括一个流表（该流表中包括所有种类的流表项），每组流表也可以包括多个流表（该多个流表分别包括不同种类的流表项），从一个端口接收的数据包仅在该端口对应的流表中进行匹配；又例如，可以在交换机中仅配置一组流表，所有端口接收到的数据包均在该组流表中进行匹配。
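上述按端口组织流表的第一种方式（每个端口对应一组流表，组内可含多个流表，数据包只在其入端口对应的组内匹配），可用如下示意代码表示；其中port_tables的结构以及MEM_RD/MEM_WR等动作名称均为本示例的假设：

```python
# 每个输入端口对应一组流表；从某端口接收的数据包只在该端口对应的组内匹配
port_tables = {
    0: [{("04", "11", "WR"): "MEM_WR"}],                  # 端口0的一组流表（含1个流表）
    1: [{("42", "11", "RD"): "MEM_RD"},
        {("04", "11", "RD"): "MEM_RD"}],                  # 端口1的一组流表（含2个流表）
}

def lookup(in_port, pkt):
    # 仅在入端口对应的流表组中逐个流表查找匹配项
    key = (pkt["src"], pkt["dst"], pkt["proto"])
    for table in port_tables.get(in_port, []):
        hit = table.get(key)
        if hit is not None:
            return hit
    return None  # 该端口的流表组中无匹配项
```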
步骤203:交换机按照匹配成功的第一流表项的动作域中的操作命令,对存储设备进行操作。
具体地,该步骤203可以包括:
当匹配成功的第一流表项的动作域中的操作命令为读操作命令时,从存储设备中读取数据,并将读取的数据返回给源节点信息对应的节点;
当匹配成功的第一流表项的动作域中的操作命令为写操作命令时,将数据包中的数据写入存储设备中。
下面结合图4对本实施例的具体应用场景以及交换机的具体操作流程进行详细说明。
处理器04和处理器42共同运行一个应用程序，两者交互后确定处理器42运行该应用程序所需的数据需要从处理器04获取。该应用程序向控制器申请共享存储空间（该共享存储空间即为交换机内置的存储设备的存储空间），以便于处理器04能将处理器42需要的数据写入共享存储空间，供处理器42访问。该控制器在系统建立时，即获知所有交换机中的存储设备，并对所有交换机中的存储设备进行管理。
控制器为该应用程序分配了位于交换机11上的存储空间，会将分配的存储空间的地址告知处理器04和处理器42（例如，会发送存储空间分配信息，存储空间分配信息可以包括交换机标识以及存储空间地址），并会向交换机11下发用于配置第一流表项的流表配置信息，配置的第一流表项至少包括两条，一条的匹配域包括源节点标识(04)、目的节点标识(11)和协议类型(WR)，对应的动作域为写操作命令(MEM WR)，另一条的匹配域包括源节点标识(xx)、目的节点标识(11)和协议类型(RD)，对应的动作域为读操作命令(MEM RD)。
此外,控制器还会向其他交换机(例如交换机00、01等)发送用于配置转发流表项的流表配置信息,该转发流表项用于指示将处理器04和处理器42发送给交换机11的数据包转发给交换机11。
处理器04获得控制器分配的存储空间后,如图4所示,处理器04发送写请求数据包②至互连网络中的交换机00,该写请求数据包包括源节点标识(04)、目的节点标识(11)、协议类型(WR,表示写请求)、写入地址(00)和待写入的数据(xx),该待写入的数据(xx)即为处理器42运行该应用程序需要的数据。交换机00将该写请求数据包转发至交换机01,交换机01将该写请求数据包转发至交换机11(即目的节点)。容易知道,交换机00和交换机01收到该写请求数据包后,对其进行流表匹配,根据匹配成功的转发流表项中的动作域对写请求数据包进行转发。
如前所述,交换机11中的流表①包括两条第一流表项,其中第一条流表项的匹配域包括源节点标识(04)、目的节点标识(11)和协议类型(WR),对应的动作域为写操作命令(MEM WR),其中第二条流表项的匹配域包括源节点标识(xx)、目的节点标识(11)和协议类型(RD),对应的动作域为读操作命令(MEM RD)。交换机11对写请求数据包进行流表匹配后,该写请求数据包可以与第一条流表项匹配成功,因此,对交换机11中的存储设备执行写操作命令,将待写入的数据xx写入交换机11中的存储设备中的地址00。
类似地,处理器42发送读请求数据包③至互连网络中的交换机02,该读请求数据包包括源节点标识(42)、目的节点标识(11)、协议类型(RD,表示读请求)和读取地址(00),交换机02将该读请求数据包转发至交换机12,交换机12将该读请求数据包转发至交换机11(即目的节点)。容易知道,交换机02和交换机12收到该读请求数据包后,对其进行流表匹配,根据匹配成功的转发流表项中的动作域对读请求数据包进行转发。
交换机11对读请求数据包进行流表匹配后，该读请求数据包可以与第二条流表项匹配成功，因此，对交换机11中的存储设备执行读操作命令，从交换机11中的存储设备中的地址00读取数据xx，并生成响应数据包④，该响应数据包包括源节点标识(11)、目的节点标识(42)、协议类型(RD RLY，表示读响应)、读取地址(00)和读出的数据(xx)，该响应数据包经由交换机12、交换机02返回给处理器42。
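上述图4场景中交换机11的两条第一流表项，以及写请求数据包②、读请求数据包③与响应数据包④的处理顺序，可用如下示意代码串起来。其中字段名与源节点通配(xx)的表示方式均为本示例的假设，仅用于说明匹配与读写的先后关系：

```python
# 交换机11的两条第一流表项：匹配域(源节点, 目的节点, 协议类型) -> 动作域
# None 表示通配源节点（对应文中的 xx）
flow_table_11 = [
    (("04", "11", "WR"), "MEM_WR"),
    ((None, "11", "RD"), "MEM_RD"),
]

def match(pkt):
    for (src, dst, proto), action in flow_table_11:
        if (src is None or src == pkt["src"]) \
                and dst == pkt["dst"] and proto == pkt["proto"]:
            return action
    return None

memory = {}  # 交换机11内置存储设备的简化模型

# 写请求数据包②：处理器04将数据xx写入地址00
wr = {"src": "04", "dst": "11", "proto": "WR", "addr": "00", "data": "xx"}
if match(wr) == "MEM_WR":
    memory[wr["addr"]] = wr["data"]

# 读请求数据包③：处理器42读取地址00，命中后生成响应数据包④
rd = {"src": "42", "dst": "11", "proto": "RD", "addr": "00"}
rsp = None
if match(rd) == "MEM_RD":
    rsp = {"src": "11", "dst": rd["src"], "proto": "RD RLY",
           "addr": rd["addr"], "data": memory[rd["addr"]]}
```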
从该读请求数据包的处理流程可以看出,与现有的访问外部存储设备的读请求流程相比(处理器发送的读请求数据包先进入互连网络,然后由互连网络转发至外部存储设备,然后外部存储设备将读响应数据包发送至互连网络,由互连网络转发至处理器,一个访问过程需要有数据包进出互连网络各两次),本实施例的一个访问过程只需要数据包进出互连网络一次,所以本发明实施例的内存访问方法可以节省网络资源。
需要说明的是,前述应用场景仅为举例,并不以此为限,本发明实施例还适用于以下场景:控制器先为应用分配了共享存储空间,然后将相应的应用分配给处理器处理,这时分配到处理器上的应用事先就已经知道有共享存储空间可以使用,从而能够直接对该共享存储空间进行访问。
步骤204:交换机按照匹配成功的第二流表项的动作域中的操作命令,对数据包中的数据进行计算处理,得到计算结果。
在实现时,可以在交换机中设置计算模块,也可以设置一个专门的计算设备,交换机将该数据发送给该专门的计算设备,并接收计算设备返回的计算结果。
步骤205:交换机将计算结果发送给处理器。
该处理器即为数据包中的源节点信息对应的节点。
在本实施例中,即将计算结果发送给步骤201中的处理器。
容易知道,交换机中的流表项可以由OFC配置,所以该方法还可以包括:
接收OFC发送的流表配置消息,该流表配置消息用于为该交换机配置流表项;
根据该流表配置消息配置流表项。
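交换机根据流表配置消息配置流表项的过程可用如下代码示意。流表配置消息的具体格式由控制器与交换机之间的协议（例如OpenFlow）决定，此处的消息结构仅为本示例的假设：

```python
def configure(flow_table, msg):
    # 根据控制器(OFC)下发的流表配置消息向流表中添加一条流表项（示意）
    # 假设的消息格式：{"match": {"src": ..., "dst": ..., "proto": ...}, "action": ...}
    m = msg["match"]
    key = (m["src"], m["dst"], m["proto"])
    flow_table[key] = msg["action"]

table = {}
configure(table, {"match": {"src": "04", "dst": "11", "proto": "WR"},
                  "action": "MEM_WR"})
configure(table, {"match": {"src": "42", "dst": "11", "proto": "RD"},
                  "action": "MEM_RD"})
```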
本发明实施例通过在交换机中内置存储设备，并对接收到的数据包进行流表匹配，当数据包与流表中的第一流表项匹配成功时，根据第一流表项的动作域中的操作命令直接对交换机内置的存储设备进行操作，从而减少甚至避免了处理器对远端内存的访问，可以减少内存访问的延时；同时，由于采用各交换机内的存储设备单独存储数据，不会在其他交换机的存储设备中存在副本，因此不存在需要维护Cache一致性的问题，实现简单。并且，当第一流表项匹配成功时，由于各个交换机直接对自身内部的存储设备进行访问，所以对于一个访问请求而言，只需要出入互连网络一次（即接收访问请求和返回响应消息），从而可以节省网络资源。此外，本实施例中，交换机中还配置有第二流表项，当数据包与第二流表项匹配时，可以对数据包中的数据进行计算处理，增强了网络的硬件计算能力。
实施例三
本发明实施例提供了一种交换机，其中设有流表，流表中包括至少一条流表项，每条流表项均包括匹配域和动作域，其中，匹配域用于匹配交换机接收到的数据包中的信息，动作域用于指示当流表项匹配成功时对数据包进行何种操作。通常，流表中的流表项包括转发流表项，该转发流表项用于确定数据包的转发出口。流表中的流表项还可以包括数据包修改流表项，该数据包修改流表项用于确定对数据包内的信息进行修改，例如修改数据包的头域等。同时，交换机中还内置有存储设备，例如SRAM或DRAM等。在实现时，交换机可以为OFS。
参见图5,该交换机包括:第一接收模块301、匹配模块302和操作模块303。
其中,第一接收模块301用于接收数据包,该数据包包括源节点信息、目的节点信息和协议类型,该协议类型用于指示数据包的类型。其中,源节点信息可以为源节点标识、源节点的MAC地址等;目的节点信息可以为目的节点标识、目的节点的MAC地址等;协议类型用于指示数据包的类型,例如读请求数据包、写请求数据包、计算请求数据包等。
匹配模块302用于对第一接收模块301接收的数据包进行流表匹配,该流表包括至少一个流表项,该流表项包括匹配域和动作域,至少一个流表项包括第一流表项,第一流表项的匹配域用于匹配数据包中的源节点信息、目的节点信息和协议类型,第一流表项的动作域用于指示对交换机内置的存储设备的操作命令。该操作命令包括但不限于读操作命令、写操作命令。
容易知道,第一流表项的动作域中的操作命令与其匹配域中的协议类型通常是相互对应的,例如,如果第一流表项的匹配域中的协议类型用于指示读请求数据包,则该第一流表项的动作域中的操作命令为对交换机内置的存储设备的读操作命令。
在实现时，各种流表项（例如前述第一流表项、转发流表项、数据包修改流表项）可以配置在一个流表中，也可以按照流表项的功能，将不同的流表项配置在不同的流表中。
各种流表通常设置在交换机的三态内容寻址存储器(Ternary Content Addressable Memory,简称TCAM)或低延迟动态随机存取存储器(Reduced Latency Dynamic Random Access Memory,简称RLDRAM)中。
操作模块303用于当数据包与第一流表项匹配成功时,按照匹配成功的第一流表项的动作域中的操作命令,对存储设备进行操作。
本发明实施例通过在交换机中内置存储设备，并对接收到的数据包进行流表匹配，当数据包与流表中的第一流表项匹配成功时，根据第一流表项的动作域中的操作命令直接对交换机内置的存储设备进行操作，从而减少甚至避免了处理器对远端内存的访问，可以减少内存访问的延时；同时，由于采用各交换机内的存储设备单独存储数据，不会在其他交换机的存储设备中存在副本，因此不存在需要维护Cache一致性的问题，实现简单。并且，当第一流表项匹配成功时，由于各个交换机直接对自身内部的存储设备进行访问，所以对于一个访问请求而言，只需要出入互连网络一次（即接收访问请求和返回响应消息），从而可以节省网络资源。
实施例四
本发明实施例提供了一种交换机，其中设有流表，流表中包括至少一条流表项，每条流表项均包括匹配域和动作域，其中，匹配域用于匹配交换机接收到的数据包中的信息，动作域用于指示当流表项匹配成功时对数据包进行何种操作。通常，流表中的流表项包括转发流表项，该转发流表项用于确定数据包的转发出口。流表中的流表项还可以包括数据包修改流表项，该数据包修改流表项用于确定对数据包内的信息进行修改，例如修改数据包的头域等。同时，交换机中还内置有存储设备，例如SRAM或DRAM等。在实现时，交换机可以为OFS，也可以为其他具有匹配功能的交换机。
参见图6,该交换机包括:第一接收模块401、匹配模块402和操作模块403。
其中，第一接收模块401用于接收数据包，该数据包包括源节点信息、目的节点信息和协议类型，该协议类型用于指示数据包的类型。匹配模块402用于对第一接收模块401接收的数据包进行流表匹配，该流表包括至少一个流表项，该流表项包括匹配域和动作域，至少一个流表项包括第一流表项，第一流表项的匹配域用于匹配数据包中的源节点信息、目的节点信息和协议类型，第一流表项的动作域用于指示对交换机内置的存储设备的操作命令。操作模块403用于当数据包与第一流表项匹配成功时，按照匹配成功的第一流表项的动作域中的操作命令，对存储设备进行操作。
其中,源节点信息可以为源节点标识、源节点的MAC地址等;目的节点信息可以为目的节点标识、目的节点的MAC地址等;协议类型用于指示数据包的类型,例如读请求数据包、写请求数据包、计算请求数据包等。
该数据包中还可以包括操作地址,例如读写地址。可以理解地,当数据包为写数据包时,该数据包中还包括待写入的数据。
在本实施例的一种实现方式中，该操作模块403可以包括：
读取单元,用于当第一流表项中的操作命令为读操作命令时,从存储设备中读取数据;
发送单元,用于将读取单元读取的数据返回给源节点信息对应的节点;
写入单元,用于当第一流表项中的操作命令为写操作命令时,将数据包中的数据写入存储设备中。
第一流表项中的操作命令包括但不限于读操作命令、写操作命令。容易知道,第一流表项的动作域中的操作命令与其匹配域中的协议类型通常是相互对应的,例如,如果第一流表项的匹配域中的协议类型用于指示读请求数据包,则该第一流表项的动作域中的操作命令为对交换机内置的存储设备的读操作命令。
在本实施例中,交换机中的流表项还包括第二流表项,第二流表项的匹配域用于匹配数据包中的源节点信息、目的节点信息和协议类型,第二流表项的动作域用于指示对数据包中的数据进行计算处理的操作命令。该计算处理包括但不限于CRC或FFT。
在实现时,各种流表项(例如前述第一流表项、转发流表项、数据包修改流表项)可以配置在一个流表中,也可以按照流表项的功能,将不同的流表项配置在不同的流表中,例如,将第一流表项配置在一个流表中,将转发流表项配置在一个流表中,本发明实施例对此不做限制。
各种流表可以设置在交换机的TCAM或RLDRAM中。
容易知道，匹配模块402会对交换机收到的数据包与交换机中的每一条流表项进行匹配。
进一步地，在具体实现中，一个交换机中的流表还可以根据交换机的端口来设置，例如，可以使交换机的一个端口对应一组流表，每组流表可以包括一个流表（该流表中包括所有种类的流表项），每组流表也可以包括多个流表（该多个流表分别包括不同种类的流表项），从一个端口接收的数据包仅在该端口对应的流表中进行匹配；又例如，可以在交换机中仅配置一组流表，所有端口接收到的数据包均在该组流表中进行匹配。
相应地,该交换机还可以包括:
处理模块404,用于当数据包与第二流表项匹配成功时,按照匹配成功的第二流表项的动作域中的操作命令,对数据包中的数据进行计算处理,得到计算结果;
发送模块405,用于将处理模块404得到的计算结果发送给数据包中的源节点信息对应的节点,例如处理器。
在实现时,可以在交换机中设置计算模块,也可以设置一个专门的计算设备,交换机将该数据发送给该专门的计算设备,并接收计算设备返回的计算结果。
可选地,该交换机还可以包括:
第二接收模块406,用于接收控制器发送的流表配置消息,流表配置消息用于为交换机配置流表项;
配置模块407,用于根据第二接收模块406收到的流表配置消息配置流表项。
本发明实施例通过在交换机中内置存储设备，并对接收到的数据包进行流表匹配，当数据包与流表中的第一流表项匹配成功时，根据第一流表项的动作域中的操作命令直接对交换机内置的存储设备进行操作，从而减少甚至避免了处理器对远端内存的访问，可以减少内存访问的延时；同时，由于采用各交换机内的存储设备单独存储数据，不会在其他交换机的存储设备中存在副本，因此不存在需要维护Cache一致性的问题，实现简单。并且，当第一流表项匹配成功时，由于各个交换机直接对自身内部的存储设备进行访问，所以对于一个访问请求而言，只需要出入互连网络一次（即接收访问请求和返回响应消息），从而可以节省网络资源。此外，本实施例中，交换机中还配置有第二流表项，当数据包与第二流表项匹配时，可以对数据包中的数据进行计算处理，增强了网络的硬件计算能力。
实施例五
本发明实施例提供了一种交换机,参见图7,该交换机包括处理器501、存储器502、总线503和通信接口504。其中,存储器502用于存储计算机执行指令,处理器501与存储器502通过总线503连接,当所述计算机运行时,处理器501执行存储器502存储的所述计算机执行指令,以使所述交换机执行实施例一或实施例二中交换机所执行的方法。
该交换机还包括内置的存储设备,该内置的存储设备可以为SRAM或DRAM等。该存储设备可以为存储器502,也可以为独立于存储器502的存储设备。
在具体实现中,该交换机的硬件结构可以如图8所示,包括:输入端口601、处理器602、内置的存储设备603、存储器604、纵横开关(CrossBar)总线605、输出端口606。
其中,输入端口601用于接收数据包,该数据包包括源节点信息、目的节点信息和协议类型。
存储器604用于存储流表。流表包括至少一个流表项,每个流表项都包括匹配域和动作域。流表中的流表项通常包括转发流表项,该转发流表项用于确定数据包的转发出口。流表中的流表项还可以包括数据包修改流表项,该数据包修改流表项用于确定对数据包内的信息进行修改,例如修改数据包的头域等。
在本发明实施例中,流表项还包括第一流表项,第一流表项的匹配域用于匹配数据包中的源节点信息、目的节点信息和协议类型,第一流表项的动作域用于指示对交换机内置的存储设备的操作命令。该操作命令包括但不限于读操作命令、写操作命令。
容易知道,第一流表项的动作域中的操作命令与其匹配域中的协议类型通常是相互对应的,例如,如果第一流表项的匹配域中的协议类型用于指示读请求数据包,则该第一流表项的动作域中的操作命令为对交换机内置的存储设备的读操作命令。
在实现时，各种流表项（例如前述第一流表项、转发流表项、数据包修改流表项）可以配置在一个流表中，也可以按照流表项的功能，将不同的流表项配置在不同的流表中。
处理器602用于采用存储器604中的流表,对输入端口的数据包进行流表匹配,并且,当数据包与流表中的第一流表项匹配成功时,按照匹配成功的第一流表项的动作域中的操作命令,对内置的存储设备603进行操作,例如,读操作和写操作。
操作完成后,将响应数据包通过CrossBar总线605输出到输出端口606,从而反馈给请求设备。
处理器602还用于当数据包与第二流表项匹配成功时,按照匹配成功的第二流表项的动作域中的操作命令,对数据包中的数据进行计算处理,得到计算结果,并将该计算结果通过CrossBar总线605和输出端口606发送给请求设备。
本发明实施例通过在交换机中内置存储设备，并对接收到的数据包进行流表匹配，当数据包与流表中的第一流表项匹配成功时，根据第一流表项的动作域中的操作命令直接对交换机内置的存储设备进行操作，从而减少甚至避免了处理器对远端内存的访问，可以减少内存访问的延时；同时，由于采用各交换机内的存储设备单独存储数据，不会在其他交换机的存储设备中存在副本，因此不存在需要维护Cache一致性的问题，实现简单。并且，当第一流表项匹配成功时，由于各个交换机直接对自身内部的存储设备进行访问，所以对于一个访问请求而言，只需要出入互连网络一次（即接收访问请求和返回响应消息），从而可以节省网络资源。此外，本实施例中，交换机中还配置有第二流表项，当数据包与第二流表项匹配时，可以对数据包中的数据进行计算处理，增强了网络的硬件计算能力。
实施例六
本发明实施例提供了一种交换机,参见图9,该交换机包括:输入端口701、存储器702、查表逻辑电路703、存储设备704、操作逻辑电路705、CrossBar总线706、输出端口707。
其中,输入端口701用于接收数据包,该数据包包括源节点信息、目的节点信息和协议类型。
存储器702用于存储流表。流表包括至少一个流表项，每个流表项都包括匹配域和动作域。流表中的流表项通常包括转发流表项，该转发流表项用于确定数据包的转发出口。流表中的流表项还可以包括数据包修改流表项，该数据包修改流表项用于确定对数据包内的信息进行修改，例如修改数据包的头域等。
在本发明实施例中,流表项还包括第一流表项,第一流表项的匹配域用于匹配数据包中的源节点信息、目的节点信息和协议类型,第一流表项的动作域用于指示对交换机内置的存储设备的操作命令。该操作命令包括但不限于读操作命令、写操作命令。
容易知道,第一流表项的动作域中的操作命令与其匹配域中的协议类型通常是相互对应的,例如,如果第一流表项的匹配域中的协议类型用于指示读请求数据包,则该第一流表项的动作域中的操作命令为对交换机内置的存储设备的读操作命令。
在实现时,各种流表项(例如前述第一流表项、转发流表项、数据包修改流表项)可以配置在一个流表中,也可以按照流表项的功能,将不同的流表项配置在不同的流表中。
查表逻辑电路703用于采用存储器702中的流表,对输入端口的数据包进行流表匹配,操作逻辑电路705用于根据查表逻辑电路703的输出结果,按照匹配成功的第一流表项的动作域中的操作命令,对内置的存储设备704进行操作,例如,读操作和写操作。
操作完成后,将响应数据包通过CrossBar总线706输出到输出端口707,从而反馈给请求设备。
操作逻辑电路705还用于当数据包与第二流表项匹配成功时,按照匹配成功的第二流表项的动作域中的操作命令,对数据包中的数据进行计算处理,得到计算结果,并将该计算结果通过CrossBar总线706和输出端口707发送给请求设备。
其中,存储器702可以为TCAM或RLDRAM,存储设备704可以为SRAM或DRAM等。查表逻辑电路703和操作逻辑电路705的具体电路为本领域技术的常规电路,故在此省略详细结构描述。
本发明实施例通过在交换机中内置存储设备，并对接收到的数据包进行流表匹配，当数据包与流表中的第一流表项匹配成功时，根据第一流表项的动作域中的操作命令直接对交换机内置的存储设备进行操作，从而减少甚至避免了处理器对远端内存的访问，可以减少内存访问的延时；同时，由于采用各交换机内的存储设备单独存储数据，不会在其他交换机的存储设备中存在副本，因此不存在需要维护Cache一致性的问题，实现简单。并且，当第一流表项匹配成功时，由于各个交换机直接对自身内部的存储设备进行访问，所以对于一个访问请求而言，只需要出入互连网络一次（即接收访问请求和返回响应消息），从而可以节省网络资源。此外，本实施例中，交换机中还配置有第二流表项，当数据包与第二流表项匹配时，可以对数据包中的数据进行计算处理，增强了网络的硬件计算能力。
实施例七
本发明实施例提供了一种多处理器系统,参见图10,该系统包括:多个处理器801和互连网络800,多个处理器801通过互连网络800通信连接,互连网络800包括多个交换机802,交换机802为实施例四、五或六提供的交换机。
在本实施例的一种实现方式中,该多处理器系统还可以包括多个外部存储设备803,多个外部存储设备803通过互连网络800与多个处理器801通信连接。
在本实施例的另一种实现方式中,该多处理器系统可以为片上系统(System on Chip,简称SoC)。容易知道,交换机802也可以为独立的通信设备。
本发明实施例通过在交换机中内置存储设备，并对接收到的数据包进行流表匹配，当数据包与流表中的第一流表项匹配成功时，根据第一流表项的动作域中的操作命令直接对交换机内置的存储设备进行操作，从而减少甚至避免了处理器对远端内存的访问，可以减少内存访问的延时；同时，由于采用各交换机内的存储设备单独存储数据，不会在其他交换机的存储设备中存在副本，因此不存在需要维护Cache一致性的问题，实现简单。并且，当第一流表项匹配成功时，由于各个交换机直接对自身内部的存储设备进行访问，所以对于一个访问请求而言，只需要出入互连网络一次（即接收访问请求和返回响应消息），从而可以节省网络资源。此外，本实施例中，交换机中还配置有第二流表项，当数据包与第二流表项匹配时，可以对数据包中的数据进行计算处理，增强了网络的硬件计算能力。
需要说明的是：上述实施例提供的交换机根据收到的数据包进行内存访问时，仅以上述各功能模块的划分进行举例说明，实际应用中，可以根据需要而将上述功能分配由不同的功能模块完成，即将装置的内部结构划分成不同的功能模块，以完成以上描述的全部或者部分功能。另外，上述实施例提供的交换机与内存访问方法实施例属于同一构思，其具体实现过程详见方法实施例，这里不再赘述。
上述本发明实施例序号仅仅为了描述,不代表实施例的优劣。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
以上所述仅为本发明的较佳实施例,并不用以限制本发明,凡在本发明的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本发明的保护范围之内。

Claims (15)

  1. 一种内存访问方法,其特征在于,所述方法包括:
    交换机接收数据包,所述数据包包括源节点信息、目的节点信息和协议类型,所述协议类型用于指示所述数据包的类型;
    对所述数据包进行流表匹配,所述流表包括至少一个流表项,所述流表项包括匹配域和动作域,所述至少一个流表项包括第一流表项,所述第一流表项的匹配域用于匹配所述数据包中的源节点信息、目的节点信息和协议类型,所述第一流表项的动作域用于指示对所述交换机内置的存储设备的操作命令;
    当所述数据包与所述第一流表项匹配成功时,按照匹配成功的所述第一流表项的动作域中的操作命令,对所述存储设备进行操作。
  2. 根据权利要求1所述的方法,其特征在于,所述按照匹配成功的第一流表项的动作域中的操作命令,对所述存储设备进行操作,包括:
    当匹配成功的第一流表项的动作域中的所述操作命令为读操作命令时,从所述存储设备中读取数据,并将读取的所述数据返回给所述源节点信息对应的节点;
    当匹配成功的第一流表项的动作域中的所述操作命令为写操作命令时,将所述数据包中的数据写入所述存储设备中。
  3. 根据权利要求1或2所述的方法,其特征在于,所述至少一个流表项还包括第二流表项,所述第二流表项的匹配域用于匹配所述数据包中的源节点信息、目的节点信息和协议类型,所述第二流表项的动作域用于指示对所述数据包中的数据进行计算处理的操作命令。
  4. 根据权利要求3所述的方法,其特征在于,所述方法还包括:
    当所述数据包与所述第二流表项匹配成功时,按照匹配成功的所述第二流表项的动作域中的操作命令,对所述数据包中的数据进行计算处理,得到计算结果;
    将所述计算结果发送给所述数据包中的所述源节点信息对应的节点。
  5. 根据权利要求1-4任一项所述的方法,其特征在于,所述方法还包括:
    接收控制器发送的流表配置消息,所述流表配置消息用于为所述交换机配置所述流表项;
    根据所述流表配置消息配置所述流表项。
  6. 一种交换机,其特征在于,所述交换机包括:
    第一接收模块,用于接收数据包,所述数据包包括源节点信息、目的节点信息和协议类型,所述协议类型用于指示所述数据包的类型;
    匹配模块,用于对所述第一接收模块接收的所述数据包进行流表匹配,所述流表包括至少一个流表项,所述流表项包括匹配域和动作域,所述至少一个流表项包括第一流表项,所述第一流表项的匹配域用于匹配所述数据包中的源节点信息、目的节点信息和协议类型,所述第一流表项的动作域用于指示对所述交换机内置的存储设备的操作命令;
    操作模块,用于当所述数据包与所述第一流表项匹配成功时,按照匹配成功的所述第一流表项的动作域中的操作命令,对所述存储设备进行操作。
  7. 根据权利要求6所述的交换机,其特征在于,所述操作模块包括:
    读取单元,用于当所述操作命令为读操作命令时,从所述存储设备中读取数据;
    发送单元,用于将所述读取单元读取的所述数据返回给所述源节点信息对应的节点;
    写入单元,用于当所述操作命令为写操作命令时,将所述数据包中的数据写入所述存储设备中。
  8. 根据权利要求6或7所述的交换机,其特征在于,所述至少一个流表项还包括第二流表项,所述第二流表项的匹配域用于匹配所述数据包中的源节点信息、目的节点信息和协议类型,所述第二流表项的动作域用于指示对所述数据包中的数据进行计算处理的操作命令。
  9. 根据权利要求8所述的交换机,其特征在于,所述交换机还包括:
    处理模块，用于当所述数据包与所述第二流表项匹配成功时，按照匹配成功的所述第二流表项的动作域中的操作命令，对所述数据包中的数据进行计算处理，得到计算结果；
    发送模块,用于将所述处理模块得到的所述计算结果发送给所述数据包中的所述源节点信息对应的节点。
  10. 根据权利要求6-9任一项所述的交换机,其特征在于,所述交换机还包括:
    第二接收模块,用于接收控制器发送的流表配置消息,所述流表配置消息用于为所述交换机配置所述流表项;
    配置模块,用于根据所述第二接收模块收到的所述流表配置消息配置所述流表项。
  11. 一种交换机,其特征在于,所述交换机包括:处理器、存储器、总线和通信接口;所述存储器用于存储计算机执行指令,所述处理器与所述存储器通过所述总线连接,当所述计算机运行时,所述处理器执行所述存储器存储的所述计算机执行指令,以使所述交换机执行如权利要求1~5任意一项所述的方法。
  12. 一种交换机,其特征在于,所述交换机包括:
    输入端口,用于接收数据包,所述数据包包括源节点信息、目的节点信息和协议类型,所述协议类型用于指示所述数据包的类型;
    存储器,用于存储流表,所述流表包括至少一个流表项,所述流表项包括匹配域和动作域,所述至少一个流表项包括第一流表项,所述第一流表项的匹配域用于匹配所述数据包中的源节点信息、目的节点信息和协议类型,所述第一流表项的动作域用于指示对所述交换机内置的存储设备的操作命令;
    存储设备,用于存储数据;
    查表逻辑电路,用于采用所述存储器存储的流表对所述输入端口接收的数据包进行流表匹配;
    操作逻辑电路,用于当所述数据包与所述第一流表项匹配成功时,按照匹配成功的所述第一流表项的动作域中的操作命令,对所述存储设备进行操作;
    纵横开关总线,用于为所述操作逻辑电路传输的数据包选择输出端口;
    输出端口,用于发送所述纵横开关总线传输的数据包。
  13. 一种多处理器系统，所述多处理器系统包括：多个处理器和互连网络，所述多个处理器通过所述互连网络通信连接，所述互连网络包括多个交换机，其特征在于，所述交换机为权利要求6-12任一项所述的交换机。
  14. 根据权利要求13所述的多处理器系统,其特征在于,所述多处理器系统还包括多个外部存储设备,所述多个外部存储设备通过所述互连网络与所述多个处理器通信连接。
  15. 根据权利要求13或14所述的多处理器系统,其特征在于,所述多处理器系统为片上系统。
PCT/CN2014/092421 2014-11-28 2014-11-28 内存访问方法、交换机及多处理器系统 Ceased WO2016082169A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN201480037772.0A CN105874758B (zh) 2014-11-28 2014-11-28 内存访问方法、交换机及多处理器系统
PCT/CN2014/092421 WO2016082169A1 (zh) 2014-11-28 2014-11-28 内存访问方法、交换机及多处理器系统
JP2017528517A JP6514329B2 (ja) 2014-11-28 2014-11-28 メモリアクセス方法、スイッチ、およびマルチプロセッサシステム
EP14907135.9A EP3217616B1 (en) 2014-11-28 2014-11-28 Memory access method and multi-processor system
US15/607,200 US10282293B2 (en) 2014-11-28 2017-05-26 Method, switch, and multiprocessor system using computations and local memory operations

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2014/092421 WO2016082169A1 (zh) 2014-11-28 2014-11-28 内存访问方法、交换机及多处理器系统

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/607,200 Continuation US10282293B2 (en) 2014-11-28 2017-05-26 Method, switch, and multiprocessor system using computations and local memory operations

Publications (1)

Publication Number Publication Date
WO2016082169A1 true WO2016082169A1 (zh) 2016-06-02

Family

ID=56073370

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/092421 Ceased WO2016082169A1 (zh) 2014-11-28 2014-11-28 内存访问方法、交换机及多处理器系统

Country Status (5)

Country Link
US (1) US10282293B2 (zh)
EP (1) EP3217616B1 (zh)
JP (1) JP6514329B2 (zh)
CN (1) CN105874758B (zh)
WO (1) WO2016082169A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109672615A (zh) * 2017-10-17 2019-04-23 华为技术有限公司 数据报文缓存方法及装置
CN116684358A (zh) * 2023-07-31 2023-09-01 之江实验室 一种可编程网元设备的流表管理系统及方法

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016167806A1 (en) * 2015-04-17 2016-10-20 Hewlett Packard Enterprise Development Lp Organizing and storing network communications
US10541918B2 (en) * 2018-02-22 2020-01-21 Juniper Networks, Inc. Detecting stale memory addresses for a network device flow cache
CN109450811B (zh) * 2018-11-30 2022-08-12 新华三云计算技术有限公司 流量控制方法、装置及服务器
TWI688955B (zh) * 2019-03-20 2020-03-21 點序科技股份有限公司 記憶體裝置以及記憶體的存取方法
CN112383480B (zh) * 2020-10-29 2022-11-04 曙光网络科技有限公司 流表的处理方法、装置、监管设备和存储介质
CN113132358A (zh) * 2021-03-29 2021-07-16 井芯微电子技术(天津)有限公司 策略分发器、拟态交换机及网络系统
US20240338315A1 (en) * 2023-04-05 2024-10-10 Samsung Electronics Co., Ltd. Systems, methods, and apparatus for computational device communication using a coherent interface
CN121029102B (zh) * 2025-10-29 2026-01-27 苏州元脑智能科技有限公司 数据存储系统和电子设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102946325A (zh) * 2012-11-14 2013-02-27 中兴通讯股份有限公司 一种基于软件定义网络的网络诊断方法、系统及设备
CN103036653A (zh) * 2012-12-26 2013-04-10 华中科技大学 一种对OpenFlow网络进行网络编码的方法
WO2013133227A1 (ja) * 2012-03-05 2013-09-12 日本電気株式会社 ネットワークシステム、スイッチ、及びネットワーク構築方法
CN103401784A (zh) * 2013-07-31 2013-11-20 华为技术有限公司 一种接收报文的方法及开放流交换机
CN103501280A (zh) * 2013-09-12 2014-01-08 电子科技大学 一种多层虚拟覆盖网络接入方法

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7000078B1 (en) * 1999-10-01 2006-02-14 Stmicroelectronics Ltd. System and method for maintaining cache coherency in a shared memory system
JPWO2010104033A1 (ja) * 2009-03-09 2012-09-13 日本電気株式会社 プロセッサ間通信システム及び通信方法、ネットワークスイッチ、及び並列計算システム
JP5757552B2 (ja) * 2010-02-19 2015-07-29 日本電気株式会社 コンピュータシステム、コントローラ、サービス提供サーバ、及び負荷分散方法
US8327047B2 (en) * 2010-03-18 2012-12-04 Marvell World Trade Ltd. Buffer manager and methods for managing memory
US20130142073A1 (en) 2010-08-17 2013-06-06 Nec Corporation Communication unit, communication system, communication method, and recording medium
CN103608791A (zh) 2011-06-16 2014-02-26 日本电气株式会社 通信系统、控制器、交换机、存储器管理设备和通信方法
JP5794320B2 (ja) 2012-02-02 2015-10-14 日本電気株式会社 コントローラ、負荷分散方法、プログラム、コンピュータシステム、制御装置
JP5966488B2 (ja) * 2012-03-23 2016-08-10 日本電気株式会社 ネットワークシステム、スイッチ、及び通信遅延短縮方法
WO2013146808A1 (ja) * 2012-03-28 2013-10-03 日本電気株式会社 コンピュータシステム、及び通信経路変更方法
JP5987920B2 (ja) 2013-01-21 2016-09-07 日本電気株式会社 通信システム、制御装置及びネットワークトポロジの管理方法
US8964752B2 (en) * 2013-02-25 2015-02-24 Telefonaktiebolaget L M Ericsson (Publ) Method and system for flow table lookup parallelization in a software defined networking (SDN) system
US10645032B2 (en) * 2013-02-28 2020-05-05 Texas Instruments Incorporated Packet processing match and action unit with stateful actions

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013133227A1 (ja) * 2012-03-05 2013-09-12 日本電気株式会社 ネットワークシステム、スイッチ、及びネットワーク構築方法
CN102946325A (zh) * 2012-11-14 2013-02-27 中兴通讯股份有限公司 一种基于软件定义网络的网络诊断方法、系统及设备
CN103036653A (zh) * 2012-12-26 2013-04-10 华中科技大学 一种对OpenFlow网络进行网络编码的方法
CN103401784A (zh) * 2013-07-31 2013-11-20 华为技术有限公司 一种接收报文的方法及开放流交换机
CN103501280A (zh) * 2013-09-12 2014-01-08 电子科技大学 一种多层虚拟覆盖网络接入方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3217616A4 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109672615A (zh) * 2017-10-17 2019-04-23 华为技术有限公司 数据报文缓存方法及装置
CN116684358A (zh) * 2023-07-31 2023-09-01 之江实验室 一种可编程网元设备的流表管理系统及方法
CN116684358B (zh) * 2023-07-31 2023-12-12 之江实验室 一种可编程网元设备的流表管理系统及方法

Also Published As

Publication number Publication date
JP6514329B2 (ja) 2019-05-15
JP2017537404A (ja) 2017-12-14
CN105874758A (zh) 2016-08-17
CN105874758B (zh) 2019-07-12
EP3217616B1 (en) 2018-11-21
US20170262371A1 (en) 2017-09-14
US10282293B2 (en) 2019-05-07
EP3217616A4 (en) 2017-09-13
EP3217616A1 (en) 2017-09-13

Similar Documents

Publication Publication Date Title
WO2016082169A1 (zh) 内存访问方法、交换机及多处理器系统
US11128555B2 (en) Methods and apparatus for SDI support for automatic and transparent migration
US9866479B2 (en) Technologies for concurrency of cuckoo hashing flow lookup
JP5841255B2 (ja) 仮想化入力/出力のためのプロセッサローカルコヒーレンシを有するコンピュータシステム
TWI591485B (zh) 用於減少多節點機箱系統之管理埠之電腦可讀取儲存裝置、系統及方法
CN105159753A (zh) 加速器虚拟化的方法、装置及集中资源管理器
WO2014206078A1 (zh) 内存访问方法、装置及系统
US20230144693A1 (en) Processing system that increases the memory capacity of a gpgpu
TWI505183B (zh) 共享記憶體系統
US9305618B2 (en) Implementing simultaneous read and write operations utilizing dual port DRAM
CN107209725A (zh) 处理写请求的方法、处理器和计算机
WO2021213209A1 (zh) 数据处理方法及装置、异构系统
CN115114042A (zh) 存储数据访问方法、装置、电子设备和存储介质
CN103500108B (zh) 系统内存访问方法、节点控制器和多处理器系统
JP2020017263A (ja) メモリーシステム
US20180039518A1 (en) Arbitrating access to a resource that is shared by multiple processors
US10909044B2 (en) Access control device, access control method, and recording medium containing access control program
KR20050080704A (ko) 프로세서간 데이터 전송 장치 및 방법
CN115114192A (zh) 存储器接口、功能核、众核系统和存储数据访问方法
JP2021026767A (ja) データメモリアクセスの方法、装置、電子機器及びコンピュータ読み取り可能な記憶媒体
CN106557429A (zh) 一种内存数据的迁移方法和节点控制器
US20240069755A1 (en) Computer system, memory expansion device and method for use in computer system
WO2022199357A1 (zh) 数据处理方法及装置、电子设备、计算机可读存储介质
US20240370374A1 (en) Computer system, method for computer system, and readable storage medium
CN119396764B (zh) 一种芯片互联方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14907135

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2017528517

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

REEP Request for entry into the european phase

Ref document number: 2014907135

Country of ref document: EP