Arifuzzaman et al., 2023 - Google Patents
Use only what you need: Judicious parallelism for file transfers in high performance networks
- Document ID
- 1633135864949187447
- Author
- Arifuzzaman M
- Arslan E
- Publication year
- 2023
- Publication venue
- Proceedings of the 37th International Conference on Supercomputing
Snippet
Parallelism is key to efficiently utilizing high-speed research networks when transferring large volumes of data. However, the monolithic design of existing transfer applications requires the same level of parallelism to be used for read, write, and network operations for …
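The snippet describes the core idea: choosing read, write, and network concurrency independently instead of forcing one parallelism level on all stages, as monolithic transfer tools do. Below is a minimal, hypothetical Python sketch of the sender side of such a decoupled pipeline, not the paper's implementation; a small reader pool feeds a bounded queue, and a separately sized set of network streams drains it. The knobs `READ_WORKERS`, `SEND_WORKERS`, and `CHUNK_SIZE` are illustrative assumptions.

```python
import queue
import socket
import threading
from concurrent.futures import ThreadPoolExecutor

# Illustrative knobs (assumptions, not values from the paper): the point is
# that read and network concurrency are chosen independently.
READ_WORKERS = 2                  # just enough threads to keep storage busy
SEND_WORKERS = 8                  # more streams to fill the network pipe
CHUNK_SIZE = 4 * 1024 * 1024      # 4 MiB blocks
SENTINEL = None                   # marks end of work for sender threads

chunks = queue.Queue(maxsize=64)  # bounded buffer decoupling the two stages


def read_file(path):
    """Stage 1: read one file in fixed-size chunks and enqueue them."""
    with open(path, "rb") as f:
        while True:
            data = f.read(CHUNK_SIZE)
            if not data:
                break
            chunks.put(data)          # blocks if senders fall behind


def send_chunks(host, port):
    """Stage 2: drain the shared queue over one TCP connection."""
    with socket.create_connection((host, port)) as sock:
        while True:
            data = chunks.get()
            if data is SENTINEL:
                break
            sock.sendall(data)


def transfer(paths, host, port):
    """Run readers and senders with independently sized pools."""
    senders = [threading.Thread(target=send_chunks, args=(host, port))
               for _ in range(SEND_WORKERS)]
    for s in senders:
        s.start()

    with ThreadPoolExecutor(max_workers=READ_WORKERS) as readers:
        for job in [readers.submit(read_file, p) for p in paths]:
            job.result()              # propagate read errors, wait for EOF

    for _ in senders:
        chunks.put(SENTINEL)          # one stop marker per sender thread
    for s in senders:
        s.join()
```

The bounded queue is what makes the decoupling work in this sketch: readers block when senders fall behind and vice versa, so each stage's worker count can be tuned to its own bottleneck (disk versus network) rather than sharing one global parallelism setting.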
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for programme control, e.g. control unit
- G06F9/06—Arrangements for programme control, e.g. control unit using stored programme, i.e. using internal store of processing equipment to receive and retain programme
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/16—Combinations of two or more digital computers each having at least an arithmetic unit, a programme unit and a register, e.g. for a simultaneous processing of several programmes
- G06F15/163—Interprocessor communication
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3409—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/30—Information retrieval; Database structures therefor; File system structures therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/885—Monitoring specific for caches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from or digital output to record carriers, e.g. RAID, emulated record carriers, networked record carriers
- G06F3/0601—Dedicated interfaces to storage systems
- G06F3/0628—Dedicated interfaces to storage systems making use of a particular technique
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network-specific arrangements or communication protocols supporting networked applications
- H04L67/10—Network-specific arrangements or communication protocols supporting networked applications in which an application is distributed across nodes in the network
- H04L67/1097—Network-specific arrangements or communication protocols supporting networked applications in which an application is distributed across nodes in the network for distributed storage of data in a network, e.g. network file system [NFS], transport mechanisms for storage area networks [SAN] or network attached storage [NAS]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic regulation in packet switching networks
- H04L47/10—Flow control or congestion control
Similar Documents
| Publication | Title |
|---|---|
| Min et al. | Gimbal: enabling multi-tenant storage disaggregation on smartnic jbofs |
| Berger et al. | AdaptSize: Orchestrating the hot object memory cache in a content delivery network |
| Ananthanarayanan et al. | Scarlett: coping with skewed content popularity in mapreduce clusters |
| US8997109B2 (en) | Apparatus and method for managing data stream distributed parallel processing service |
| Islam et al. | Triple-h: A hybrid approach to accelerate hdfs on hpc clusters with heterogeneous storage architecture |
| Yildirim et al. | End-to-end data-flow parallelism for throughput optimization in high-speed networks |
| Arifuzzaman et al. | Online optimization of file transfers in high-speed networks |
| Li et al. | OFScheduler: a dynamic network optimizer for MapReduce in heterogeneous cluster |
| Wasi-ur-Rahman et al. | A comprehensive study of mapreduce over lustre for intermediate data placement and shuffle strategies on hpc clusters |
| Li et al. | ASCAR: Automating contention management for high-performance storage systems |
| Arifuzzaman et al. | Use only what you need: Judicious parallelism for file transfers in high performance networks |
| Chu et al. | A distributed paging RAM grid system for wide-area memory sharing |
| Kim et al. | Optimizing end-to-end big data transfers over terabits network infrastructure |
| Chen et al. | PIMCloud: QoS-aware resource management of latency-critical applications in clouds with processing-in-memory |
| Vuppalapati et al. | Understanding the host network |
| Chen et al. | Resource abstraction and data placement for distributed hybrid memory pool |
| Lee et al. | APS: adaptable prefetching scheme to different running environments for concurrent read streams in distributed file systems |
| Ren et al. | Design, implementation, and evaluation of a NUMA-aware cache for iSCSI storage servers |
| Huang et al. | Can i/o variability be reduced on qos-less hpc storage systems? |
| US11599441B2 (en) | Throttling processing threads |
| Huo et al. | Research on performance optimization of virtual data space across WAN |
| Craik et al. | Investigating the viability of bufferless NoCs in modern chip multi-processor systems |
| Fang et al. | Hybrid Network-on-Chip: An Application-Aware Framework for Big Data |
| Ray et al. | Adaptive data center network traffic management for distributed high speed storage |
| Tan et al. | EML: An I/O scheduling algorithm in large-scale-application environments |