Centralized adaptive network memory engine

Electrical computers and digital processing systems: multicomputer data transferring – Computer-to-computer direct memory accessing

Reexamination Certificate

Details

709/216; 711/100

Reexamination Certificate

active

07925711

ABSTRACT:
There is a constant battle to break even between continuing improvements in DRAM capacities and the growing memory demands of large-memory high-performance applications. Performance of such applications degrades quickly once the system hits the physical memory limit and starts swapping to the local disk. We present the design, implementation, and evaluation of Anemone—an Adaptive Network Memory Engine—that virtualizes the collective unused memory of multiple machines across a gigabit Ethernet LAN, without requiring any modifications to either the large-memory applications or the Linux kernel. We have implemented a working prototype of Anemone and evaluated it using real-world unmodified applications such as ray-tracing and large in-memory sorting. Our results with the Anemone prototype show that unmodified single-process applications execute 2 to 3 times faster, and multiple concurrent processes execute 6 to 7.7 times faster, than with disk-based paging. The Anemone prototype reduces page-fault latencies by a factor of 19.6—from an average of 9.8 ms with disk-based paging to 500 μs with Anemone. Most importantly, Anemone provides virtualized, low-latency access to potentially “unlimited” network memory resources.
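
The abstract describes paging to the unused RAM of other machines on the LAN instead of to local disk. The following is a minimal sketch of that general idea only, not the patented Anemone design: a memory server keeps 4 KB pages in its own RAM on behalf of a client, and a client-side pager writes and reads those pages over TCP. The port number, wire format, and all names here are assumptions made for illustration.

import socket
import struct
import threading

PAGE_SIZE = 4096
PORT = 5055            # arbitrary port chosen for this sketch

def recv_exact(conn, n):
    """Read exactly n bytes from a socket, or b"" if the peer disconnects."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            return b""
        buf += chunk
    return buf

def run_memory_server(host="0.0.0.0", port=PORT):
    """Contribute this machine's unused RAM as a page store for remote clients."""
    pages = {}                                  # page number -> 4 KB of data
    srv = socket.create_server((host, port))
    def handle(conn):
        with conn:
            while True:
                header = recv_exact(conn, 9)    # 1-byte op + 8-byte page number
                if not header:
                    return
                op, page_no = struct.unpack("!cQ", header)
                if op == b"P":                  # PUT: client pages out
                    pages[page_no] = recv_exact(conn, PAGE_SIZE)
                elif op == b"G":                # GET: client pages in
                    conn.sendall(pages.get(page_no, bytes(PAGE_SIZE)))
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

class RemotePager:
    """Client side: use a remote server's RAM as the backing store for pages."""
    def __init__(self, server_host, port=PORT):
        self.sock = socket.create_connection((server_host, port))

    def page_out(self, page_no, data):
        assert len(data) == PAGE_SIZE
        self.sock.sendall(struct.pack("!cQ", b"P", page_no) + data)

    def page_in(self, page_no):
        self.sock.sendall(struct.pack("!cQ", b"G", page_no))
        return recv_exact(self.sock, PAGE_SIZE)

In the system described above this page exchange happens transparently beneath the kernel's paging path and over gigabit Ethernet; the sketch shows only the basic client/server page traffic that replaces a write or read to the local swap disk.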

REFERENCES:
patent: 6167490 (2000-12-01), Levy et al.
patent: 6298419 (2001-10-01), Kontothanassis et al.
patent: 6766313 (2004-07-01), Kromann
patent: 6886080 (2005-04-01), Carey
patent: 7188145 (2007-03-01), Lowery et al.
patent: 7320035 (2008-01-01), Kumar et al.
patent: 7386673 (2008-06-01), van Riel
patent: 7536462 (2009-05-01), Pandya
patent: 2002/0016840 (2002-02-01), Herzog et al.
patent: 2005/0039180 (2005-02-01), Fultheim et al.
patent: 2006/0184673 (2006-08-01), Liebman
patent: 2006/0190243 (2006-08-01), Barkai et al.
patent: 2007/0288530 (2007-12-01), Romem et al.
patent: 2009/0012932 (2009-01-01), Romem et al.
patent: 2009/0070337 (2009-03-01), Romem et al.
patent: 2009/0144388 (2009-06-01), Gross et al.
patent: 2009/0150511 (2009-06-01), Gross et al.
Lewandowski, Mark, “Latency Reduction Techniques for Remote Memory Access in ANEMONE”, Master's Thesis, Florida State University, Spring, 2006.
Hines, Michael R., “Anemone: An Adaptive Network Memory Engine”, Master's Thesis, Florida State University, Spring, 2005.
Hines, et al., “Anemone: Adaptive Network Memory Engine”, SOSP 2005 and NSDI 2005 (poster).
Hines, et al., “Fast Transparent Cluster-Wide Paging”, Spring 2006.
Hines, et al., “Anemone: Transparently Harnessing Cluster-Wide Memory”, In Proc. of International Symposium on Performance Evaluation of Computer and Telecommunication Systems (SPECTS'06), Aug. 2006, Calgary, Alberta, Canada.
Acharya, et al., Availability and utility of idle memory in workstation clusters. In Measurement and Modeling of Computer Systems, pp. 35-46, 1999.
Anderson, et al., A case for NOW (Networks of Workstations). IEEE Micro, 15(1):54-64, 1995.
Anderson, et al., Serverless network file systems. In Proc. of the 15th Symp. on Operating System Principles, pp. 109-126, Copper Mountain, Colorado, Dec. 1995.
Bohannon, et al., The architecture of the Dali main-memory storage manager. Bell Labs Technical Journal, 2(1):36-47, 1997.
Brasileiro, et al., Using remote memory to stabilise data efficiently on an EXT2 linux file system. In Proc. of the 20th Brazilian Symposium on Computer Networks, May 2002.
Comer, et al., A new design for distributed systems: the remote memory model. Proceedings of the USENIX 1991 Summer Technical Conference, pp. 127-135, 1991.
Cuenca-Acuna, et al., Cooperative caching middleware for cluster-based servers. In Proc. of 10th IEEE Intl. Symp. on High Performance Distributed Computing (HPDC-10), Aug. 2001.
Dahlin, et al., Cooperative caching: Using remote client memory to improve file system performance. In Operating Systems Design and Implementation, pp. 267-280, 1994.
Dwarkadas, et al., Cashmere-VLM: Remote memory paging for software distributed shared memory. In Proc. of Intl. Parallel Processing Symposium, San Juan, Puerto Rico, pp. 153-159, Apr. 1999.
Feeley, et al., Implementing global memory management in a workstation cluster. Operating Systems Review, Fifteenth ACM Symposium on Operating Systems Principles, 29(5):201-212, 1995.
Flouris, et al., The network RamDisk: Using remote memory on heterogeneous NOWs. Cluster Computing, 2 (4):281-293, 1999.
Garcia-Molina, et al., Data management with massive memory: a summary. Parallel Database Systems. PRISMA Workshop, pp. 63-70, 1991.
Garcia-Molina, et al. A massive memory machine. IEEE Transactions on Computers, C-33 (5):391-399, 1984.
Ioannidis, et al., On using network memory to improve the performance of transaction-based systems. In International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA '98), 1998.
Koussih, et al., Dodo: A user-level system for exploiting idle memory in workstation clusters. In Proc. of the Eighth IEEE Intl. Symp. on High Performance Distributed Computing (HPDC-8), 1999.
Leslie, et al., The design and implementation of an operating system to support distributed multimedia applications. IEEE Journal of Selected Areas in Communications, 14(7):1280-1297, 1996.
Markatos, et al., Implementation of a reliable remote memory pager. In USENIX Annual Technical Conference, pp. 177-190, 1996.
McDonald. Remote paging in a single address space operating system supporting quality of service. Tech. Report, Dept. of Computing Science, University of Glasgow, Scotland, UK, 1999.
POV-RAY. The persistence of vision raytracer, 2005.
Silicon Graphics, Inc. STL Quicksort. www.sgi.com/tech/stl/sort.html.
E. Stark. Samson: A scalable active memory server on a network, Aug. 2003.
Wong, et al., My cache or yours? Making storage more exclusive. In Proc. of the USENIX Annual Technical Conference, pp. 161-175, 2002.
Xiao, et al., Incorporating job migration and network RAM to share cluster memory resources. In Proc. of the 9th IEEE Intl. Symposium on High Performance Distributed Computing (HPDC-9), pp. 71-78, Aug. 2000.
Amza, et al., TreadMarks: Shared memory computing on networks of workstations. IEEE Computer, 29(2):18-28, Feb. 1996.
NS2: Network Simulator. http://www.isi.edu/nsnam/ns/.
