Sunday, January 18, 2009

World's fastest computers

A supercomputer is a computer that is at the frontline of current processing capacity, particularly speed of calculation. Supercomputers introduced in the 1960s were designed primarily by Seymour Cray at Control Data Corporation (CDC), and led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). In the 1980s a large number of smaller competitors entered the market, in parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash".

Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as Cray, IBM and Hewlett-Packard, who had purchased many of the 1980s companies to gain their experience. The IBM Roadrunner, located at Los Alamos National Laboratory, is currently the fastest supercomputer in the world.




Modern computers are based on tiny integrated circuits and are millions to billions of times more capable while occupying a fraction of the space.[2] Today, simple computers may be made small enough to fit into a wristwatch and be powered from a watch battery. Personal computers, in various forms, are icons of the Information Age and are what most people think of as "a computer"; however, the most common form of computer in use today is the embedded computer. Embedded computers are small, simple devices that are used to control other devices — for example, they may be found in machines ranging from fighter aircraft to industrial robots, digital cameras, and children's toys.

The ability to store and execute lists of instructions called programs makes computers extremely versatile and distinguishes them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, computers with capability and complexity ranging from that of a personal digital assistant to a supercomputer are all able to perform the same computational tasks, given enough time and storage capacity.
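To make the stored-program idea concrete, here is a minimal sketch in C of a machine whose behaviour is determined entirely by the list of instructions it is handed. The instruction set, names, and example program are invented purely for illustration and are not taken from any real computer.

```c
/* Minimal sketch of the stored-program idea: the machine (run) is fixed,
 * and changing the program data changes its behaviour. All names here
 * (Op, Instr, run) are illustrative only. */
#include <stdio.h>

typedef enum { PUSH, ADD, MUL, PRINT, HALT } Op;

typedef struct {
    Op  op;
    int arg;   /* only used by PUSH */
} Instr;

static void run(const Instr *prog) {
    int stack[64];
    int sp = 0;
    for (int pc = 0; prog[pc].op != HALT; ++pc) {
        switch (prog[pc].op) {
        case PUSH:  stack[sp++] = prog[pc].arg;          break;
        case ADD:   sp--; stack[sp - 1] += stack[sp];    break;
        case MUL:   sp--; stack[sp - 1] *= stack[sp];    break;
        case PRINT: printf("%d\n", stack[sp - 1]);       break;
        default:    break;
        }
    }
}

int main(void) {
    /* The program is just data: compute (2 + 3) * 4 and print it. */
    Instr prog[] = {
        { PUSH, 2 }, { PUSH, 3 }, { ADD, 0 },
        { PUSH, 4 }, { MUL, 0 },  { PRINT, 0 }, { HALT, 0 }
    };
    run(prog);
    return 0;
}
```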


Cloud computing
Cloud computing holds plenty of promise—one key to success is planning ahead.
Cloud computing is an umbrella term used loosely to describe the ability to connect to software and data via the Internet (the cloud) instead of your hard drive or local network. The following story is the second in a three-part series—aimed at helping IT decision-makers break through the hype to better understand cloud computing and its potential business benefits.
As more and more businesses consider the benefits of cloud computing, IT leaders suggest that the best approach to this emerging technology model is a thoughtful plan that weighs its impact in all corners of the business.
Because cloud computing is a still-fledgling model, vendors and standards bodies are busy sorting through definitions and interoperability issues. That aside, businesses can be certain that cloud computing, at its core, is an outsourced service, and outsourcing by its very nature implies potential risks.
Cloud computing can offer real benefits, including lower data center and overall IT costs, streamlined operational efficiency, and a pathway to the latest technology. But that doesn't mean it's the best path for every company or every application.
Below, Lori MacVittie, technical marketing manager at Seattle-based application delivery provider F5 Networks, and William Penn, chief architect at Detroit-based on-demand platform provider Covisint, weigh in on key risk issues companies should consider before making the move to cloud computing.
1. Lack of planning. The biggest risk is not having a roadmap, says Penn. Companies need to understand how external services fit into their enterprise as a whole.
"It can be a struggle for some people to have the vision to incorporate outside services into their business plan," says Penn. "But the outside network, the cloud, needs to be part of that roadmap."
2. Integration challenges. Most businesses aren't moving all of their applications to the cloud, and probably never will; this causes data integration challenges. Penn says it's important to remember that it isn't just hardware or software that needs integration, but also processes, problem resolution, and employee interaction with data and systems.
3. Security concerns. Security is top of mind for IT executives: the physical security of the data center, the intervening network, and the security of the data itself. Data in the cloud is housed and accessed via an offsite server owned by a third party. Companies need to carefully consider the security and liability implications for proprietary data and overall business models.
4. Compliance guidelines needed. Cloud providers haven't yet addressed various industry standards such as HIPAA or Sarbanes-Oxley, so companies with strict compliance or audit constraints are less likely to be able to use external applications.
5. Lack of technology standards. For now, there are no technology industry standards for coordination within and among data centers or vendors. Technology industry leaders are still debating the definition of cloud computing itself, so it will take some time before any standards are set.
"We'll need them, though," says MacVittie. She cautions that in the meantime, it's possible for early adopters to select a vendor now that may not be compliant with future standards. And, she says, vendor lock-in is common right now.

NEC SX-9 to be World's Fastest Vector Computer
An anonymous reader writes "NEC has announced the NEC SX-9 claiming it to be the fastest vector computer, with single core speeds of up to 102.4 GFLOPS and up to 1.6TFLOPS on a single node incorporating multiple CPUs. The machines can be used in complex large-scale computation, such as climates, aeronautics and space, environmental simulations, fluid dynamics, through the processing of array-handling with a single vector instruction. Yes, it runs a UNIX System V-compatible OS."
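To illustrate what "array-handling with a single vector instruction" means in practice, here is a small, hypothetical C kernel of the kind a vector machine accelerates. The function name and arguments are illustrative only; the point is that the loop has no dependence between iterations, so on a vector processor the compiler can issue instructions that each operate on a whole block of elements rather than one at a time.

```c
/* Sketch of a classic vectorizable kernel: y[i] = a * x[i] + y[i].
 * On a scalar CPU this runs one element per iteration; on a vector
 * machine the same loop is executed as vector instructions, each
 * covering a whole stripe of the arrays. Names are illustrative. */
#include <stddef.h>

void saxpy(size_t n, float a, const float *x, float *y) {
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];   /* unit stride, no loop-carried dependence */
}
```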

The architecture (a vector processor) is not in the vanilla kernel, but the kernel is fairly parallel, thread-safe and SMP-safe, so I really can't see any reason why you couldn't put Linux on such a platform. Because a lot of standard parallel software these days assumes a cluster of discrete nodes with shared resources, it would be best to borrow code from Xen and possibly MOSIX to simulate a common structure.


(This would waste some of the compute power, but if the total time saved from not changing the application exceeds the time that could be saved using more of the cycles available, you win. It is this problem of creating illusions of whatever architecture happens to be application-friendly at a given time that has made much of my work in parallel architectures - such as the one produced by Lightfleet - so interesting... and so subject to office politics.)

Roadrunner is a supercomputer built by IBM at the Los Alamos National Laboratory in New Mexico, USA. Currently the world's fastest computer, the US$133-million Roadrunner is designed for a peak performance of 1.7 petaflops; it achieved 1.026 petaflops on May 25, 2008,[1][2][3] becoming the world's first system to sustain 1.0 petaflops on the TOP500 Linpack benchmark. It is a one-of-a-kind supercomputer, built from commodity parts, with many novel design features.
IBM built the computer for the U.S. Department of Energy's (DOE) National Nuclear Security Administration.[4][5] It is a hybrid design with 12,960 IBM PowerXCell[6] 8i CPUs and 6,480 AMD Opteron dual-core processors[7] in specially designed server blades connected by Infiniband. The Roadrunner uses Red Hat Enterprise Linux along with Fedora as its operating systems and is managed with xCAT distributed computing software. It also uses the Open MPI Message Passing Interface implementation.[8]
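Since the article mentions Open MPI, a minimal MPI program in C may help show the programming model such machines expose: many cooperating processes, each identified by a rank, launched across the nodes by the runtime. The file name and process count below are examples only, not details of Roadrunner itself.

```c
/* hello_mpi.c -- minimal MPI sketch; compile with `mpicc hello_mpi.c -o hello_mpi`
 * and launch with e.g. `mpirun -np 4 ./hello_mpi` (name and count are examples). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;

    MPI_Init(&argc, &argv);                  /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* which process am I? */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* how many processes in total? */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                          /* shut the runtime down cleanly */
    return 0;
}
```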


The world's fastest computers are Linux computers

There are fast computers, and then there are Linux fast computers. Every six months, the Top 500 organization announces "its ranked list of general purpose systems that are in common use for high end applications." In other words, supercomputers. And, as has been the case for years now, the fastest of the fast are Linux computers.


As Jay Lyman, an analyst at The 451 Group points out, Linux is only growing stronger in supercomputing. "When considered as the primary OS or part of a mixed-OS supersystem, Linux is now present in 469 of the supercomputer sites, 93.8% of the Top500 list. This represents about 10 more sites than in November 2007, when Linux had presence in 91.8% of the systems. In fact, Linux is the only operating system that managed gains in the November 2008 list. A year ago, Linux was the OS for 84.6% of the top supercomputers. In November 2008, the open source OS was used in 87.8% of the systems. Compare this to Unix, which dropped from 6% to 4.6%, mixed-OS use which dropped from 7.2% to 6.2% and other operating systems, including BSD, Mac OS X and Windows, which were all down this year from the November 2007 list."

Microsoft is proud that a system running Windows HPC Server 2008 took 10th place... behind nine supercomputers running Linux. Even then, this was really more of a stunt than a demonstration that the HPC Server system is ready to compete with the big boys.

You see, there are no Microsoft programming tools for writing supercomputer-compatible applications. Those will come years from now, with Visual Studio 2010 and when Microsoft's F# is more than a research project language. In short, Windows HPC isn't ready for prime time.

In the meantime, the real work is being done on the Linux computers. The number one supercomputer? Once more, it's IBM's Linux-powered Roadrunner. That's the same supercomputer that this summer broke supercomputing's sound barrier: a sustained run of more than one petaflop, or 1.026 quadrillion calculations per second. Beat that, Microsoft!