Business Computers

Objective   6/16/2016

To provide resource information on the most popular Business Computers.

Mainframe computers


Mainframe computers (colloquially referred to as "big iron") are computers used primarily by large organizations for critical applications, bulk data processing such as census, industry and consumer statistics, enterprise resource planning, and transaction processing.

The term originally referred to the large cabinets called "main frames" that housed the central processing unit and main memory of early computers.

Later, the term was used to distinguish high-end commercial machines from less powerful units.

Most large-scale computer system architectures were established in the 1960s, but continue to evolve.


IBM mainframes dominate the mainframe market, with well over a 90% share.

Unisys manufactures ClearPath Libra mainframes, based on earlier Burroughs products, and ClearPath Dorado mainframes, based on the Sperry Univac OS 1100 product line.

In 2002, Hitachi co-developed the zSeries z800 with IBM to share expenses, but subsequently the two companies have not collaborated on new Hitachi models.

Hewlett-Packard sells its unique NonStop systems, which it acquired with Tandem Computers and which some analysts classify as mainframes.

Groupe Bull's DPS, Fujitsu (formerly Siemens) BS2000, and Fujitsu-ICL VME mainframes are still available in Europe.

Fujitsu, Hitachi, and NEC (the "JCMs") still maintain mainframe hardware businesses in the Japanese market.

The amount of vendor investment in mainframe development varies with market share.

Fujitsu and Hitachi both continue to use custom S/390-compatible processors, as well as other CPUs (including POWER and Xeon) for lower-end systems.

Bull uses a mixture of Itanium and Xeon processors.

NEC uses Xeon processors for its low-end ACOS-2 line, but develops the custom NOAH-6 processor for its high-end ACOS-4 series.

IBM continues to pursue a different business strategy of mainframe investment and growth.

IBM has its own large research and development organization designing new, homegrown CPUs, including mainframe processors such as 2012's 5.5 GHz six-core zEC12 mainframe microprocessor.

Unisys produces code-compatible mainframe systems that range from laptops to cabinet-sized mainframes and that use homegrown CPUs as well as Xeon processors.

IBM is rapidly expanding its software business, including its mainframe software portfolio, to seek additional revenue and profits.

Furthermore, there exists a market for software applications to manage the performance of mainframe implementations.

In addition to IBM, significant players in this market include BMC, Compuware, and CA Technologies.

Supercomputer


A supercomputer is a computer with a high level of computational capacity compared to a general-purpose computer.

Performance of a supercomputer is measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS).
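As a rough illustration of what "floating-point operations per second" means (a minimal sketch, not a benchmark of any machine mentioned here), the following Python snippet times a known number of multiply-add operations and divides to get a FLOPS estimate:

```python
import time

def estimate_flops(n=1_000_000):
    """Estimate floating-point throughput by timing n multiply-adds
    (each iteration performs one multiply and one add, i.e. 2 FLOPs)."""
    a, b, acc = 1.0000001, 0.9999999, 0.0
    start = time.perf_counter()
    for _ in range(n):
        acc += a * b  # 2 floating-point operations per iteration
    elapsed = time.perf_counter() - start
    return (2 * n) / elapsed  # operations per second

print(f"~{estimate_flops():,.0f} FLOPS")
```

Interpreted Python overhead makes this a severe underestimate of the hardware's true capability, but the unit arithmetic carries over directly: a petaFLOPS machine sustains 10^15 such operations per second.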

As of 2015, there are supercomputers which can perform up to quadrillions of FLOPS.

Supercomputers were introduced in the 1960s, made initially, and for decades primarily, by Seymour Cray at Control Data Corporation (CDC), Cray Research and subsequent companies bearing his name or monogram.

While the supercomputers of the 1970s used only a few processors, in the 1990s machines with thousands of processors began to appear and, by the end of the 20th century, massively parallel supercomputers with tens of thousands of "off-the-shelf" processors were the norm.

Since its introduction in June 2013, China's Tianhe-2 supercomputer has been, as of 2015, the fastest in the world at 33.86 petaFLOPS (PFLOPS), or 33.86 quadrillion FLOPS.

Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion).

Throughout their history, they have been essential in the field of cryptanalysis.

Systems with massive numbers of processors generally take one of two paths. In one approach (e.g., in distributed computing), hundreds or thousands of discrete computers (e.g., laptops) distributed across a network (e.g., the Internet) devote some or all of their time to solving a common problem; each individual computer (client) receives and completes many small tasks, reporting the results to a central server, which integrates the task results from all the clients into the overall solution.
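The client/server pattern described above can be sketched in a few lines of Python. This is a toy within a single process (threads stand in for the hypothetical networked clients, and `client_task` is an invented work unit), but the structure is the same: the server splits one large job into small tasks, hands them out, and integrates the partial results:

```python
from concurrent.futures import ThreadPoolExecutor

def client_task(bounds):
    """Hypothetical work unit: each 'client' sums squares over its slice."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def distribute(n_total, n_clients):
    """The 'server' splits one big job into per-client tasks, dispatches
    them, and combines the partial results into the overall solution."""
    step = n_total // n_clients
    tasks = [(i * step, (i + 1) * step) for i in range(n_clients)]
    tasks[-1] = (tasks[-1][0], n_total)  # last client takes the remainder
    with ThreadPoolExecutor(max_workers=n_clients) as pool:
        partials = pool.map(client_task, tasks)
    return sum(partials)

print(distribute(1_000_000, 4))
```

In a real distributed-computing project the clients would be separate machines communicating over the network, but the decompose/dispatch/integrate shape is identical.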

In another approach, thousands of dedicated processors are placed in proximity to each other (e.g., in a computer cluster); this saves considerable time moving data around and makes it possible for the processors to work together (rather than on separate tasks), for example in mesh and hypercube architectures.
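To make the hypercube architecture mentioned above concrete: in a d-dimensional hypercube network, the 2^d processors are numbered so that two nodes are directly linked exactly when their binary IDs differ in one bit. A short sketch (illustrative, not tied to any specific machine) computes a node's neighbors:

```python
def hypercube_neighbors(node, dim):
    """Neighbors of a node in a dim-dimensional hypercube network:
    flip each of the dim bits of the node's ID in turn."""
    return [node ^ (1 << k) for k in range(dim)]

print(hypercube_neighbors(0b101, 3))  # node 5 in an 8-node cube → [4, 7, 1]
```

Each node thus has only d = log2(N) links, yet any message can reach any other node in at most d hops, which is why the topology suited early massively parallel designs.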

The use of multi-core processors combined with centralization is an emerging trend; one can think of this as a small cluster (the multi-core processor in a smartphone, tablet, laptop, etc.) that both depends upon and contributes to the cloud.

This page was last updated April 30th, 2017 by kim
