History of Computer

1) Charles Babbage conceptualized the first mechanical computer, the Analytical Engine, in the early 19th century; it was programmable and used punched cards, though it was never fully completed. 2) Computers evolved through five generations: from vacuum tubes in the 1940s, to transistors in the 1950s, integrated circuits in the 1960s, microprocessors in the 1970s, and today's systems with artificial intelligence. 3) Each generation saw computers become smaller, faster, cheaper to produce, more energy-efficient and more reliable, through technological advances such as the transistor, the integrated circuit and the microprocessor.


History of Computer: From the First Generation of Computers to the Third Generation

Short Bytes: The story of the computer, from a mechanical device to the smartphones of modern computing: how different mechanical parts were replaced with electrical ones, and then eventually with electronic ICs and microprocessors, covered in detail.

If you ask a computer engineer "what is a computer, in layman's terms?", the answer might be: a computer is an enhanced version of a calculator with a few extra capabilities, such as performing analytical calculations and producing results based on analytical inputs. Over time, the computer became a programmable computer, on which a user, or rather a developer, can write programs and ask the machine to perform certain operations. The computer performs these operations by taking inputs and cues from the programs stored in its memory.

Charles Babbage and the Birth of the World's First Computer


Charles Babbage was an English mechanical engineer and polymath. Widely regarded as the father of the computer, he was the first person to come up with the concept of a programmable computer, conceptualizing the first mechanical computer in the early 19th century. Note that in those days ICs were not yet fabricated, and the concept of storing memory was not well developed in science and technology. Even so, this revolutionary design was to become one of the greatest inventions in the world of technology. Babbage further improved his first mechanical computer design with methods for inputting programs and data using punched cards. Much as a modern computer takes analytical input and produces output, Babbage's machine produced output using a printer, a curve plotter, and a bell, and it could also punch numbers onto cards as output. This was made possible by the Analytical Engine's mill, the counterpart of the ALU (Arithmetic Logic Unit) in modern-day computing. The design was further improved by control flow in the form of conditional branching and loops, and by integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.




According to modern-day scientists, the first computer designed by Charles Babbage was almost a century ahead of its time. Since it was a mechanical computer, all the parts for his machine had to be made by hand, unlike modern computers, in which most of the parts are fabricated. In those days, making mechanical parts for the computer was a genuinely difficult problem, and the problem persisted for so long that the British Government eventually scrapped the project and ceased its funding.


However, Charles Babbage's failure to complete the Analytical Engine marked a new beginning in the design of more sophisticated modern computers, although that part of the story came much later. The legacy of the first mechanical computer was continued by his son, Henry Babbage, who completed a simplified version of the Analytical Engine's computing unit (the mill) in 1888 and gave a successful demonstration of its use in computing tables in 1906.



The Five Generations of Computers


Learn about each of the five generations of computers and major technology developments that
have led to the computing devices that we use today.

The history of computer development is a computer science topic that is often used to reference
the different generations of computing devices.

Each one of the five generations of computers is characterized by a major technological development that fundamentally changed the way computers operate. Most major developments from the 1940s to the present day have resulted in increasingly smaller, cheaper, more powerful and more efficient computing devices.

What Are the Five Generations of Computers?


In this Webopedia Study Guide, you'll learn about each of the five generations of computers and
the advances in technology that have led to the development of the many computing devices that
we use today. Our journey of the five generations of computers starts in 1940 with vacuum tube
circuitry and goes to the present day — and beyond — with artificial intelligence (AI) systems
and devices.

First Generation: Vacuum Tubes (1940-1956)


The first computer systems used vacuum tubes for circuitry and magnetic drums for memory,
and were often enormous, taking up entire rooms. These computers were very expensive to
operate and in addition to using a great deal of electricity, the first computers generated a lot of
heat, which was often the cause of malfunctions.

First generation computers relied on machine language, the lowest-level programming language understood by computers, to perform operations, and they could only solve one problem at a time. It would take operators days or even weeks to set up a new problem. Input was based on punched cards and paper tape, and output was displayed on printouts.

The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC was the first commercial computer delivered to a business client, the U.S. Census Bureau, in 1951.


Second Generation: Transistors (1956-1963)


The world would see transistors replace vacuum tubes in the second generation of computers. The
transistor was invented at Bell Labs in 1947 but did not see widespread use in computers until the late
1950s.

The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster,
cheaper, more energy-efficient and more reliable than their first-generation predecessors. Though the
transistor still generated a great deal of heat that subjected the computer to damage, it was a vast
improvement over the vacuum tube. Second-generation computers still relied on punched cards for
input and printouts for output.

From Binary to Assembly

Second-generation computers moved from cryptic binary machine language to symbolic, or assembly,
languages, which allowed programmers to specify instructions in words. High-level programming
languages were also being developed at this time, such as early versions of COBOL and FORTRAN. These
were also the first computers that stored their instructions in their memory, which moved from a
magnetic drum to magnetic core technology.
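As a rough, hypothetical illustration of that shift (the mnemonics, opcode values, and instruction format below are invented for this Python sketch, not taken from any real machine), a symbolic program and the binary machine words it translates into might look like this:

# Toy assembler sketch: symbolic mnemonics translated into binary machine words.
# The opcode table and the 4-bit opcode / 4-bit operand layout are assumptions for illustration.
OPCODES = {"LOAD": 0b0001, "ADD": 0b0010, "STORE": 0b0011}

def assemble(line):
    mnemonic, operand = line.split()
    word = (OPCODES[mnemonic] << 4) | int(operand)   # pack opcode and operand into one byte
    return f"{word:08b}"

for line in ["LOAD 2", "ADD 3", "STORE 7"]:
    print(line, "->", assemble(line))                # e.g. "ADD 3 -> 00100011"

Programmers of this generation could write the symbolic form on the left, whereas first-generation machines had to be fed the raw bit patterns on the right.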

The first computers of this generation were developed for the atomic energy industry.

Third Generation: Integrated Circuits (1964-1971)


The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were miniaturized and placed on silicon chips, made of semiconductor material, which drastically increased the speed and efficiency of computers.

Instead of punched cards and printouts, users interacted with third generation computers through
keyboards and monitors and interfaced with an operating system, which allowed the device to run many
different applications at one time with a central program that monitored the memory. Computers for
the first time became accessible to a mass audience because they were smaller and cheaper than their
predecessors.

Did You Know... ? An integrated circuit (IC) is a small electronic device made out of a semiconductor material. The first integrated circuits were developed independently in the late 1950s by Jack Kilby of Texas Instruments and Robert Noyce of Fairchild Semiconductor.

Fourth Generation: Microprocessors (1971-Present)


The microprocessor brought the fourth generation of computers, as thousands of integrated circuits
were built onto a single silicon chip. What in the first generation filled an entire room could now fit in
the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the
computer—from the central processing unit and memory to input/output controls—on a single chip.

In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the
Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas of
life as more and more everyday products began to use microprocessors.

As these small computers became more powerful, they could be linked together to form networks,
which eventually led to the development of the Internet. Fourth generation computers also saw the
development of GUIs, the mouse and handheld devices.


History of Computers
This chapter is a brief summary of the history of computers. It is supplemented by two PBS documentary videotapes, "Inventing the Future" and "The Paperback Computer". The chapter highlights some of the advances to look for in the documentaries.

In particular, when viewing the movies you should look for two things:

The progression in hardware representation of a bit of data:

Vacuum tubes (1950s) - one bit, the size of a thumb;

Transistors (1950s and 1960s) - one bit, the size of a fingernail;

Integrated circuits (1960s and 70s) - thousands of bits, the size of a hand;

Silicon computer chips (1970s and on) - millions of bits, the size of a fingernail.

The progression of the ease of use of computers:

Almost impossible to use except by very patient geniuses (1950s);

Programmable by highly trained people only (1960s and 1970s);

Usable by just about anyone (1980s and on).

Together, these show how computers got smaller, cheaper, and easier to use.


First Computers
The first substantial computer was the giant ENIAC machine, built by John W. Mauchly and J. Presper Eckert at the University of Pennsylvania. ENIAC (Electronic Numerical Integrator and Computer) used a word of 10 decimal digits instead of binary ones like previous automated calculators/computers. ENIAC was also the first machine to use more than 2,000 vacuum tubes, using nearly 18,000 of them. Housing all those vacuum tubes and the machinery required to keep them cool took up over 167 square meters (1800 square feet) of floor space. Nonetheless, it had punched-card input and output and, arithmetically, had 1 multiplier, 1 divider-square rooter, and 20 adders employing decimal "ring counters," which served as adders and also as quick-access (0.0002 seconds) read-write register storage.


The executable instructions composing a program were embodied in the separate units of
ENIAC, which were plugged together to form a route through the machine for the flow of
computations. These connections had to be redone for each different problem, together with
presetting function tables and switches. This "wire-your-own" instruction technique was
inconvenient, and only with some license could ENIAC be considered programmable; it was,
however, efficient in handling the particular programs for which it had been designed. ENIAC is
generally acknowledged to be the first successful high-speed electronic digital computer (EDC)
and was productively used from 1946 to 1955. A controversy developed in 1971, however, over
the patentability of ENIAC's basic digital concepts, the claim being made that another U.S.
physicist, John V. Atanasoff, had already used the same ideas in a simpler vacuum-tube device
he built in the 1930s while at Iowa State College. In 1973, the court found in favor of the company using Atanasoff's claim, and Atanasoff received the acclaim he rightly deserved.

Progression of Hardware
In the 1950s two devices would be invented that would improve the computer field and set in motion the beginning of the computer revolution. The first of these two devices was the transistor. Invented in 1947 by William Shockley, John Bardeen, and Walter Brattain of Bell Labs, the transistor was destined to drive vacuum tubes out of computers, radios, and other electronics.

The vacuum tube, used up to this time in almost all the computers and calculating machines, had
been invented by American physicist Lee De Forest in 1906. The vacuum tube, which is about
the size of a human thumb, worked by using large amounts of electricity to heat a filament inside
the tube until it was cherry red. One result of heating this filament up was the release of electrons
into the tube, which could be controlled by other elements within the tube. De Forest's original
device was a triode, which could control the flow of electrons to a positively charged plate inside
the tube. A zero could then be represented by the absence of an electron current to the plate; the
presence of a small but detectable current to the plate represented a one.

Vacuum tubes were highly inefficient, required a great deal of space, and needed to be replaced often. Computers of the 1940s and 50s had 18,000 tubes in them, and housing all these tubes and cooling the rooms from the heat they produced was not cheap. The transistor promised to solve all of these problems, and it did so. Transistors, however, had their problems too. The main problem was that transistors, like other electronic components, needed to be soldered together. As a result, the more complex the circuits became, the more complicated and numerous the connections between the individual transistors became, and the greater the likelihood of faulty wiring.


In 1958, this problem too was solved by Jack St. Clair Kilby of Texas Instruments, who manufactured the first integrated circuit, or chip. A chip is really a collection of tiny transistors which are connected together when the chip is manufactured. Thus, the need for soldering together large numbers of transistors was practically nullified; now connections were needed only to other electronic components. In addition to saving space, the speed of the machine was also increased, since the distance the electrons had to travel was diminished.

Mainframes to PCs
The 1960s saw large mainframe computers become much more common in large industries and with the US military and space program. IBM became the unquestioned market leader in selling these large, expensive, error-prone, and very hard to use machines.

A veritable explosion of personal computers occurred in the late 1970s, starting with Steve Jobs and Steve Wozniak exhibiting the first Apple II at the First West Coast Computer Faire in San Francisco in 1977. The Apple II boasted a built-in BASIC programming language, color graphics, and a 4100-character memory for only $1298. Programs and data could be stored on an everyday audio-cassette recorder. Before the end of the fair, Wozniak and Jobs had secured 300 orders for the Apple II, and from there Apple just took off.

Also introduced in 1977 was the TRS-80, a home computer manufactured by Tandy Radio Shack. Its second incarnation, the TRS-80 Model II, came complete with a 64,000-character memory and a disk drive on which to store programs and data. At this time, only Apple and TRS had machines with disk drives. With the introduction of the disk drive, personal computer applications took off, as the floppy disk was a most convenient publishing medium for the distribution of software.

IBM, which up to this time had been producing mainframes and minicomputers for medium to large-sized businesses, decided that it had to get into the act and started working on the Acorn, which would later be called the IBM PC. The PC was the first computer designed for the home market to feature a modular design, so that pieces could easily be added to the architecture. Most of the components, surprisingly, came from outside of IBM, since building it with IBM parts would have cost too much for the home computer market. When it was introduced, the PC came with a 16,000-character memory, a keyboard from an IBM electric typewriter, and a connection for a tape cassette player, for $1265.

By 1984, Apple and IBM had come out with new models. Apple released the first generation
Macintosh, which was the first computer to come with a graphical user interface (GUI) and a
mouse. The GUI made the machine much more attractive to home computer users because it was
easy to use. Sales of the Macintosh soared like nothing ever seen before. IBM was hot on Apple's
tail and released the 286-AT, which with applications like Lotus 1-2-3, a spreadsheet, and
Microsoft Word, quickly became the favourite of business concerns.

That brings us up to about ten years ago. Now people have their own personal graphics
workstations and powerful home computers. The average computer a person might have in their
home is more powerful by several orders of magnitude than a machine like ENIAC. The
computer revolution has been the fastest growing technology in man's history.



Mainframe
Alternatively referred to as a big iron computer, a mainframe is a large central computer with
more memory, storage space, and processing power than a standard computer. A mainframe is
used by governments, schools, and corporations for added security and processing large
amounts of data, such as consumer statistics, census data, or electronic transactions. Their
reliability and high stability allow these machines to run for a very long time, even decades.

Web search engine


"Search engine" redirects here. For other uses, see Search engine (disambiguation).

For a tutorial on using search engines for researching Wikipedia articles, see Wikipedia:Search
engine test.

A web search engine or Internet search engine is a software system that is designed to carry out
web search (Internet search), which means to search the World Wide Web in a systematic way
for particular information specified in a textual web search query. The search results are
generally presented in a line of results, often referred to as search engine results pages (SERPs).
The information may be a mix of links to web pages, images, videos, infographics, articles,
research papers, and other types of files. Some search engines also mine data available in databases or open directories. Unlike web directories, which are maintained only by human
editors, search engines also maintain real-time information by running an algorithm on a web
crawler. Internet content that is not capable of being searched by a web search engine is generally described as the deep web.

History



Internet search engines themselves predate the debut of the Web in December 1990. The Whois user search dates back to 1982[1] and the Knowbot Information Service multi-network user
search was first implemented in 1989.[2] The first well documented search engine that searched
content files, namely FTP files, was Archie, which debuted on 10 September 1990.[3]

Prior to September 1993, the World Wide Web was entirely indexed by hand. There was a list of
webservers edited by Tim Berners-Lee and hosted on the CERN webserver. One snapshot of the
list in 1992 remains,[4] but as more and more web servers went online the central list could no
longer keep up. On the NCSA site, new servers were announced under the title "What's New!"[5]

The first tool used for searching content (as opposed to users) on the Internet was Archie.[6] The
name stands for "archive" without the "v". It was created by Alan Emtage, Bill Heelan and J.
Peter Deutsch, computer science students at McGill University in Montreal, Quebec, Canada.
The program downloaded the directory listings of all the files located on public anonymous FTP
(File Transfer Protocol) sites, creating a searchable database of file names; however, Archie
Search Engine did not index the contents of these sites since the amount of data was so limited it
could be readily searched manually.

The rise of Gopher (created in 1991 by Mark McCahill at the University of Minnesota) led to
two new search programs, Veronica and Jughead. Like Archie, they searched the file names and
titles stored in Gopher index systems. Veronica (Very Easy Rodent-Oriented Net-wide Index to
Computerized Archives) provided a keyword search of most Gopher menu titles in the entire
Gopher listings. Jughead (Jonzy's Universal Gopher Hierarchy Excavation And Display) was a
tool for obtaining menu information from specific Gopher servers. While the name of the search
engine "Archie Search Engine" was not a reference to the Archie comic book series, "Veronica"
and "Jughead" are characters in the series, thus referencing their predecessor.

In the summer of 1993, no search engine existed for the web, though numerous specialized
catalogues were maintained by hand. Oscar Nierstrasz at the University of Geneva wrote a series
of Perl scripts that periodically mirrored these pages and rewrote them into a standard format.
This formed the basis for W3Catalog, the web's first primitive search engine, released on
September 2, 1993.[7]

In June 1993, Matthew Gray, then at MIT, produced what was probably the first web robot, the
Perl-based World Wide Web Wanderer, and used it to generate an index called "Wandex". The
purpose of the Wanderer was to measure the size of the World Wide Web, which it did until late
1995. The web's second search engine Aliweb appeared in November 1993. Aliweb did not use a
web robot, but instead depended on being notified by website administrators of the existence at
each site of an index file in a particular format.

JumpStation (created in December 1993[8] by Jonathon Fletcher) used a web robot to find web
pages and to build its index, and used a web form as the interface to its query program. It was
thus the first WWW resource-discovery tool to combine the three essential features of a web
search engine (crawling, indexing, and searching) as described below. Because of the limited
resources available on the platform it ran on, its indexing and hence searching were limited to the
titles and headings found in the web pages the crawler encountered.

One of the first "all text" crawler-based search engines was WebCrawler, which came out in
1994. Unlike its predecessors, it allowed users to search for any word in any webpage, which has
become the standard for all major search engines since. It was also the first search engine that was widely known by the public. Also in 1994, Lycos (which started at Carnegie Mellon University)
was launched and became a major commercial endeavor.

The first popular search engine on the Web was Yahoo! Search.[9] The first product from
Yahoo!, founded by Jerry Yang and David Filo in January 1994, was a Web directory called
Yahoo! Directory. In 1995, a search function was added, allowing users to search Yahoo!
Directory![10][11] It became one of the most popular ways for people to find web pages of
interest, but its search function operated on its web directory, rather than its full-text copies of
web pages.

Soon after, a number of search engines appeared and vied for popularity. These included
Magellan, Excite, Infoseek, Inktomi, Northern Light, and AltaVista. Information seekers could
also browse the directory instead of doing a keyword-based search.

In 1996, Robin Li developed the RankDex site-scoring algorithm for search engine results page
ranking[12][13][14] and received a US patent for the technology.[15] It was the first search
engine that used hyperlinks to measure the quality of websites it was indexing,[16] predating the
very similar algorithm patent filed by Google two years later in 1998.[17] Larry Page referenced
Li's work in some of his U.S. patents for PageRank.[18] Li later used his RankDex technology for the Baidu search engine, which he founded in China and launched in 2000.

In 1996, Netscape was looking to give a single search engine an exclusive deal as the featured
search engine on Netscape's web browser. There was so much interest that instead Netscape
struck deals with five of the major search engines: for $5 million a year, each search engine
would be in rotation on the Netscape search engine page. The five engines were Yahoo!,
Magellan, Lycos, Infoseek, and Excite.[19][20]

Google adopted the idea of selling search terms in 1998, from a small search engine company
named goto.com. This move had a significant effect on the search engine business, which went from struggling to being one of the most profitable businesses on the Internet.[21]

Search engines were also known as some of the brightest stars in the Internet investing frenzy
that occurred in the late 1990s.[22] Several companies entered the market spectacularly,
receiving record gains during their initial public offerings. Some have taken down their public
search engine, and are marketing enterprise-only editions, such as Northern Light. Many search
engine companies were caught up in the dot-com bubble, a speculation-driven market boom that
peaked in 1999 and ended in 2001.

Around 2000, Google's search engine rose to prominence.[23] The company achieved better
results for many searches with an algorithm called PageRank, as explained in the paper Anatomy of a Search Engine written by Sergey Brin and Larry Page, the eventual founders of Google.[24] This iterative algorithm ranks web pages based on the number and PageRank of
other web sites and pages that link there, on the premise that good or desirable pages are linked
to more than others. Larry Page's patent for PageRank cites Robin Li's earlier RankDex patent as
an influence.[18][14] Google also maintained a minimalist interface to its search engine. In
contrast, many of its competitors embedded a search engine in a web portal. In fact, Google
search engine became so popular that spoof engines emerged such as Mystery Seeker.
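As a minimal sketch of that idea (not Google's actual implementation; the damping factor, iteration count, and the tiny link graph below are illustrative assumptions), the iterative rank computation can be expressed in Python as follows:

# PageRank sketch: repeatedly redistribute each page's rank across its outgoing links.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}                  # start with equal rank everywhere
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                            # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share           # pages linked from high-rank pages gain rank
        rank = new_rank
    return rank

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(pagerank(graph))   # "C" ends up highest: it receives the most (and best-ranked) inbound links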

By 2000, Yahoo! was providing search services based on Inktomi's search engine. Yahoo!
acquired Inktomi in 2002, and Overture (which owned AlltheWeb and AltaVista) in 2003.
Yahoo! switched to Google's search engine until 2004, when it launched its own search engine
based on the combined technologies of its acquisitions.

Microsoft first launched MSN Search in the fall of 1998 using search results from Inktomi. In
early 1999 the site began to display listings from Looksmart, blended with results from Inktomi.
For a short time in 1999, MSN Search used results from AltaVista instead. In 2004, Microsoft
began a transition to its own search technology, powered by its own web crawler (called
msnbot).

Microsoft's rebranded search engine, Bing, was launched on June 1, 2009. On July 29, 2009,
Yahoo! and Microsoft finalized a deal in which Yahoo! Search would be powered by Microsoft
Bing technology.

As of 2019, active search engine crawlers include those of Google, Sogou, Baidu, Bing,
Gigablast, Mojeek, DuckDuckGo and Yandex.

Approach


A search engine maintains the following processes in near real time:

Web crawling

Indexing

Searching[25]

Web search engines get their information by web crawling from site to site. The "spider" checks
for the standard filename robots.txt, addressed to it. The robots.txt file contains directives for search spiders, telling them which pages they may and may not crawl. After checking for robots.txt and either finding it
or not, the spider sends certain information back to be indexed depending on many factors, such
as the titles, page content, JavaScript, Cascading Style Sheets (CSS), headings, or its metadata in
HTML meta tags. After a certain number of pages crawled, amount of data indexed, or time
spent on the website, the spider stops crawling and moves on. "[N]o web crawler may actually
crawl the entire reachable web. Due to infinite websites, spider traps, spam, and other exigencies
of the real web, crawlers instead apply a crawl policy to determine when the crawling of a site
should be deemed sufficient. Some websites are crawled exhaustively, while others are crawled
only partially".[26]
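A minimal sketch of that first step, using Python's standard library (the user-agent name and URL are illustrative assumptions, and a real spider would also apply crawl policies, rate limits, and error handling), might look like this:

# Crawler sketch: consult robots.txt before fetching a page for indexing.
from urllib import request, robotparser
from urllib.parse import urljoin

def fetch_if_allowed(url, user_agent="ExampleSpider"):
    rp = robotparser.RobotFileParser(urljoin(url, "/robots.txt"))
    rp.read()                                 # download and parse the site's robots.txt directives
    if not rp.can_fetch(user_agent, url):
        return None                           # the directives tell this spider to skip the page
    with request.urlopen(url) as resp:        # otherwise fetch the page so it can be indexed
        return resp.read()

html = fetch_if_allowed("https://example.com/")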

Indexing means associating words and other definable tokens found on web pages to their
domain names and HTML-based fields. The associations are made in a public database, made
available for web search queries. A query from a user can be a single word, multiple words or a
sentence. The index helps find information relating to the query as quickly as possible.[25] Some
of the techniques for indexing, and caching are trade secrets, whereas web crawling is a
straightforward process of visiting all sites on a systematic basis.
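As a toy illustration of such an index (the pages and the whitespace tokenization below are made up for the example; real indexes also record positions, fields, and ranking signals), the word-to-page associations can be built in Python like this:

# Inverted index sketch: map each token to the set of pages in which it occurs.
from collections import defaultdict

pages = {
    "example.com/a": "History of the computer",
    "example.com/b": "Computer generations and history",
}

index = defaultdict(set)
for url, text in pages.items():
    for token in text.lower().split():
        index[token].add(url)                  # associate the word with the page it appears on

print(sorted(index["computer"]))               # every indexed page containing the query word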

Between visits by the spider, the cached version of a page (some or all the content needed to
render it) stored in the search engine working memory is quickly sent to an inquirer. If a visit is
overdue, the search engine can just act as a web proxy instead. In this case the page may differ
from the search terms indexed.[25] The cached page holds the appearance of the version whose
words were previously indexed, so a cached version of a page can be useful to the web site when
the actual page has been lost, but this problem is also considered a mild form of linkrot.

Typically when a user enters a query into a search engine it is a few keywords.[27] The index
already has the names of the sites containing the keywords, and these are instantly obtained from
the index. The real processing load is in generating the web pages that are the search results list:
Every page in the entire list must be weighted according to information in the indexes.[25] Then
the top search result item requires the lookup, reconstruction, and markup of the snippets
showing the context of the keywords matched. These are only part of the processing each search
results web page requires, and further pages (next to the top) require more of this post-processing.

Beyond simple keyword lookups, search engines offer their own GUI- or command-driven
operators and search parameters to refine the search results. These provide the necessary controls
for the user engaged in the feedback loop users create by filtering and weighting while refining
the search results, given the initial pages of the first search results. For example, from 2007 the
Google.com search engine has allowed one to filter by date by clicking "Show search tools" in
the leftmost column of the initial search results page, and then selecting the desired date range.
[28] It's also possible to weight by date because each page has a modification time. Most search
engines support the use of the boolean operators AND, OR and NOT to help end users refine the
search query. Boolean operators are for literal searches that allow the user to refine and extend
the terms of the search. The engine looks for the words or phrases exactly as entered. Some
search engines provide an advanced feature called proximity search, which allows users to define
the distance between keywords.[25] There is also concept-based searching where the research
involves using statistical analysis on pages containing the words or phrases you search for. As
well, natural language queries allow the user to type a question in the same form one would ask
it to a human.[29] An example of such a site is ask.com.[30]
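A small self-contained Python sketch (the index contents are invented for illustration, and real engines combine this with ranking) shows how AND, OR, and NOT map onto set operations over an inverted index:

# Boolean retrieval sketch: AND = set intersection, OR = set union, NOT = set difference.
index = {
    "history":     {"example.com/a", "example.com/b"},
    "computer":    {"example.com/a", "example.com/b"},
    "generations": {"example.com/b"},
}

def boolean_and(terms):
    return set.intersection(*(index.get(t, set()) for t in terms))

def boolean_or(terms):
    return set.union(*(index.get(t, set()) for t in terms))

def boolean_not(base, excluded):
    return base - index.get(excluded, set())

# pages matching: history AND computer NOT generations
print(boolean_not(boolean_and(["history", "computer"]), "generations"))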

The usefulness of a search engine depends on the relevance of the result set it gives back. While
there may be millions of web pages that include a particular word or phrase, some pages may be
more relevant, popular, or authoritative than others. Most search engines employ methods to rank
the results to provide the "best" results first. How a search engine decides which pages are the
best matches, and what order the results should be shown in, varies widely from one engine to
another.[25] The methods also change over time as Internet usage changes and new techniques
evolve. There are two main types of search engine that have evolved: one is a system of
predefined and hierarchically ordered keywords that humans have programmed extensively. The
other is a system that generates an "inverted index" by analyzing texts it locates. This second form relies much more heavily on the computer itself to do the bulk of the work.

Most Web search engines are commercial ventures supported by advertising revenue and thus
some of them allow advertisers to have their listings ranked higher in search results for a fee.
Search engines that do not accept money for their search results make money by running search
related ads alongside the regular search engine results. The search engines make money every
time someone clicks on one of these ads.[31]



Supercomputer
"High-performance computing" redirects here. For narrower definitions of HPC, see high-throughput computing and  many-task computing. For
other uses, see supercomputer (disambiguation).

A supercomputer is a computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2017, there are supercomputers which can perform over a hundred quadrillion FLOPS (100 petaFLOPS).[3] Since November 2017, all of the world's fastest 500 supercomputers run Linux-based operating systems.[4] Additional research is being conducted in China, the United States, the European Union, Taiwan and Japan to build faster, more powerful and technologically superior exascale supercomputers.[5]
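To give a sense of that scale, a rough back-of-the-envelope comparison (the desktop figure below is an assumed, order-of-magnitude value) runs as follows:

# Scale check: how long would a roughly 1 gigaFLOPS desktop take to match
# one second of work by a 100 petaFLOPS supercomputer?
supercomputer_flops = 100e15          # 100 petaFLOPS = 1e17 floating-point operations per second
desktop_flops = 1e9                   # assumed ~1 gigaFLOPS desktop
seconds = supercomputer_flops / desktop_flops
print(seconds / (3600 * 24 * 365))    # about 3.2 years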

Supercomputers play an important role in the field of computational science, and are used for a wide
range of computationally intensive tasks in various fields, including quantum mechanics, weather
forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and
properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical
simulations (such as simulations of the early moments of the universe, airplane and spacecraft
aerodynamics, the detonation of nuclear weapons, and nuclear fusion). They have been essential in the
field of cryptanalysis.[6]

Supercomputers were introduced in the 1960s, and for several decades the fastest were made by Seymour
Cray at Control Data Corporation (CDC), Cray Research and subsequent companies bearing his name or
monogram. The first such machines were highly tuned conventional designs that ran faster than their
more general-purpose contemporaries. Through the decade, increasing amounts of parallelism were
added, with one to four processors being typical. From the 1970s, vector processors operating on large
arrays of data came to dominate. A notable example is the highly successful Cray-1 of 1976. Vector
computers remained the dominant design into the 1990s. From then until today, massively parallel
supercomputers with tens of thousands of off-the-shelf processors became the norm.[7][8]

The US has long been the leader in the supercomputer field, first through Cray's almost uninterrupted
dominance of the field, and later through a variety of technology companies. Japan made major strides in
the field in the 1980s and 90s, with China becoming increasingly active in the field. As of November 2018, the fastest supercomputer on the TOP500 supercomputer list is Summit, in the United States, with a LINPACK benchmark score of 143.5 PFLOPS, followed by Sierra, which trails it by around 48.86 PFLOPS.[9]
The US has five of the top 10 and China has two.[9] In June 2018, all supercomputers on the list
combined broke the 1 exaFLOPS mark.[10]

History



In 1960 UNIVAC built the Livermore Atomic Research Computer (LARC), today considered
among the first supercomputers, for the US Navy Research and Development Centre. It still used
high-speed drum memory, rather than the newly emerging disk drive technology.[11] Also
among the first supercomputers was the IBM 7030 Stretch. The IBM 7030 was built by IBM for
the Los Alamos National Laboratory, which in 1955 had requested a computer 100 times faster
than any existing computer. The IBM 7030 used transistors, magnetic core memory, pipelined
instructions, prefetched data through a memory controller and included pioneering random
access disk drives. The IBM 7030 was completed in 1961 and despite not meeting the challenge
of a hundredfold increase in performance, it was purchased by the Los Alamos National
Laboratory. Customers in England and France also bought the computer and it became the basis
for the IBM 7950 Harvest, a supercomputer built for cryptanalysis.[12]

Cray left CDC in 1972 to form his own company, Cray Research.[17] Four years after leaving CDC, Cray delivered the 80 MHz Cray-1 in 1976, which became one of the most successful supercomputers in history.[20][21] The Cray-2 was released in 1985. It had eight central processing units (CPUs), liquid cooling, and the electronics coolant liquid Fluorinert was pumped through the supercomputer architecture. It performed at 1.9 gigaFLOPS and was the world's second fastest, after the M-13 supercomputer in Moscow.[22]

The third pioneering supercomputer project in the early 1960s was the Atlas at the University of Manchester, built by a team led by Tom Kilburn. He designed the Atlas to have memory space for up to a million words of 48 bits, but because magnetic storage with such a capacity was unaffordable, the actual core memory of Atlas was only 16,000 words, with a drum providing memory for a further 96,000 words. The Atlas operating system swapped data in the form of pages between the magnetic core and the drum. The Atlas operating system also introduced time-sharing to supercomputing, so that more than one program could be executed on the supercomputer at any one time.[13] Atlas was a joint venture between Ferranti and Manchester University and was designed to operate at processing speeds approaching one microsecond per instruction, about one million instructions per second.[14]

The CDC 6600, designed by Seymour Cray, was finished in 1964 and marked the transition from
germanium to silicon transistors. Silicon transistors could run faster and the overheating problem
was solved by introducing refrigeration into the supercomputer design.[15] Thus the CDC 6600 became the fastest computer in the world. Given that the 6600 outperformed all the other contemporary computers by about 10 times, it was dubbed a supercomputer and defined the supercomputing market, with one hundred computers sold at $8 million each.[16][17][18][19]



END
