History of Computers
This was all made possible by the Analytical Engine, whose arithmetic unit (the "mill") corresponds to the ALU (Arithmetic Logic Unit) of modern computing. The Analytical Engine's design was further improved with control flow in the form of conditional branching and loops, and with integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
Figure 1
According to modern-day scientists, the first computer, made by Charles Babbage, was almost a century ahead of its time. Since it was a mechanical computer, all the parts for his machine had to be made by hand, unlike modern-day computers, where most of the parts are fabricated. In those days, making mechanical parts for a computer was a genuinely difficult problem, and it remained one for a long time in the world of computing. The difficulty eventually resulted in the project being scrapped and the British Government ceasing its funding.
However, Charles Babbage's failure to complete the Analytical Engine marked a new beginning in the design of more sophisticated modern-day computers, although that part of the story came much later. The legacy of Babbage's first mechanical computer was continued by his son, Henry Babbage, who completed a simplified version of the Analytical Engine's computing unit (the mill) in 1888 and gave a successful demonstration of its use in computing tables in 1906.
Figure 2
The history of computer development is a computer science topic often used to reference the different generations of computing devices.
First generation computers relied on machine language, the lowest-level programming language
understood by computers, to perform operations, and they could only solve one problem at a
time. It would take operators days or even weeks to set up a new problem. Input was based on
punched cards and paper tape, and output was displayed on printouts.
The UNIVAC and ENIAC computers are examples of first-generation computing devices. The
UNIVAC was the first commercial computer delivered to a client, the U.S. Census Bureau, in 1951.
Figure 3
The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster,
cheaper, more energy-efficient and more reliable than their first-generation predecessors. Though the
transistor still generated a great deal of heat that subjected the computer to damage, it was a vast
improvement over the vacuum tube. Second-generation computers still relied on punched cards for
input and printouts for output.
The first computers of this generation were developed for the atomic energy industry.
Instead of punched cards and printouts, users interacted with third generation computers through
keyboards and monitors and interfaced with an operating system, which allowed the device to run many
different applications at one time with a central program that monitored the memory. Computers for
the first time became accessible to a mass audience because they were smaller and cheaper than their
predecessors.
Did You Know... ? An integrated circuit (IC) is a small electronic device made out of a semiconductor
material. The first integrated circuit was developed in the 1950s by Jack Kilby of Texas Instruments and
Robert Noyce of Fairchild Semiconductor.
As these small computers became more powerful, they could be linked together to form networks,
which eventually led to the development of the Internet. Fourth generation computers also saw the
development of GUIs, the mouse and handheld devices.
Figure 4
History of Computers
This chapter is a brief summary of the history of computers. It is supplemented by two PBS documentary videotapes, "Inventing the Future" and "The Paperback Computer". The chapter highlights some of the advances to look for in the documentaries.
In particular, when viewing the movies you should look for two things:
Integrated Circuits (1960s and 70s) - thousands of bits on a device the size of a hand
Silicon computer chips (1970s and on) - millions of bits on a chip the size of a fingernail.
Figure 5
First Computers
The first substantial computer was the giant ENIAC machine by John W. Mauchly and J. Presper
Eckert at the University of Pennsylvania. ENIAC (Electronic Numerical Integrator and Computer) used a word of 10 decimal digits instead of binary ones like previous automated calculators/computers. ENIAC was also the first machine to use more than 2,000 vacuum tubes, using nearly 18,000 of them. Housing all those vacuum tubes and the machinery required to keep them cool took up over 167 square meters (1,800 square feet) of floor space.
Nonetheless, it had punched-card input and output and arithmetically had 1 multiplier, 1
divider-square rooter, and 20 adders employing decimal "ring counters," which served as
adders and also as quick-access (0.0002 seconds) read-write register storage.
The executable instructions composing a program were embodied in the separate units of
ENIAC, which were plugged together to form a route through the machine for the flow of
computations. These connections had to be redone for each different problem, together with
presetting function tables and switches. This "wire-your-own" instruction technique was
inconvenient, and only with some license could ENIAC be considered programmable; it was,
however, efficient in handling the particular programs for which it had been designed. ENIAC is
generally acknowledged to be the first successful high-speed electronic digital computer (EDC)
and was productively used from 1946 to 1955. A controversy developed in 1971, however, over
the patentability of ENIAC's basic digital concepts, the claim being made that another U.S.
physicist, John V. Atanasoff, had already used the same ideas in a simpler vacuum-tube device
he built in the 1930s while at Iowa State College. In 1973, the court found in favor of the company relying on the Atanasoff claim, and Atanasoff received the acclaim he rightly deserved.
Progression of Hardware
In the 1950s, two devices were invented that would improve the computer field and set in motion the beginning of the computer revolution. The first of these two devices was the transistor. Invented in 1947 by William Shockley, John Bardeen, and Walter Brattain of Bell Labs, the transistor was destined to replace the vacuum tube in computers, radios, and other electronics.
The vacuum tube, used up to this time in almost all the computers and calculating machines, had
been invented by American physicist Lee De Forest in 1906. The vacuum tube, which is about
the size of a human thumb, worked by using large amounts of electricity to heat a filament inside
the tube until it was cherry red. One result of heating this filament up was the release of electrons
into the tube, which could be controlled by other elements within the tube. De Forest's original
device was a triode, which could control the flow of electrons to a positively charged plate inside
the tube. A zero could then be represented by the absence of an electron current to the plate; the
presence of a small but detectable current to the plate represented a one.
Vacuum tubes were highly inefficient, required a great deal of space, and needed to be
replaced often. A computer of the 1940s and 50s, such as ENIAC, contained some 18,000 tubes, and housing all these tubes and cooling the rooms from the heat they produced was not cheap. The transistor promised to solve all of these problems, and it did so.
Transistors, however, had their problems too. The main problem was that transistors,
like other electronic components, needed to be soldered together. As a result, the more
complex the circuits became, the more complicated and numerous the connections between the individual transistors became, and the greater the likelihood of faulty wiring.
In 1958, this problem too was solved by Jack St. Clair Kilby of Texas Instruments. He
manufactured the first integrated circuit or chip. A chip is really a collection of tiny
transistors which are connected together when the chip is manufactured. Thus,
the need for soldering together large numbers of transistors was practically nullified;
now only connections were needed to other electronic components. In addition to
saving space, the speed of the machine was now increased since there was a
diminished distance that the electrons had to travel.
Mainframes to PCs
The 1960s saw large mainframe computers become much more common in large industries and
with the US military and space program. IBM became the unquestioned market leader in selling
these large, expensive, error-prone, and very hard to use machines.
A veritable explosion of personal computers occurred in the late 1970s, starting with Steve Jobs and Steve Wozniak exhibiting the first Apple II at the first West Coast Computer Faire in San Francisco in 1977. The Apple II boasted a built-in BASIC programming language, color graphics, and a
4100 character memory for only $1298. Programs and data could be stored on an everyday
audio-cassette recorder. Before the end of the fair, Wozniak and Jobs had secured 300 orders for
the Apple II and from there Apple just took off.
Also introduced in 1977 was the TRS-80. This was a home computer manufactured by Tandy
Radio Shack. In its second incarnation, the TRS-80 Model II came complete with a 64,000
character memory and a disk drive to store programs and data on. At this time, only Apple and
TRS had machines with disk drives. With the introduction of the disk drive, personal computer
applications took off as a floppy disk was a most convenient publishing medium for distribution
of software.
IBM, which up to this time had been producing mainframes and minicomputers for medium to
large-sized businesses, decided that it had to get into the act and started working on the Acorn,
which would later be called the IBM PC. The PC was the first computer designed for the home
market which would feature modular design so that pieces could easily be added to the
architecture. Most of the components, surprisingly, came from outside of IBM, since building it
with IBM parts would have cost too much for the home computer market. When it was
introduced, the PC came with a 16,000 character memory, a keyboard from an IBM electric typewriter, and a connection for a tape cassette player, for $1,265.
By 1984, Apple and IBM had come out with new models. Apple released the first generation
Macintosh, which was the first computer to come with a graphical user interface(GUI) and a
mouse. The GUI made the machine much more attractive to home computer users because it was
easy to use. Sales of the Macintosh soared like nothing ever seen before. IBM was hot on Apple's
tail and released the 286-AT, which with applications like Lotus 1-2-3, a spreadsheet, and
Microsoft Word, quickly became the favourite of business concerns.
That brings us up to about ten years ago. Now people have their own personal graphics
workstations and powerful home computers. The average computer a person might have in their
home is more powerful by several orders of magnitude than a machine like ENIAC. The
computer revolution has been the fastest growing technology in man's history.
Figure 6
Mainframe
Alternatively referred to as a big iron computer, a mainframe is a large central computer with
more memory, storage space, and processing power than a standard computer. A mainframe is
used by governments, schools, and corporations for added security and processing large
amounts of data, such as consumer statistics, census data, or electronic transactions. Their
reliability and high stability allow these machines to run for a very long time, even decades.
A web search engine or Internet search engine is a software system that is designed to carry out
web search (Internet search), which means to search the World Wide Web in a systematic way
for particular information specified in a textual web search query. The search results are
generally presented in a line of results, often referred to as search engine results pages (SERPs).
The information may be a mix of links to web pages, images, videos, infographics, articles,
research papers, and other types of files. Some search engines also mine data available in databases or open directories.
History
Prior to September 1993, the World Wide Web was entirely indexed by hand. There was a list of
webservers edited by Tim Berners-Lee and hosted on the CERN webserver. One snapshot of the
list in 1992 remains,[4] but as more and more web servers went online the central list could no
longer keep up. On the NCSA site, new servers were announced under the title "What's New!"[5]
The first tool used for searching content (as opposed to users) on the Internet was Archie.[6] The
name stands for "archive" without the "v". It was created by Alan Emtage, Bill Heelan and J.
Peter Deutsch, computer science students at McGill University in Montreal, Quebec, Canada.
The program downloaded the directory listings of all the files located on public anonymous FTP
(File Transfer Protocol) sites, creating a searchable database of file names; however, Archie
Search Engine did not index the contents of these sites since the amount of data was so limited it
could be readily searched manually.
The rise of Gopher (created in 1991 by Mark McCahill at the University of Minnesota) led to
two new search programs, Veronica and Jughead. Like Archie, they searched the file names and
titles stored in Gopher index systems. Veronica (Very Easy Rodent-Oriented Net-wide Index to
Computerized Archives) provided a keyword search of most Gopher menu titles in the entire
Gopher listings. Jughead (Jonzy's Universal Gopher Hierarchy Excavation And Display) was a
tool for obtaining menu information from specific Gopher servers. While the name of the search
engine "Archie Search Engine" was not a reference to the Archie comic book series, "Veronica"
and "Jughead" are characters in the series, thus referencing their predecessor.
In the summer of 1993, no search engine existed for the web, though numerous specialized
catalogues were maintained by hand. Oscar Nierstrasz at the University of Geneva wrote a series
of Perl scripts that periodically mirrored these pages and rewrote them into a standard format.
This formed the basis for W3Catalog, the web's first primitive search engine, released on
September 2, 1993.[7]
JumpStation (created in December 1993[8] by Jonathon Fletcher) used a web robot to find web
pages and to build its index, and used a web form as the interface to its query program. It was
thus the first WWW resource-discovery tool to combine the three essential features of a web
search engine (crawling, indexing, and searching) as described below. Because of the limited
resources available on the platform it ran on, its indexing and hence searching were limited to the
titles and headings found in the web pages the crawler encountered.
One of the first "all text" crawler-based search engines was WebCrawler, which came out in
1994. Unlike its predecessors, it allowed users to search for any word in any webpage, which has
become the standard for all major search engines since. It was also the first search engine widely known by the public. Also in 1994, Lycos (which started at Carnegie Mellon University)
was launched and became a major commercial endeavor.
The first popular search engine on the Web was Yahoo! Search.[9] The first product from
Yahoo!, founded by Jerry Yang and David Filo in January 1994, was a Web directory called
Yahoo! Directory. In 1995, a search function was added, allowing users to search Yahoo!
Directory![10][11] It became one of the most popular ways for people to find web pages of
interest, but its search function operated on its web directory, rather than its full-text copies of
web pages.
Soon after, a number of search engines appeared and vied for popularity. These included
Magellan, Excite, Infoseek, Inktomi, Northern Light, and AltaVista. Information seekers could
also browse the directory instead of doing a keyword-based search.
In 1996, Robin Li developed the RankDex site-scoring algorithm for search engines results page
ranking[12][13][14] and received a US patent for the technology.[15] It was the first search
engine that used hyperlinks to measure the quality of websites it was indexing,[16] predating the
very similar algorithm patent filed by Google two years later in 1998.[17] Larry Page referenced
Li's work in some of his U.S. patents for PageRank.[18] Li later used his Rankdex technology for
the Baidu search engine, which was founded by Robin Li in China and launched in 2000.
Google adopted the idea of selling search terms in 1998, from a small search engine company
named goto.com. This move had a significant effect on the search engine business, which went from struggling to being one of the most profitable businesses on the Internet.[21]
Search engines were also known as some of the brightest stars in the Internet investing frenzy
that occurred in the late 1990s.[22] Several companies entered the market spectacularly,
receiving record gains during their initial public offerings. Some have taken down their public
search engine, and are marketing enterprise-only editions, such as Northern Light. Many search
engine companies were caught up in the dot-com bubble, a speculation-driven market boom that
peaked in 1999 and ended in 2001.
Around 2000, Google's search engine rose to prominence.[23] The company achieved better
results for many searches with an algorithm called PageRank, as was explained in the paper
Anatomy of a Search Engine written by Sergey Brin and Larry Page, the eventual founders of
Google.[24] This iterative algorithm ranks web pages based on the number and PageRank of
other web sites and pages that link there, on the premise that good or desirable pages are linked
to more than others. Larry Page's patent for PageRank cites Robin Li's earlier RankDex patent as
an influence.[18][14] Google also maintained a minimalist interface to its search engine. In
contrast, many of its competitors embedded a search engine in a web portal. In fact, Google
search engine became so popular that spoof engines emerged such as Mystery Seeker.
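To make the iterative ranking idea concrete, the sketch below implements a PageRank-style computation. It is a minimal illustration, not Google's actual implementation: the toy link graph, the damping factor of 0.85, and the fixed iteration count are all assumptions chosen for the example.

```python
# Minimal PageRank-style sketch: repeatedly distribute each page's score
# across its outgoing links until the scores stabilize.
# The link graph, damping factor, and iteration count are illustrative.

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}               # start with a uniform score
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                          # dangling page: spread score evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:               # each linked page gets a share
                    new_rank[target] += share
        rank = new_rank
    return rank

# Toy web of four pages; "c", which every other page links to, accumulates
# the highest score, matching the premise that good or desirable pages are
# linked to more than others.
toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(pagerank(toy_web))
```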
By 2000, Yahoo! was providing search services based on Inktomi's search engine. Yahoo!
acquired Inktomi in 2002, and Overture (which owned AlltheWeb and AltaVista) in 2003.
Yahoo! used Google's search engine for its results until 2004, when it launched its own search engine based on the combined technologies of its acquisitions.
Microsoft first launched MSN Search in the fall of 1998 using search results from Inktomi. In
early 1999 the site began to display listings from Looksmart, blended with results from Inktomi.
For a short time in 1999, MSN Search used results from AltaVista instead. In 2004, Microsoft
began a transition to its own search technology, powered by its own web crawler (called
msnbot).
Microsoft's rebranded search engine, Bing, was launched on June 1, 2009. On July 29, 2009,
Yahoo! and Microsoft finalized a deal in which Yahoo! Search would be powered by Microsoft
Bing technology.
Approach
A web search engine maintains the following processes in near real time:
Web crawling
Indexing
Searching[25]
Web search engines get their information by web crawling from site to site. The "spider" checks
for the standard filename robots.txt, addressed to it. The robots.txt file contains directives for search spiders, telling it which pages it may crawl and which it may not. After checking for robots.txt and either finding it
or not, the spider sends certain information back to be indexed depending on many factors, such
as the titles, page content, JavaScript, Cascading Style Sheets (CSS), headings, or its metadata in
HTML meta tags. After a certain number of pages crawled, amount of data indexed, or time
spent on the website, the spider stops crawling and moves on. "[N]o web crawler may actually
crawl the entire reachable web. Due to infinite websites, spider traps, spam, and other exigencies
of the real web, crawlers instead apply a crawl policy to determine when the crawling of a site
should be deemed sufficient. Some websites are crawled exhaustively, while others are crawled
only partially".[26]
Indexing means associating words and other definable tokens found on web pages to their
domain names and HTML-based fields. The associations are made in a public database, made
available for web search queries. A query from a user can be a single word, multiple words or a
sentence. The index helps find information relating to the query as quickly as possible.[25] Some
of the techniques for indexing, and caching are trade secrets, whereas web crawling is a
straightforward process of visiting all sites on a systematic basis.
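A minimal sketch of that association is shown below: it builds an in-memory inverted index from a toy set of "crawled" pages. The pages and the whitespace tokenization are simplified assumptions; production indexes also handle punctuation, stemming, term positions, and ranking data.

```python
# Minimal inverted-index sketch: map each token to the set of pages that
# contain it, so a query term is answered with one dictionary lookup
# instead of a scan of every page.
from collections import defaultdict

pages = {
    "example.com/a": "the quick brown fox",
    "example.com/b": "the lazy dog",
    "example.com/c": "quick dog tricks",
}

index = defaultdict(set)
for url, text in pages.items():
    for token in text.lower().split():    # naive tokenization for illustration
        index[token].add(url)

print(sorted(index["quick"]))             # ['example.com/a', 'example.com/c']
print(sorted(index["dog"]))               # ['example.com/b', 'example.com/c']
```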
Between visits by the spider, the cached version of a page (some or all the content needed to
render it) stored in the search engine working memory is quickly sent to an inquirer. If a visit is
overdue, the search engine can just act as a web proxy instead. In this case the page may differ
from the search terms indexed.[25] The cached page holds the appearance of the version whose words were previously indexed, so a cached version of a page can be useful to the searcher when the actual page has been lost, but this problem is also considered a mild form of linkrot.
Typically when a user enters a query into a search engine it is a few keywords.[27] The index
already has the names of the sites containing the keywords, and these are instantly obtained from
the index. The real processing load is in generating the web pages that are the search results list:
Every page in the entire list must be weighted according to information in the indexes.[25] Then
the top search result item requires the lookup, reconstruction, and markup of the snippets
showing the context of the keywords matched. These are only part of the processing each search
results web page requires, and further pages (next to the top) require more of this post
processing.
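The snippet step can be sketched as extracting a small window of text around the matched keyword, roughly as a results page does. The window size and sample text below are arbitrary assumptions for illustration.

```python
# Minimal snippet sketch: show the matched keyword in a window of
# surrounding context, with ellipses marking truncated text.
def snippet(text, keyword, radius=30):
    pos = text.lower().find(keyword.lower())
    if pos == -1:
        return ""                                  # keyword not on this page
    start = max(0, pos - radius)
    end = min(len(text), pos + len(keyword) + radius)
    prefix = "..." if start > 0 else ""
    suffix = "..." if end < len(text) else ""
    return prefix + text[start:end] + suffix

page_text = ("ENIAC is generally acknowledged to be the first successful "
             "high-speed electronic digital computer.")
print(snippet(page_text, "high-speed"))
```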
Beyond simple keyword lookups, search engines offer their own GUI- or command-driven
operators and search parameters to refine the search results. These provide the necessary controls
for the user engaged in the feedback loop users create by filtering and weighting while refining
the search results, given the initial pages of the first search results. For example, from 2007 the
Google.com search engine has allowed one to filter by date by clicking "Show search tools" in
the leftmost column of the initial search results page, and then selecting the desired date range.
[28] It's also possible to weight by date because each page has a modification time. Most search
engines support the use of the boolean operators AND, OR and NOT to help end users refine the
search query. Boolean operators are for literal searches that allow the user to refine and extend
the terms of the search. The engine looks for the words or phrases exactly as entered. Some
search engines provide an advanced feature called proximity search, which allows users to define
the distance between keywords.[25] There is also concept-based searching where the research
involves using statistical analysis on pages containing the words or phrases you search for. As
well, natural language queries allow the user to type a question in the same form one would ask
it to a human.[29] A site like this would be ask.com.[30]
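Against an inverted index like the one sketched earlier, these boolean operators reduce to simple set operations; the toy index below is an assumption for illustration.

```python
# Boolean operators over an inverted index reduce to set operations:
# AND is intersection, OR is union, NOT is difference against all pages.
index = {
    "quick": {"a", "c"},
    "dog":   {"b", "c"},
    "fox":   {"a"},
}
all_pages = {"a", "b", "c"}

print(index["quick"] & index["dog"])   # quick AND dog -> {'c'}
print(index["quick"] | index["dog"])   # quick OR dog  -> {'a', 'b', 'c'}
print(all_pages - index["dog"])        # NOT dog       -> {'a'}
```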
The usefulness of a search engine depends on the relevance of the result set it gives back. While
there may be millions of web pages that include a particular word or phrase, some pages may be
more relevant, popular, or authoritative than others. Most search engines employ methods to rank
the results to provide the "best" results first. How a search engine decides which pages are the
best matches, and what order the results should be shown in, varies widely from one engine to
another.[25] The methods also change over time as Internet usage changes and new techniques
evolve. There are two main types of search engine that have evolved: one is a system of
predefined and hierarchically ordered keywords that humans have programmed extensively. The
other is a system that generates an "inverted index" by analyzing texts it locates. This second form relies much more heavily on the computer itself to do the bulk of the work.
Most Web search engines are commercial ventures supported by advertising revenue and thus
some of them allow advertisers to have their listings ranked higher in search results for a fee.
Search engines that do not accept money for their search results make money by running search
related ads alongside the regular search engine results. The search engines make money every
time someone clicks on one of these ads.[31]
Figure 7
Figure 8
Supercomputer
"High-performance computing" redirects here. For narrower definitions of HPC, see high-throughput computing and many-task computing. For
other uses, see supercomputer (disambiguation).
Supercomputers play an important role in the field of computational science, and are used for a wide
range of computationally intensive tasks in various fields, including quantum mechanics, weather
forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and
properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical
simulations (such as simulations of the early moments of the universe, airplane and spacecraft
aerodynamics, the detonation of nuclear weapons, and nuclear fusion). They have been essential in the
field of cryptanalysis.[6]
Supercomputers were introduced in the 1960s, and for several decades the fastest were made by Seymour
Cray at Control Data Corporation (CDC), Cray Research and subsequent companies bearing his name or
monogram. The first such machines were highly tuned conventional designs that ran faster than their
more general-purpose contemporaries. Through the decade, increasing amounts of parallelism were
added, with one to four processors being typical. From the 1970s, vector processors operating on large
arrays of data came to dominate. A notable example is the highly successful Cray-1 of 1976. Vector
computers remained the dominant design into the 1990s. From then until today, massively parallel
supercomputers with tens of thousands of off-the-shelf processors became the norm.[7][8]
The US has long been the leader in the supercomputer field, first through Cray's almost uninterrupted
dominance of the field, and later through a variety of technology companies. Japan made major strides in
the field in the 1980s and 90s, with China becoming increasingly active in the field.
History
Cray left CDC in 1972 to form his own company, Cray Research.[17] Four years after leaving CDC,
Cray delivered the 80 MHz Cray-1 in 1976, which became one of the most successful
supercomputers in history.[20][21] The Cray-2 was released in 1985. It had eight central processing
units (CPUs), liquid cooling and the electronics coolant liquid fluorinert was pumped through
the supercomputer architecture. It performed at 1.9 gigaFLOPS and was the world's second fastest
after the M-13 supercomputer in Moscow.[22]
The third pioneering supercomputer project in the early 1960s was the Atlas at the University of Manchester, built by a team led by Tom Kilburn. He
designed the Atlas to have memory space for up to a million words of 48 bits, but because
magnetic storage with such a capacity was unaffordable, the actual core memory of Atlas was
only 16,000 words, with a drum providing memory for a further 96,000 words. The Atlas
operating system swapped data in the form of pages between the magnetic core and the drum.
The Atlas operating system also introduced time-sharing to supercomputing, so that more than
one program could be executed on the supercomputer at any one time.[13] Atlas was a joint venture between Ferranti and Manchester University and was designed to operate at
processing speeds approaching one microsecond per instruction, about one million instructions
per second.[14]
The CDC 6600, designed by Seymour Cray, was finished in 1964 and marked the transition from
germanium to silicon transistors. Silicon transistors could run faster and the overheating problem
was solved by introducing refrigeration to the supercomputer design.[15] Thus the CDC 6600
became the fastest computer in the world. Given that the 6600 outperformed all the other computers of the time by about ten times, it was dubbed a supercomputer and defined the supercomputing market.
Figure 9
Figure 10
END