Robot History
A multilayered sheet of printable material folds up when heated, transforming into a tiny "origami robot" that can be remotely controlled, execute a variety of tasks, and then disappear by degradation.
Dexterous industrial manipulators and industrial vision have roots in advanced robotics work conducted
in artificial intelligence (AI) laboratories since the late 1960s. Yet, even more than with AI itself, these
accomplishments fall far short of the motivating vision of machines with broad human abilities.
Techniques for recognizing and manipulating objects, reliably navigating spaces, and planning actions
have worked in some narrow, constrained contexts, but they have failed in more general circumstances.
Researchers developed a highly maneuverable ribbon-finned underwater robot, modeled on the electric knifefish of South America.
The first robotics vision programs, pursued into the early 1970s, used statistical formulas to detect linear
boundaries in robot camera images and clever geometric reasoning to link these lines into boundaries of
probable objects, providing an internal model of their world. Further geometric formulas related object positions to the joint angles needed for a robot arm to grasp them, or to the steering and drive motions needed to get a mobile robot around (or to) the object. This approach was tedious to program and
frequently failed when unplanned image complexities misled the first steps. An attempt in the late 1970s
to overcome these limitations by adding an expert system component for visual analysis mainly made
the programs more unwieldy—substituting complex new confusions for simpler failures.
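The "geometric formulas" of this era can be illustrated with the inverse kinematics of a two-link planar arm, which converts a target position into the joint angles that reach it. The following Python sketch is a modern illustration of that kind of calculation, not code from any historical system; the function name and parameters are invented for the example.

import math

def two_link_ik(x, y, l1, l2):
    """Return (shoulder, elbow) angles in radians that place the tip
    of a two-link planar arm with link lengths l1, l2 at (x, y)."""
    d_sq = x * x + y * y
    # Law of cosines gives the elbow angle from the target distance.
    cos_elbow = (d_sq - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)  # elbow-down solution
    # Shoulder angle: bearing to the target, corrected for the offset
    # introduced by the bent elbow.
    shoulder = math.atan2(y, x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Example: a 1 m + 1 m arm reaching the point (1.0, 1.0).
print(two_link_ik(1.0, 1.0, 1.0, 1.0))  # (0.0, pi/2)

In the pipeline described above, such a calculation was the last step: the vision stages produced an object position in the internal model, and the arm was then driven to the computed angles.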
Three stages of mobile robot development for MIT's Mars Rover Research Project, which sought to produce a mobile robot to reconnoitre the Martian surface: (A) Genghis, (B) Attila, and (C) Pebbles.
In the mid-1980s Rodney Brooks of the MIT AI lab used this impasse to launch a highly visible new
movement that rejected the effort to have machines create internal models of their surroundings.
Instead, Brooks and his followers wrote computer programs with simple subprograms that connected
sensor inputs to motor outputs, each subprogram encoding a behaviour such as avoiding a sensed
obstacle or heading toward a detected goal. There is evidence that many insects function largely this
way, as do parts of larger nervous systems. The approach resulted in some very engaging insectlike
robots, but—as with real insects—their behaviour was erratic, as their sensors were momentarily misled,
and the approach proved unsuitable for larger robots. Also, this approach provided no
direct mechanism for specifying long, complex sequences of actions—the raison d’être of industrial
robot manipulators and surely of future home robots (note, however, that in 2004 iRobot Corporation
sold more than one million robot vacuum cleaners capable of simple insectlike behaviours, a first for a
service robot).
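The flavour of this behaviour-based approach can be conveyed with a short sketch in which each behaviour maps sensor readings directly to a motor command and a fixed priority ordering arbitrates among them. The Python below is a loose illustration of the idea rather than Brooks's actual architecture; every name, command, and threshold is an assumption invented for the example.

def avoid_obstacle(sensors):
    # Highest priority: turn away from anything closer than 0.3 m.
    if sensors["range_m"] < 0.3:
        return ("turn_degrees", 90)
    return None  # behaviour not triggered

def seek_goal(sensors):
    # Head toward a detected goal, if one is visible.
    if sensors["goal_bearing"] is not None:
        return ("turn_degrees", sensors["goal_bearing"])
    return None

def wander(sensors):
    # Default behaviour: creep forward.
    return ("forward_m", 0.1)

BEHAVIOURS = [avoid_obstacle, seek_goal, wander]  # highest priority first

def control_step(sensors):
    # The first behaviour that fires suppresses the ones below it;
    # no internal model of the world sits in between.
    for behaviour in BEHAVIOURS:
        command = behaviour(sensors)
        if command is not None:
            return command

# Example: an obstacle at 0.2 m wins over a visible goal at 45 degrees.
print(control_step({"range_m": 0.2, "goal_bearing": 45}))

Because each step depends only on the current sensor readings, a single misleading reading changes the robot's action immediately, which is one source of the erratic behaviour noted above.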
Pebbles, a tractorlike robot with a vision-based control system, was developed during the late 1990s as part of MIT's Mars Rover Research Project. About the size of a domestic cat, Pebbles negotiates around obstacles with the aid of a single camera, its only sensor. With its arm attached, Pebbles can collect samples or handle dangerous objects.
Meanwhile, other researchers have continued to pursue various techniques to enable robots to perceive their
surroundings and track their own movements. One prominent example involves semiautonomous
mobile robots for exploration of the Martian surface. Because of the long transmission times for signals,
these “rovers” must be able to negotiate short distances between interventions from Earth.
A particularly interesting testing ground for fully autonomous mobile robot research is football (soccer).
In 1993 an international community of researchers organized a long-term program to develop robots
capable of playing this sport, with progress tested in annual machine tournaments. The first RoboCup
games were held in 1997 in Nagoya, Japan, with teams entered in three competition
categories: computer simulation, small robots, and midsize robots. Merely finding and pushing the ball
was a major accomplishment, but the event encouraged participants to share research, and play
improved dramatically in subsequent years. In 1998 Sony began providing researchers with programmable AIBO robot dogs for a new competition category, giving teams a standard, reliable, prebuilt hardware platform for software experimentation.
Experimental robotic fingers, controlled from a sensor glove, enhance the hand's grasping motion and enable the wearer to perform with one hand many tasks that ordinarily require both hands.
The future
Robot suits that aid the elderly and disabled are under development in Japan.
Numerous companies are working on consumer robots that can navigate their surroundings, recognize
common objects, and perform simple chores without expert custom installation. This process will
produce the first broadly competent “universal robots” with lizardlike minds that can be programmed for
almost any routine chore. With anticipated increases in computing power, by 2030 second-generation
robots with trainable mouselike minds may become possible. Besides application programs, these robots
may host a suite of software “conditioning modules” that generate positive- and negative-reinforcement
signals in predefined circumstances.
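One way to picture such a conditioning module is as a function that watches the robot's state and emits a positive or negative reinforcement signal in predefined circumstances, which a learning program could then use to adjust behaviour. The Python sketch below is purely hypothetical; the state fields and reward values are invented for illustration.

def conditioning_module(state):
    # Hypothetical reinforcement signals; the fields and weights are
    # invented for illustration, not drawn from any real system.
    reward = 0.0
    if state.get("task_completed"):
        reward += 1.0  # positive reinforcement for finishing a chore
    if state.get("collision"):
        reward -= 1.0  # negative reinforcement for bumping into things
    if state.get("battery_level", 1.0) < 0.1:
        reward -= 0.5  # discourage running the battery flat
    return reward

# Example: a completed chore that ended in a collision nets zero reward.
print(conditioning_module({"task_completed": True, "collision": True}))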
Scientists are working to develop flexible electronic skin to bring the sense of touch to robots and prosthetic devices.
By 2040 computing power should make third-generation robots with monkeylike minds possible. Such
robots would learn from mental rehearsals in simulations that would model physical, cultural, and
psychological factors. Physical properties would include shape, weight, strength, texture, and
appearance of things and knowledge of how to handle them. Cultural aspects would include a thing’s
name, value, proper location, and purpose. Psychological factors, applied to humans and other robots,
would include goals, beliefs, feelings, and preferences. The simulation would track external events and
would tune its models to keep them faithful to reality. This should let a robot learn by imitation and
afford it a kind of consciousness. By the middle of the 21st century, fourth-generation robots may exist
with humanlike mental power able to abstract and generalize. Researchers hope that such machines will
result from melding powerful reasoning programs to third-generation machines. Properly educated,
fourth-generation robots are likely to become intellectually formidable.