Robots and AI - Future of the Earth

Robots and Artificial Intelligence:
Artificial intelligence (AI) is arguably the most exciting field in robotics. It's certainly the most controversial: Everybody agrees that a robot can work in an assembly line, but there's no consensus on whether a robot can ever be intelligent.
Like the term "robot" itself, artificial intelligence is hard to define. Ultimate AI would be a recreation of the human thought process -- a man-made machine with our intellectual abilities. This would include the ability to learn just about anything, the ability to reason, the ability to use language and the ability to formulate original ideas. Roboticists are nowhere near achieving this level of artificial intelligence, but they have made a lot of progress with more limited AI. Today's AI machines can replicate some specific elements of intellectual ability.

Computers can already solve problems in limited realms. The basic idea of AI problem-solving is very simple, though its execution is complicated. First, the AI robot or computer gathers facts about a situation through sensors or human input. Next, it compares this information to stored data and decides what the information signifies. Finally, it runs through various possible actions and predicts which one will be most successful based on the collected information. Of course, the computer can only solve problems it's programmed to solve -- it doesn't have any generalized analytical ability. Chess computers are one example of this sort of machine.
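To make the idea concrete, here's a toy sketch of that gather-compare-predict loop in Python. The facts, actions, and scoring rules are all invented for illustration -- a real chess computer's evaluation is far more elaborate.

```python
# A toy sketch of the "gather facts, compare to stored data, pick the best
# action" loop described above. The facts, actions, and scoring values here
# are made up purely for illustration.

def score_action(action, facts, knowledge):
    """Predict how successful an action will be, given what the robot knows."""
    prediction = 0
    for condition, value in knowledge.get(action, []):
        if facts.get(condition):
            prediction += value
    return prediction

# Stored data: how well each action tends to work under each condition.
knowledge = {
    "go_forward": [("path_clear", 10), ("obstacle_ahead", -20)],
    "turn_left":  [("obstacle_ahead", 5), ("path_clear", -2)],
    "stop":       [("obstacle_ahead", 8)],
}

# Facts gathered from sensors (or human input) about the current situation.
facts = {"path_clear": False, "obstacle_ahead": True}

# Run through the possible actions and pick the one predicted to work best.
best = max(knowledge, key=lambda a: score_action(a, facts, knowledge))
print(best)  # -> "stop"
```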


Some modern robots also have the ability to learn in a limited capacity. A learning robot recognizes whether a certain action (moving its legs in a certain way, for instance) achieved a desired result (navigating an obstacle). The robot stores this information and attempts the successful action the next time it encounters the same situation. Again, modern computers can only do this in very limited situations. They can't absorb information of any kind the way a human can. Some robots can learn by mimicking human actions. In Japan, roboticists have taught a robot to dance by demonstrating the moves themselves.
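Here's a minimal sketch of that "remember what worked" style of learning. The situations, actions, and success test are hypothetical, but the pattern -- try something, record whether it succeeded, reuse it later -- is the same.

```python
# A minimal sketch of learning by storing successful actions. The situations,
# actions, and success test are hypothetical.
import random

class LearningRobot:
    def __init__(self, actions):
        self.actions = actions
        self.memory = {}  # situation -> action that worked before

    def choose(self, situation):
        # Reuse a previously successful action if the situation is familiar;
        # otherwise experiment with a random one.
        return self.memory.get(situation, random.choice(self.actions))

    def record(self, situation, action, succeeded):
        # Store the action only if it achieved the desired result.
        if succeeded:
            self.memory[situation] = action

robot = LearningRobot(["short_step", "long_step", "hop"])
action = robot.choose("low_obstacle")
robot.record("low_obstacle", action, succeeded=(action == "hop"))
# Next time it meets "low_obstacle", the robot tries "hop" first (if it worked).
```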

Some robots can interact socially. Kismet, a robot at M.I.T.'s Artificial Intelligence Lab, recognizes human body language and voice inflection and responds appropriately. Kismet's creators are interested in how humans and infants interact based only on tone of voice and visual cues. This low-level interaction could be the foundation of a human-like learning system.

Kismet and other humanoid robots at the M.I.T. AI Lab operate using an unconventional control structure. Instead of directing every action using a central computer, the robots control lower-level actions with lower-level computers. The program's director, Rodney Brooks, believes this is a more accurate model of human intelligence. We do most things automatically; we don't decide to do them at the highest level of consciousness.
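The sketch below captures the general spirit of that layered approach (it is not Kismet's actual software): low-level reflexes act automatically, and a higher-level goal is only consulted when nothing urgent is happening. The sensor values and behavior names are assumptions made for the example.

```python
# A sketch of layered control in the spirit described above. Each layer is a
# simple behavior; low-level reflexes act automatically and the higher-level
# goal only takes over when nothing urgent is happening. Sensor values are
# hypothetical.

def reflex_layer(sensors):
    """Low-level, automatic responses (like keeping your balance)."""
    if sensors["obstacle_cm"] < 20:
        return "swerve"
    if sensors["tilt_deg"] > 15:
        return "brace"
    return None  # nothing urgent

def goal_layer(sensors):
    """Higher-level, deliberate behavior (like deciding where to go)."""
    return "head_toward_charger" if sensors["battery"] < 0.2 else "wander"

def control(sensors):
    # The reflex layer runs first; the goal layer is consulted only
    # when no reflex takes over.
    return reflex_layer(sensors) or goal_layer(sensors)

print(control({"obstacle_cm": 12, "tilt_deg": 3, "battery": 0.9}))  # swerve
print(control({"obstacle_cm": 80, "tilt_deg": 3, "battery": 0.1}))  # head_toward_charger
```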

The real challenge of AI is to understand how natural intelligence works. Developing AI isn't like building an artificial heart -- scientists don't have a simple, concrete model to work from. We do know that the brain contains billions and billions of neurons, and that we think and learn by establishing electrical connections between different neurons. But we don't know exactly how all of these connections add up to higher reasoning, or even low-level operations. The complex circuitry seems incomprehensible.
Because of this, AI research is largely theoretical. Scientists hypothesize about how and why we learn and think, and they experiment with their ideas using robots. Brooks and his team focus on humanoid robots because they feel that being able to experience the world like a human is essential to developing human-like intelligence. It also makes it easier for people to interact with the robots, which potentially makes it easier for the robots to learn.

Just as physical robotic design is a handy tool for understanding animal and human anatomy, AI research is useful for understanding how natural intelligence works. For some roboticists, this insight is the ultimate goal of designing robots. Others envision a world where we live side by side with intelligent machines and use a variety of lesser robots for manual labor, health care and communication. A number of robotics experts predict that robotic evolution will ultimately turn us into cyborgs -- humans integrated with machines. Conceivably, people in the future could load their minds into a sturdy robot and live for thousands of years!

In any case, robots will certainly play a larger role in our daily lives in the future. In the coming decades, robots will gradually move out of the industrial and scientific worlds and into daily life, in the same way that computers spread to the home in the 1980s.
The best way to understand robots is to look at specific designs, starting with how a mobile robot actually gets around.


Photo courtesy NASA
NASA's FIDO Rover is designed for exploration on Mars.

The first obstacle is to give the robot a working locomotion system. If the robot will only need to move over smooth ground, wheels or tracks are the best option. Wheels and tracks can also work on rougher terrain if they are big enough. But robot designers often look to legs instead, because they are more adaptable. Building legged robots also helps researchers understand natural locomotion -- it's a useful exercise in biological research.

Photo courtesy Fujitsu and K&D Technology, Inc.
Fujitsu's HOAP-1 robot

Typically, hydraulic or pneumatic pistons move robot legs back and forth. The pistons attach to different leg segments just like muscles attach to different bones. It's a real trick getting all these pistons to work together properly. When you were a baby, your brain had to figure out exactly the right combination of muscle contractions to walk upright without falling over. Similarly, a robot designer has to figure out the right combination of piston movements involved in walking and program this information into the robot's computer. Many mobile robots have a built-in balance system (a collection of gyroscopes, for example) that tells the computer when it needs to correct its movements.
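As a rough sketch of how such a balance system might feed back into the legs, the loop below reads a stand-in gyroscope and nudges the pistons whenever the body leans too far. The sensor and actuator functions, limits, and gain are placeholders, not real hardware calls.

```python
# A rough sketch of a balance-correction loop: read the gyroscopes, and if the
# body is leaning too far, adjust the leg pistons to compensate. The sensor and
# actuator functions below are hypothetical stand-ins, not real hardware calls.

TILT_LIMIT_DEG = 2.0   # how much lean we tolerate before correcting (assumed)
GAIN = 0.5             # how aggressively to correct (assumed)

def read_gyro_tilt():
    """Pretend gyroscope: returns forward/backward lean in degrees."""
    return 3.4  # in a real robot this would come from the sensor hardware

def adjust_pistons(correction_deg):
    """Pretend actuator: extend/retract pistons to shift the body back."""
    print(f"adjusting pistons by {correction_deg:+.1f} degrees of lean")

def balance_step():
    tilt = read_gyro_tilt()
    if abs(tilt) > TILT_LIMIT_DEG:
        # Push back against the lean, scaled by the gain.
        adjust_pistons(-GAIN * tilt)

balance_step()  # -> adjusting pistons by -1.7 degrees of lean
```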

Photo courtesy NASA
NASA's Frogbot uses springs, linkages and motors to hop from place to place.
What Is It Good For?
Mobile robots stand in for people in a number of ways. Some explore other planets or inhospitable areas on Earth, collecting geological samples. Others seek out landmines in former battlefields. The police sometimes use mobile robots to search for a bomb, or even to apprehend a suspect.

Mobile robots also work in homes and businesses. Hospitals may use robots to transport medications. Some museums use robots to patrol their galleries at night, monitoring air quality and humidity levels. Several companies have developed robotic vacuums.


Bipedal locomotion (walking on two legs) is inherently unstable, which makes it very difficult to implement in robots. To create more stable robot walkers, designers commonly look to the animal world, specifically insects. Six-legged insects have exceptionally good balance, and they adapt well to a wide variety of terrain.
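A simple sketch shows why six legs make balance easier: in an alternating tripod gait, three legs stay planted while the other three swing forward, so the robot always rests on a stable tripod. The leg names and step routine below are illustrative only.

```python
# A sketch of the alternating tripod gait many six-legged robots use: three
# legs stay planted (a stable tripod) while the other three swing forward,
# then the groups swap. Leg names and the "step" routine are illustrative.

TRIPOD_A = ["front_left", "middle_right", "rear_left"]
TRIPOD_B = ["front_right", "middle_left", "rear_right"]

def step(swing_legs, support_legs):
    print(f"supporting on {support_legs}; swinging {swing_legs} forward")

def walk(cycles=2):
    for _ in range(cycles):
        step(TRIPOD_A, TRIPOD_B)   # A swings while B supports
        step(TRIPOD_B, TRIPOD_A)   # then B swings while A supports

walk()
```
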
Some mobile robots are controlled by remote -- a human tells them what to do and when to do it. The remote control might communicate with the robot through an attached wire, or using radio or infrared signals. Remote robots, often called puppet robots, are useful for exploring dangerous or inaccessible environments, such as the deep sea or inside a volcano. Some robots are only partially controlled by remote. For example, the operator might direct the robot to go to a certain spot, but not steer it there -- the robot would find its own way.
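Here's a made-up sketch of that division of labor: the operator supplies only a target position, and the robot works out its own steps toward it, one cell of a simple grid at a time. The grid world and movement rules are assumptions for the example.

```python
# A sketch of the "tell it where to go, let it find its own way" idea: the
# operator supplies only a target position, and the robot plans its own steps.
# The grid world and movement rules here are made up for the example.

def go_to(start, target):
    x, y = start
    path = [start]
    while (x, y) != target:
        # Take one step in whichever direction closes the gap.
        if x != target[0]:
            x += 1 if target[0] > x else -1
        elif y != target[1]:
            y += 1 if target[1] > y else -1
        path.append((x, y))
    return path

# Operator's only input: "go to (3, 2)". The robot figures out the steps.
print(go_to((0, 0), (3, 2)))
# [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]
```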
