In: Mechanical Engineering
1. Describe everything about the relationship between intelligent control and AI.
2. Describe everything about the basic concept of control system stability.
3. Write your opinion about artificial intelligence in robot control systems.
1. Ultimately, the problem of Artificial Intelligence (and thus of Neural Nets) comes down to that of making a sequence of decisions over time so as to achieve certain goals. AI is thus a control problem, at least in a trivial sense, but also in a deeper sense. This view is to be contrasted with AI’s traditional view of itself, in which the central paradigm is not that of control, but of problem solving in the sense of solving a puzzle, playing a board game, or solving a word problem. Areas where the problem-solving paradigm does not naturally apply, such as robotics and vision, have been viewed as outside mainstream AI. I think that the control viewpoint is now much more profitable than the problem-solving one, and that control should be the centerpiece of AI and machine learning research.

If both AI and more traditional areas of engineering are viewed as approaches to the general problem of control, then why do they seem so different? In the 1950s and early 1960s these fields were not clearly distinguished. Pattern recognition, for example, was once a central concern of AI and only gradually shifted to become a separate specialized subfield. This happened also with various approaches to learning and adaptive control. I would characterize the split as having to do with the familiar dilemma of choosing between obtaining clear, rigorous results on the one hand, and exploring the most interesting, powerful systems one can think of on the other. AI clearly took the latter, “more adventurous” approach, utilizing fully the experimental methodology made possible by digital computers, while the “more rigorous” approach became a natural extension of existing engineering theory, based on the pencil-and-paper mathematics of theorem and proof. See the figure. This is not in any way to judge these fields.
The most striking thing indicated in the figure is not that some work was more rigorous and some more adventurous, but the depth of the gulf between work of these two kinds. Most AI work makes absolutely no contact with traditional engineering algorithms, and vice versa. Perhaps this was necessary for each field to establish its own identity, but now it is counterproductive. The hottest spot in both fields is the one between them. The current enormous popularity of neural networks is due at least in part to their seeming to span these two—the applications potential of rigorous engineering approaches and the enhanced capability of AI. Intelligent control is also in this position. My conclusion, then, is that there is indeed a very fruitful area that lies more or less between intelligent control and machine learning (including connectionist or neural net learning), and which therefore presents an excellent opportunity for interdisciplinary research.
Dynamic Programming and A∗ search. These two techniques have long been known to be closely related, if not identical. Nevertheless, the complete relationship remains obscure. More importantly, many results have been obtained independently for each technique. How many of these results carry over to the other field? Amazingly, such inter-relations remain almost completely unexplored, at least in the open literature.

Back-propagation. Back-propagation is a connectionist (neural net) learning technique for learning real-valued nonlinear mappings from examples, that is, for nonlinear regression (see Rumelhart, Hinton & Williams, 1986). Such a function has many possible uses in control—for learning nonlinear control laws, plant dynamics, and inverse dynamics. The important thing is not back-propagation as a particular algorithm—it’s clearly limited and will probably be replaced in the next few years—but the idea of a general structure for learning nonlinear mappings. This will remain of relevance to intelligent control.

Temporal-Difference Learning. This is a kind of learning specialized for predicting the long-term behavior of time series. It was first used in a famous early AI program, Samuel’s checker player (Samuel, 1959), and has since been used in Genetic Algorithms (Holland, 1986) and in adaptive control in the role of a learned “critic” (Barto, Sutton & Anderson, 1983; Werbos, 1987). The basic idea is to use the change, or temporal difference, in prediction in place of the error in standard learning processes. Consider a sequence of predictions ending in a final outcome, perhaps a sequence of predictions about the outcome of a chess game, one made after each move, followed by the actual outcome.
A normal learning process would adjust each prediction to look more like the final outcome, whereas a temporal-difference learning process would adjust each prediction to look more like the prediction that follows it (the actual outcome is taken as a final prediction for this purpose). If the classic LMS algorithm is extended in this manner to yield a temporal-difference algorithm, then, surprisingly, the new algorithm both converges to better predictions and is significantly simpler to implement (Sutton, 1988).

The Perfect Model Disease. Ron Rivest has coined this term to describe an “illness” that AI (and, to a lesser extent, control theory) has had for many years and is only now beginning to recover from. The illness is the assumption of, and reliance upon, a perfect model of the world. In toy domains such as the blocks world, puzzle solving, and game playing this may be adequate, but in general of course it is not. Without a perfect model, everything becomes much harder—or at least much different—and so we have been reluctant to abandon the perfect model assumption. The alternative is to accept that our models of the world will always be incomplete, inaccurate, inconsistent, and changing. We will need to maintain multiple models, at multiple levels of abstraction and granularity, and at multiple time scales. It is no longer adequate to view imperfections and inconsistencies in our models as transients and to perform steady-state analysis; we must learn to work with models in which these imperfections will always be present. This means certainty-equivalence approaches are not enough and dual control approaches are needed.

Control without Reference Signals. The dogma in control is to assume that some outside agency specifies a desired trajectory for the plant outputs in such a way that controls or control adjustments can be determined. For many problems, however, this is simply not appropriate. Consider a chess game.
The goal is clearly defined, but in no sense does one ever have a desired trajectory for the game or the moves to be made. Suppose I want a robot to learn to walk bipedally. Producing target trajectories for the joint angles and velocities is a large part of the problem, a part which needs to be addressed by learning, not just by analysis and a priori specification. In my opinion, most real control problems are of this sort—in most cases it is natural to provide a specification of the desired result that falls far short of the desired trajectories usually assumed in conventional and adaptive control. This problem will become more and more common as we begin to consider imperfect and weak models, and particularly for systems with long-delayed effects of controls on goals. Reinforcement learning represents one approach to this problem (Mendel & McLaren, 1970; Sutton, 1984).
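The temporal-difference idea described earlier can be made concrete with a small sketch. The tabular value function `V`, the three-state episode, and the step size `alpha` below are illustrative choices, not taken from the cited papers; the key line nudges each prediction toward the prediction that follows it, with the actual outcome serving as the final prediction.

```python
def td0_episode(V, states, outcome, alpha=0.1):
    """One temporal-difference pass over an episode: each prediction is
    adjusted toward the prediction that follows it; the final outcome
    plays the role of the last prediction."""
    targets = [V[s] for s in states[1:]] + [outcome]
    for s, target in zip(states, targets):
        V[s] += alpha * (target - V[s])

# Three predictions made during an "episode" that always ends in outcome 1.0.
V = {0: 0.0, 1: 0.0, 2: 0.0}
for _ in range(500):
    td0_episode(V, [0, 1, 2], outcome=1.0)
```

After repeated episodes every entry of `V` drifts toward 1.0, the same answer a normal learning process would reach only by comparing each prediction directly against the final outcome.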
2.
A control system (also called a controller) manages a system’s operation so that the system’s response approximates commanded behavior. A common example of a control system is the cruise control in an automobile: The cruise control manipulates the throttle setting so that the vehicle speed tracks the commanded speed provided by the driver.
In years past, mechanical or electrical hardware components performed most control functions in technological systems. When hardware solutions were insufficient, continuous human participation in the control loop was necessary.
In modern system designs, embedded processors have taken over many control functions. A well-designed embedded controller can provide excellent system performance under widely varying operating conditions. To ensure a consistently high level of performance and robustness, an embedded control system must be carefully designed and thoroughly tested.
This book presents a number of control system design techniques in a step-by-step manner and identifies situations where the application of each is appropriate. It also covers the process of implementing a control system design in C or C++ in a resource-limited embedded system. Some useful approaches for thoroughly testing control system designs are also described.
There is no assumption of prior experience with control system engineering. The use of mathematics will be minimized and explanations of mathematically complex issues will appear in boxed sections. Study of those sections is recommended, but is not required for understanding the remainder of the book. The focus is on presenting control system design and testing procedures in a format that lets you put them to immediate use.
This chapter introduces the fundamental concepts of control systems engineering and describes the steps of designing and testing a controller. It introduces the terminology of control system design and shows how to interpret block diagram representations of systems.

Many of the techniques of control system engineering rely on mathematical manipulations of system models. The easiest way to apply these methods is to use a good control system design software package such as the MATLAB® Control System Toolbox. MATLAB and related products such as Simulink® and the Control System Toolbox are used in later chapters to develop system models and apply control system design techniques.
Throughout this book, words and phrases that appear in the Glossary are displayed in italics the first time they appear.
The goal of a controller is to move a system from its initial condition to a desired state and, once there, maintain the desired state. For the cruise control mentioned earlier, the initial condition is the vehicle speed at the time the cruise control is engaged. The desired state is the speed setting supplied by the driver. The difference between the desired and actual state is called the error signal. It is also possible that the desired state will change over time. When this happens, the controller must adjust the state of the system to track changes in the desired state.
A control system that attempts to keep the output signal at a constant level for long periods of time is called a regulator. In a regulator, the desired output value is called the set point. A control system that attempts to track an input signal that changes frequently (perhaps continuously) is called a servomechanism.
Some examples will help clarify the control system elements in familiar systems. Control systems typically have a sensor that measures the output signal to be controlled and an actuator that changes the system’s state in a way that affects the output signal. As Table 1.1 shows, many control systems are implemented using simple sensing hardware that turns an actuator such as a valve or switch on and off.
In many control system designs, it is possible to use either open loop control or feedback control. Feedback control systems measure the system parameter being controlled and use that information to determine the control actuator signal. Open loop systems do not use feedback. All the systems described in Table 1.1 use feedback control. The example below demonstrates why feedback control is the nearly universal choice for control system applications.
Consider a home heating system consisting of a furnace and a controller that cycles the furnace off and on to maintain a desired room temperature. Let’s look at how this type of controller could be implemented using open loop control and feedback control.
Open Loop Control: For a given combination of outdoor temperature and desired indoor temperature, it is possible to experimentally determine the ratio of furnace on time to off time that maintains the desired indoor temperature. Suppose a repeated cycle of 5 minutes of furnace on and 10 minutes furnace off produces the desired indoor temperature for a specific outdoor temperature. An open loop controller implementing this algorithm will produce the desired results only so long as the system and environment remain unchanged. If the outdoor temperature changes or if the furnace airflow changes because the air filter was replaced, the desired indoor temperature will no longer be maintained. This is clearly an unsatisfactory design.
Feedback Control: A feedback controller for this system measures the indoor temperature and turns the furnace on when the temperature drops below a turn-on threshold. The controller turns the furnace off when the temperature reaches a higher turn-off threshold. The threshold temperatures are set slightly above and below the desired temperature to keep the furnace from rapidly cycling on and off. This controller adapts automatically to outside temperature changes and to changes in system parameters such as airflow.
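As a sketch of the feedback scheme just described, the controller needs only the measured temperature and the two thresholds. The set point, hysteresis band, and the toy room model below are invented for illustration; they are not taken from any particular furnace.

```python
def thermostat(temp, furnace_on, set_point=20.0, band=0.5):
    """Bang-bang feedback controller with hysteresis: turn the furnace on
    below set_point - band, off above set_point + band, and otherwise keep
    its current state to avoid rapid cycling."""
    if temp < set_point - band:
        return True
    if temp > set_point + band:
        return False
    return furnace_on  # inside the band: hold the current state

# Toy room model: fixed heat input when the furnace runs, plus leakage
# toward a 5 degree outdoor temperature. One step = one minute.
temp, on = 15.0, False
history = []
for _ in range(600):
    on = thermostat(temp, on)
    if on:
        temp += 0.5               # furnace heat input per minute
    temp -= 0.02 * (temp - 5.0)   # leakage toward the outdoors
    history.append(temp)
```

Note that the controller never needs to know the outdoor temperature or the furnace capacity; the feedback measurement compensates for both, which is exactly the advantage over the open loop timing schedule above.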
This book focuses on control systems that use feedback. This is because feedback controllers, in general, provide superior system performance in comparison to open loop controllers.
While it is possible to develop very simple feedback control systems through trial and error, for more complex applications the only feasible approach is the application of design methods that have been proven over time. This book covers a number of control system design methods and shows you how to employ them directly. The emphasis is on understanding the input and results of each technique, without requiring a deep understanding of the mathematical basis for the method.
As the applications of embedded computing expand, an increasing number of controller functions are moving to software implementations. To function as a feedback controller, an embedded processor uses one or more sensors to measure the system state and drives one or more actuators that change the system state. The sensor measurements are inputs to a control algorithm that computes the actuator commands. The control system design process encompasses the development of a control algorithm and its implementation in software along with related issues such as the selection of sensors, actuators, and the sampling rate.
The design techniques described in this book can be used to develop mechanical and electrical hardware controllers, as well as software controller implementations. This approach allows you to defer the decision of whether to implement a control algorithm in hardware or software until after its initial design has been completed.
In the context of control systems, a plant is a system to be controlled. From the controller’s point of view, the plant has one or more outputs and one or more inputs. Sensors measure the plant outputs and actuators drive the plant inputs. The behavior of the plant itself can range from trivially simple to extremely complex. At the beginning of a control system design project, it is helpful to identify a number of plant characteristics relevant to the design process.
Linear and Nonlinear Systems
A linear plant model is required for some of the control system design techniques covered in following chapters. In simple terms, a linear system produces an output that is proportional to its input. Small changes in the input signal result in small changes in the output. Large changes in the input cause large changes in output. A truly linear system must respond proportionally to any input signal, no matter how large. Note that this proportionality could also be negative: A positive input might produce a proportional negative output.
Definition of a Linear System
Consider a plant with one input and one output. Suppose you run the system for a period of time while recording the input and output signals. Call the input signal u1(t) and the output signal y1(t). Perform this experiment again with a different input signal. Name the input and output signals from this run u2(t) and y2(t) respectively. Now perform a third run of the experiment with the input signal u3(t) = u1(t) + u2(t).
The plant is linear if the output signal y3(t) = y1(t) + y2(t) for any arbitrarily selected input signals u1(t) and u2(t).
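This experiment can be carried out numerically. The sketch below (the plant coefficients and test inputs are arbitrary choices for illustration) applies the superposition test to a linear first-order system and to the same system with input saturation; only the former passes.

```python
def simulate(f, u):
    """Run a single-input plant one step at a time.
    f maps (state, input) -> next state; the output is the state itself."""
    y, x = [], 0.0
    for uk in u:
        x = f(x, uk)
        y.append(x)
    return y

linear = lambda x, u: 0.9 * x + 0.1 * u
saturating = lambda x, u: 0.9 * x + 0.1 * max(min(u, 1.0), -1.0)

u1 = [1.0] * 50
u2 = [0.8] * 50
u3 = [a + b for a, b in zip(u1, u2)]  # u3(t) = u1(t) + u2(t)

def superposition_gap(f):
    """Largest deviation of y3 from y1 + y2 over the run."""
    y1, y2, y3 = simulate(f, u1), simulate(f, u2), simulate(f, u3)
    return max(abs(a + b - c) for a, b, c in zip(y1, y2, y3))
```

For the linear plant the gap is zero (up to floating-point rounding); for the saturating plant the summed input exceeds the saturation limit and the gap is large, revealing the nonlinearity.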
Real world systems are never precisely linear. Various factors always exist that introduce nonlinearities into the response of a system. For example, some nonlinearities in the automotive cruise control discussed earlier are:
However, the linear idealization is extremely useful as a tool for system analysis and control system design. Several of the design methods in the following chapters require a linear plant model. This immediately raises a question: If you do not have a linear model of your plant, how do you obtain one?
The approach usually taught in engineering courses is to develop a set of mathematical equations based on the laws of physics as they apply to the operation of the plant. These equations are often nonlinear, in which case it is necessary to perform additional steps to linearize them. This procedure requires intimate knowledge of plant behavior as well as a strong mathematical background.
In this book, we don’t assume this type of background. Our focus is on simpler methods of acquiring a linear plant model. For instance, if you need a linear plant model but don’t want to develop one, you can always let someone else do it for you. Linear plant models are sometimes available from system data sheets or by request from experts familiar with the mathematics of a particular type of plant. Another approach is to perform a literature search to locate linear models of plants similar to the one of interest.
System identification is an alternative if none of the above approaches are suitable. System identification is a technique for performing semi-automated linear plant model development. This approach uses recorded plant input and output signal data to develop a linear system model that best fits the input and output data. System identification is discussed further in Chapter 3.
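As a minimal illustration of the idea (not the procedure of Chapter 3), the sketch below fits the first-order discrete model y[k+1] = a·y[k] + b·u[k] to recorded input/output data by solving the 2×2 least-squares normal equations; the data here are generated from a known plant so the fit can be checked.

```python
import random

def identify_first_order(u, y):
    """Least-squares fit of y[k+1] = a*y[k] + b*u[k] from recorded
    input u and output y, via the 2x2 normal equations."""
    Syy = sum(yk * yk for yk in y[:-1])
    Suu = sum(uk * uk for uk in u[:-1])
    Syu = sum(yk * uk for yk, uk in zip(y[:-1], u[:-1]))
    Sy1y = sum(y1 * y0 for y1, y0 in zip(y[1:], y[:-1]))
    Sy1u = sum(y1 * u0 for y1, u0 in zip(y[1:], u[:-1]))
    det = Syy * Suu - Syu ** 2
    a = (Sy1y * Suu - Sy1u * Syu) / det
    b = (Sy1u * Syy - Sy1y * Syu) / det
    return a, b

# Generate noise-free data from a known plant, then recover its parameters.
random.seed(1)
u = [random.uniform(-1, 1) for _ in range(200)]
y = [0.0]
for uk in u[:-1]:
    y.append(0.85 * y[-1] + 0.3 * uk)

a, b = identify_first_order(u, y)
```

With noise-free data from a plant in the same model class, the recovered coefficients match the true values; real identification tools handle noise, higher-order models, and model-order selection.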
Simulation is another technique for developing a linear plant model. You can develop a nonlinear simulation of your plant using a tool such as Simulink and derive a linear plant model based on the simulation. We will apply this approach in some of the examples presented in later chapters.
Perhaps you just don’t want to expend the effort required to develop a linear plant model. With no plant model, an iterative procedure must be used to determine a suitable controller structure and parameter values. Chapter 2 discusses procedures for applying and tuning PID controllers. PID controller tuning is carried out using the results of experiments performed on the system consisting of plant plus controller.
Time Delays
Sometimes a linear model accurately represents the behavior of a plant, but a time delay exists between an actuator input and the start of the plant response to the input. This does not refer to sluggishness in the plant’s response. A time delay exists only when there is absolutely no response for some time interval following a change to the plant input.
For example, a time delay occurs when controlling the temperature of a shower. Changes in the hot or cold water valve positions do not have immediate results. There is a delay while water with the adjusted temperature flows up to the shower head and then down onto the shower-taker. Only then does feedback exist to indicate if further temperature adjustments are needed.
Many industrial processes exhibit time delays. Control system design methods that rely on linear plant models can’t directly work with time delays, but it is possible to extend a linear plant model to simulate the effects of a time delay. The resulting model is also linear and captures the approximate effects of the time delay. Linear control system design methods are applicable to the extended plant model. Time delays will be discussed further in Chapter 3.
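A pure transport delay is easy to add to a simulated plant: hold the input samples in a first-in-first-out buffer. In this sketch (the plant coefficients and delay length are arbitrary), the output shows no response at all for the first five samples, which is exactly the defining property of a time delay rather than mere sluggishness.

```python
from collections import deque

def delayed_plant(u, delay_steps, a=0.9, b=0.1):
    """First-order plant x[k+1] = a*x[k] + b*u_delayed[k], where the input
    passes through a pure transport delay of delay_steps samples,
    modeled as a FIFO buffer."""
    buf = deque([0.0] * delay_steps)
    x, y = 0.0, []
    for uk in u:
        buf.append(uk)
        x = a * x + b * buf.popleft()  # input from delay_steps samples ago
        y.append(x)
    return y

y = delayed_plant([1.0] * 20, delay_steps=5)  # step input with a 5-sample delay
```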
Continuous-Time and Discrete-Time Systems
A continuous-time system has outputs with values defined at all points in time. The outputs of a discrete-time system are only updated or used at discrete points in time. Real world plants are usually best represented as continuous-time systems. In other words, these systems have measurable parameters such as speed, temperature, weight, etc. defined at all points in time.
The discrete-time systems of interest in this book are embedded processors and their associated input/output (I/O) devices. An embedded computing system measures its inputs and produces its outputs at discrete points in time. The embedded software typically runs at a fixed sampling rate, which results in input and output device updates at equally spaced points in time.
I/O Between Discrete-Time Systems and Continuous-Time Systems
A class of I/O devices interfaces discrete-time embedded controllers with continuous plants by performing direct conversions between analog voltages and the digital data values used in the processor. The analog to digital converter (ADC) performs input from an analog plant sensor to a discrete-time embedded computer. Upon receiving a conversion command, the ADC samples its analog input voltage and converts it to a quantized digital value. The behavior of the analog input signal between samples is unknown to the embedded processor.
The digital to analog converter (DAC) converts a quantized digital value to an analog voltage, which drives an analog plant actuator. The output of the DAC remains constant until it is updated at the next control algorithm iteration.
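Idealized ADC and DAC behavior can be sketched in a few lines. The resolution and reference voltage below are assumptions for illustration, not the parameters of any particular device; the round trip through both converters introduces at most one least-significant-bit of quantization error.

```python
def adc(voltage, bits=10, v_ref=5.0):
    """Ideal ADC: clamp the input to [0, v_ref] and quantize it
    to the nearest code in a bits-wide range."""
    levels = (1 << bits) - 1
    v = min(max(voltage, 0.0), v_ref)
    return round(v / v_ref * levels)

def dac(code, bits=10, v_ref=5.0):
    """Ideal DAC: map a digital code back to an analog voltage.
    In hardware the output holds this value until the next update."""
    levels = (1 << bits) - 1
    return code / levels * v_ref

code = adc(3.3)    # 10-bit code for a 3.3 V sensor reading
v_out = dac(code)  # reconstructed voltage, within one LSB of 3.3 V
```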
Two basic approaches are available for developing control algorithms that run as discrete-time systems. The first is to perform the design entirely in the discrete-time domain. For design methods that require a linear plant model, this method requires conversion of the continuous-time plant model to a discrete-time equivalent. One drawback of this approach is that it is necessary to specify the sampling rate of the discrete-time controller at the very beginning of the design process. If the sampling rate changes, all the steps in the control algorithm development process must be repeated to compensate for the change.
An alternative procedure is to perform the control system design in the continuous-time domain followed by a final step to convert the control algorithm to a discrete-time representation. Using this method, changes to the sampling rate only require repetition of the final step. Another benefit of this approach is that the continuous-time control algorithm can be implemented directly in analog hardware, if that turns out to be the best solution for a particular design. A final benefit of this approach is that the methods of control system design tend to be more intuitive in the continuous-time domain than in the discrete-time domain.
For these reasons, the design techniques covered in this book will be based in the continuous-time domain. Chapter 8 will discuss the conversion of a continuous-time control algorithm to an implementation in a discrete-time embedded processor using the C/C++ programming languages.
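As a sketch of that final conversion step (the gains, sample period, and toy plant below are invented for illustration, and PI control itself is not introduced until later), a continuous-time control law u(t) = Kp·e(t) + Ki·∫e dt becomes a discrete-time algorithm by replacing the integral with a running sum at the sampling period:

```python
class DiscretePI:
    """Continuous-time PI law u(t) = Kp*e(t) + Ki * integral(e dt),
    converted to discrete time by backward-Euler integration at
    sample period dt. Changing dt only touches this class."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, error):
        self.integral += error * self.dt  # running-sum approximation
        return self.kp * error + self.ki * self.integral

# Close the loop around a toy first-order plant x' = -x + u.
pi = DiscretePI(kp=2.0, ki=1.0, dt=0.01)
x = 0.0
for _ in range(5000):                 # 50 seconds of simulated time
    u = pi.update(1.0 - x)            # set point = 1
    x += 0.01 * (-x + u)              # Euler step of the plant
```

If the sampling rate changes, only the `dt` passed to the controller needs to change, which is the benefit claimed for the continuous-first design route.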
Number of Inputs and Outputs
The simplest feedback control system has one input and one output, and is called a single-input-single-output (SISO) system. In a SISO system, a sensor measures one signal and the controller produces one signal to drive an actuator. All of the design procedures in this book are applicable to SISO systems.
Control systems with more than one input or output are called MIMO systems, meaning multiple-input-multiple-output systems. Because of the added complexity, fewer MIMO system design procedures are available. Only the pole placement and optimal control design techniques (covered in Chapters 5 and 6) are directly suitable for MIMO systems. Chapter 7 covers issues specific to MIMO control system design.
In many cases, MIMO systems can be decomposed into a number of approximately equivalent SISO systems. For example, flying an aircraft requires simultaneous operation of several independent control surfaces including the rudder, ailerons, and elevator. This is clearly a MIMO system, but focusing on a particular aspect of behavior can result in a SISO system for control system design purposes. For instance, assume the aircraft is flying straight and level and must maintain a desired altitude. A SISO system for altitude control uses the measured altitude as its input and the commanded elevator position as its output. In this situation, the sensed parameter and the controlled parameter are directly related and have little or no interaction with other aspects of system control.
The critical factor that determines whether a MIMO system is suitable for decomposition into a number of SISO systems is the degree of coupling between inputs and outputs. If changes to a particular plant input result in significant changes to only one of its outputs, it is probably reasonable to represent the behavior of that input-output signal pair as a SISO system. When use of this technique is appropriate, all of the SISO control system design approaches become available for use with the system.
However, when too much coupling exists from a plant input to multiple outputs, there is no alternative but to perform a control system design using a MIMO method. Even in systems with weak cross coupling, the use of a MIMO design procedure will generally produce a superior design compared to the multiple SISO designs developed assuming no cross coupling between input-output signal pairs.
The two fundamental steps in designing a controller are: (1) selecting a controller structure, and (2) determining a value for each of the structure’s adjustable design parameters.
The controller structure identifies the inputs, outputs, and mathematical form of the control algorithm. Each controller structure contains one or more adjustable design parameters. Given a controller structure, the control system designer must select a value for each parameter so that the overall system (consisting of the plant and the controller) satisfies performance requirements.
For example, in the root locus method described in Chapter 4, the controller structure might produce an actuator signal computed as a constant (called the gain) times the error signal. The adjustable parameter in this case is the value of the gain.
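A sketch of this gain-times-error structure on a toy first-order plant (the plant and the numbers are invented for illustration, not taken from the root locus chapter) also exposes its classic limitation, a steady-state error that shrinks only as the gain grows:

```python
def closed_loop_final_value(gain, steps=5000, dt=0.01):
    """Proportional control of the toy plant x' = -x + u:
    the actuator signal is a constant gain times the error signal."""
    x, set_point = 0.0, 1.0
    for _ in range(steps):
        u = gain * (set_point - x)   # gain-times-error control law
        x += dt * (-x + u)           # Euler step of the plant
    return x

low_gain = closed_loop_final_value(4.0)    # settles near 4/(1+4) = 0.8
high_gain = closed_loop_final_value(49.0)  # settles near 49/50 = 0.98
```

For this plant the output settles at gain/(1 + gain) of the commanded value, never quite reaching it; discovering a shortfall like this is the kind of result that drives the iterative move to a richer controller structure.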
Like engineering design in other domains, control system design tends to be an iterative process. During the initial controller design iteration, it is best to begin with the simplest controller structure that could possibly provide adequate performance. Then, using one or more of the design methods in the following chapters, the designer attempts to identify values for the controller parameters that result in acceptable system performance.
It may turn out that no combination of parameter values for a given controller structure results in satisfactory performance. When this happens, the controller structure must be altered in some way to enable performance goals to be met. The designer then determines values for the adjustable parameters of the new structure. The cycle of controller structure modification and design parameter selection repeats until a final design with acceptable system performance has been achieved.
The following chapters contain several examples demonstrating the application of this two-step procedure using different control system design techniques. The examples come from engineering domains where control systems are regularly applied. Studying the steps in each example will help you develop an understanding of how to select an appropriate controller structure. For some of the design techniques, the determination of design parameter values is an automated process using control system design software. For the other design methods, you must follow the appropriate steps to select values for the design parameters.
Following each iteration of the two-step design procedure, you must evaluate the resulting controller to see if it meets performance requirements. Chapter 9 covers techniques for testing control system designs, including simulation testing and tests of the controller operating in conjunction with the actual plant.
Linear System Block Diagram Algebra
It is possible to manipulate block diagrams containing only linear components to achieve compact mathematical expressions representing system behavior. The goal of this manipulation is to determine the system output as a function of its input. The expression resulting from this exercise is useful in various control system analysis and design procedures.
Each block in the diagram must represent a linear system expressed in the form of a transfer function. Transfer functions are introduced in Chapter 3. Knowledge of the details of transfer functions is not required to perform block diagram algebra.
Figure 1.2 is a block diagram of a simple linear feedback control system. Lower case characters identify the signals in this system.
The blocks in the diagram represent linear system components. Each block can represent dynamic behavior with any degree of complexity as long as the requirement of linearity is satisfied.
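Since Figure 1.2 is not reproduced here, assume the conventional arrangement: a forward block G(s) driven by the error signal e, a feedback block H(s) on the measured output y, and a reference input r. The reduction to a single input-output expression then proceeds as:

```latex
e = r - H(s)\,y, \qquad y = G(s)\,e
\;\Rightarrow\; y = G(s)\bigl(r - H(s)\,y\bigr)
\;\Rightarrow\; \frac{y}{r} = \frac{G(s)}{1 + G(s)\,H(s)}
```

The final ratio is the closed-loop transfer function, the compact expression used throughout later analysis and design procedures.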
One of the first steps in the control system development process is the definition of a suitable set of system performance specifications. Performance specifications guide the design process and provide the means for determining when a controller design is satisfactory. Controller performance specifications can be stated in both the time domain and in the frequency domain.
Time domain specifications usually relate to performance in response to a step change in the reference input. An example of such a step input is instantaneously changing the reference input from 0 to 1. Time domain specifications include, but are not limited to, the following parameters [1]:
Stability is a critical issue throughout the control system design process. A stable controller produces appropriate responses to changes in the reference input. If the system stops responding properly to changes in the reference input and does something else instead, it has become unstable.
Figure 1.5 shows an example of unstable system behavior. The initial response to the step input overshoots the commanded value by a large amount. The response to that overshoot is an even larger overshoot in the other direction. This pattern continues, with increasing output amplitude over time. In a real system, an unstable oscillation like this grows in amplitude until some nonlinearity such as actuator saturation (or a system breakdown!) limits the response.
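The growth pattern described for Figure 1.5 is easy to reproduce with a second-order difference equation (the coefficients below are invented for illustration; the roots of z² − a1·z − a2 determine stability, with magnitudes inside the unit circle giving a decaying response and outside giving growth):

```python
def simulate_step_response(a1, a2, steps=80):
    """y[k] = a1*y[k-1] + a2*y[k-2] + u[k] driven by a unit step.
    The roots of z^2 - a1*z - a2 set stability: magnitude < 1 settles,
    magnitude > 1 grows without bound."""
    y = [0.0, 0.0]
    for _ in range(steps):
        y.append(a1 * y[-1] + a2 * y[-2] + 1.0)
    return y

stable = simulate_step_response(1.2, -0.4)    # pole magnitude ~0.63: settles
unstable = simulate_step_response(1.8, -1.1)  # pole magnitude ~1.05: growing swings
```

The unstable run overshoots, then swings back further the other way, with the amplitude increasing on every cycle; in this idealized simulation nothing limits the growth, whereas a real system would hit actuator saturation or break.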
Testing is an integral part of the control system design process. Many of the design methods in this book rely on the use of a linear plant model. Creating a linear model always involves approximation and simplification of the true plant behavior. The implementation of a controller using an embedded processor introduces nonlinear effects such as quantization. As a result, both the plant and the controller contain nonlinear effects that are not accounted for in a linear control system design.
The ideal way to demonstrate correct operation of the nonlinear plant and controller over the full range of system behavior is by performing thorough testing with an actual plant. This type of system-level testing normally occurs late in the product development process when prototype hardware becomes available. Problems found at this stage of the development cycle tend to be very expensive to fix.
Because of this, it is highly desirable to perform thorough testing at a much earlier stage of the development cycle. Early testing enables discovery and repair of problems when they are relatively easy and inexpensive to fix. However, testing the controller early in the product development process may not be easy if a prototype plant does not exist on which to perform tests.
System simulation provides a solution to this problem [2]. A simulation containing detailed models of the plant and controller is extremely valuable for performing early-stage control system testing. This simulation should include all relevant nonlinear effects present in the actual plant and controller implementations. While the simulation model of the plant must necessarily be a simplified approximation of the actual system, it should be a much more authentic representation than the linear plant model used in the controller design.
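As an illustration of the point above, the sketch below (all values hypothetical) simulates the same first-order plant twice: once as the ideal linear model used for design, and once with two nonlinear effects an embedded implementation adds, namely measurement quantization (an assumed 8-bit ADC spanning 0 to 2) and actuator saturation. The nonlinear run is still an approximation of the real plant, but a more authentic one than the linear design model.

```python
def simulate(K=0.5, r=1.0, n=40, quantized=False):
    """First-order plant with optional ADC quantization and actuator limits."""
    y = 0.0
    q = 2.0 / 255                        # assumed ADC resolution (8-bit over 0..2)
    for _ in range(n):
        meas = round(y / q) * q if quantized else y   # measurement quantization
        u = K * (r - meas)
        u = max(-1.0, min(1.0, u))       # actuator saturation
        y = y + u                        # integrator plant
    return y

linear    = simulate(quantized=False)
realistic = simulate(quantized=True)
print(linear, realistic)   # quantization leaves a small residual error
```

The quantized run settles near, but not exactly at, the commanded value; effects like this residual limit cycle are invisible in a purely linear design study.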
When using a simulation in a product development process, it is imperative to perform thorough simulation verification and validation.
The verification step is relevant for any software development process; it shows that the software performs as its designers intended. For a simulation, verification consists of making sure that the models are correctly implemented and produce the expected results. Because it does not require the physical system to exist, verification can be performed, and commonly is, in the earliest phases of a product development project, even for a system that has not yet been built.
Validation is a demonstration that the simulation models the embedded system and the real world operational environment with acceptable accuracy. A standard approach for validation is to use the results of system operational tests for comparison against simulation results. This involves running the simulation in a scenario that is identical to a test that was performed by the actual system in a real world environment. The results of the two tests are compared and the differences are analyzed to determine if they represent significant deviations between the simulation and the actual system.
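The comparison step of validation can be sketched as follows. The time histories and the acceptance threshold here are made up for illustration: the simulation is run through the same scenario as a recorded system test, and the deviation between the two responses is quantified before judging whether it is significant.

```python
import math

# Hypothetical step-response data: a recorded system test and the
# simulation run through the identical scenario.
measured  = [0.00, 0.52, 0.81, 0.95, 1.02, 1.01, 1.00]
simulated = [0.00, 0.50, 0.79, 0.93, 1.00, 1.00, 1.00]

# RMS deviation between the two time histories.
rms_error = math.sqrt(sum((m - s) ** 2 for m, s in zip(measured, simulated))
                      / len(measured))
print(f"RMS deviation: {rms_error:.4f}")
print("model acceptable" if rms_error < 0.05 else "significant deviation")
```

In practice the deviation analysis is usually richer than a single RMS figure, but the structure, identical scenario, quantified difference, explicit acceptance criterion, is the same.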
A drawback of this approach to validation is that it cannot happen until a complete system prototype is available. Before a full prototype exists, however, it may still be possible to perform partial validation at the component and subsystem level. You can test those system elements in a laboratory environment, duplicate the tests with the simulation, and compare the results; good agreement provides confidence in the validity of the component or subsystem models.
The use of system simulation is common in the control engineering world. If you are unfamiliar with the tools and techniques of simulation, see reference [2] for an introduction to this topic.
The classical control system analysis and design methods discussed in Chapter 4 were originally developed, and have been taught for years, as techniques that rely on hand-drawn sketches. While this approach builds design intuition in the student, it takes significant time and practice to develop the necessary skills.
Since this book intends to enable the reader to rapidly apply a variety of control system design techniques, automated approaches will be emphasized rather than manual methods. Several software packages are commercially available that perform control system analysis and design functions as well as complete nonlinear system simulation. Some examples are listed below.
This book uses MATLAB, the Control System Toolbox, and other MATLAB add-on products to demonstrate a variety of control system modeling, design, and simulation techniques. These tools provide efficient, numerically robust algorithms to solve a variety of control system engineering problems. The MATLAB environment also provides powerful graphics capabilities for displaying the results of control system analysis and simulation procedures.
Feedback control systems measure attributes of the system being controlled and use that information to determine the control actuator signal. Feedback control provides superior performance compared to open loop control when environmental or system parameters change.
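The superiority of feedback under parameter change can be shown with a minimal sketch (the gains and the amount of drift are hypothetical): a first-order plant whose static gain has drifted from its nominal value of 1.0 to 1.3. The open-loop controller applies the input computed for the nominal plant; the feedback controller measures the output and corrects for the drift.

```python
def settle(plant_gain, feedback, r=1.0, n=200):
    """Steady-state output of a first-order plant with a drifted gain."""
    y = 0.0
    for _ in range(n):
        if feedback:
            u = 10.0 * (r - y)   # proportional feedback on the measured output
        else:
            u = r / 1.0          # open loop: input sized for the nominal gain 1.0
        y += 0.1 * (plant_gain * u - y)   # first-order plant, drifted gain
    return y

ol = settle(1.3, feedback=False)
fb = settle(1.3, feedback=True)
print(ol)   # ~1.30: the full 30% gain drift appears as output error
print(fb)   # ~0.93: the loop suppresses most of the drift
```

The open-loop output inherits the entire 30% gain error, while the feedback loop reduces it in proportion to the loop gain; a higher gain or an integrator would shrink it further.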
The system to be controlled is called the plant; several of its characteristics are relevant to the control system design process.
The two fundamental steps in control system design are selecting the structure of the controller and determining suitable values for its parameters.
The control system design process usually involves the iterative application of these two steps. In the first step, a candidate controller structure is selected. In the second step, a design method is used to determine suitable parameter values for that structure. If the resulting system performance is inadequate, the cycle is repeated with a new, usually more complex, controller structure.
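The iterate-structure-then-tune cycle described above can be sketched as follows (the plant, the gains, and the specification are all hypothetical). A proportional controller on a first-order plant leaves a steady-state error; when that fails the performance specification, the cycle repeats with a more complex structure, here a PI controller, whose integrator removes the error.

```python
def run(controller, r=1.0, n=500):
    """Simulate a first-order plant under the given controller."""
    y, state = 0.0, 0.0
    for _ in range(n):
        u, state = controller(r - y, state)
        y += 0.1 * (u - y)               # first-order plant
    return y

def p_ctrl(e, s):                        # structure 1: proportional only
    return 4.0 * e, s

def pi_ctrl(e, s):                       # structure 2: proportional + integral
    return 4.0 * e + 0.5 * (s + e), s + e

spec = lambda y: abs(y - 1.0) < 0.01     # steady-state error specification
print(spec(run(p_ctrl)))    # False: P alone cannot meet this spec
print(spec(run(pi_ctrl)))   # True: the added integrator removes the error
```

This mirrors the design cycle: the first candidate structure is tuned and evaluated against the specification, and only when it proves inadequate is a more complex structure introduced.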
A block diagram of a plant and controller graphically represents the structure of a controller design and its interaction with the plant. It is possible to perform algebraic operations on the components of a block diagram to reduce the diagram to a simpler form.
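A numeric sketch of the block-diagram algebra mentioned above, using static gains for simplicity (transfer functions reduce by the same rules; the numbers are arbitrary): series blocks multiply, parallel blocks add, and a negative-feedback loop with forward gain G and feedback gain H reduces to G/(1 + G*H).

```python
# Block-diagram reduction rules for static gains.
def series(g1, g2):   return g1 * g2          # blocks in cascade
def parallel(g1, g2): return g1 + g2          # blocks summed
def feedback(g, h):   return g / (1 + g * h)  # negative-feedback loop

# Controller gain 5 in series with plant gain 2, closed with unity feedback:
G = series(5.0, 2.0)    # forward path gain: 10
T = feedback(G, 1.0)    # closed-loop gain: 10 / 11
print(T)                # ~0.909: high loop gain pushes the closed loop toward 1
```

The same reductions, applied with transfer functions instead of constants, are what collapse a multi-block diagram into a single closed-loop transfer function.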
Performance specifications guide the design process and provide the means for determining when controller performance is satisfactory. Controller performance specifications can be stated in both the time domain and the frequency domain.
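Two common time-domain specifications, percent overshoot and settling time, can be checked directly against a step-response time history. In this sketch the response samples and the spec limits are hypothetical:

```python
# Hypothetical sampled step response and sample period.
response = [0.00, 0.40, 0.85, 1.15, 1.22, 1.10, 0.97, 0.99, 1.01, 1.00, 1.00]
dt, final = 0.1, 1.0

# Percent overshoot: peak excursion beyond the final value.
overshoot = (max(response) - final) / final * 100.0

# 2% settling time: last instant the response leaves the +/-2% band.
settling_index = max(i for i, y in enumerate(response)
                     if abs(y - final) > 0.02 * final)
settling_time = (settling_index + 1) * dt

print(f"overshoot: {overshoot:.0f}%")
print(f"settling time: {settling_time:.1f} s")
```

Frequency-domain specifications such as bandwidth and phase margin are extracted analogously from the frequency response rather than the time history.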
Stability is a critical issue throughout the control system design process. A stable controller produces appropriate system responses to changes in the reference input. Stability evaluation must be performed as part of the analysis of a controller design.
Testing is an integral part of the control system design process. It is highly desirable to perform thorough control system testing at an early stage of the development cycle, but prototype system hardware may be unavailable at that time. As an alternative, a simulation containing detailed models of the plant and controller is useful for performing early-stage control system testing. When prototype hardware is available, thorough testing of the control system across the intended range of operating conditions is imperative.
Several software packages are commercially available that perform control system analysis and design functions as well as complete nonlinear system simulation. These tools can significantly speed the steps of the control system design and analysis processes. This book will emphasize the application of the MATLAB Control System Toolbox and other MATLAB-related products to control system design, analysis, and system simulation tasks.
3. Artificial Intelligence (AI) and robotics are of interest to practically everyone today. AI is the branch of computer science concerned with making computers behave like humans; the term was introduced in 1956 by John McCarthy [1]. Among the many definitions of AI, we prefer the one given in [2]: AI is the property of a computer or of a neural network of reacting to data in much the same way that a person reacts to information. The monograph [2] describes the theory of AI as the science of agents that perceive their external environment and perform rational actions. AI is shaping the present and future of the technological industry and its equipment. Robotics incorporates many achievements of AI [3]. Robots differ from conventional technical systems in a new property generated by the synergy of their mechanical, electronic, and computer components with AI. This raises the question of how to assess the new qualities of robots equipped with AI.
The problems of intelligent robot control lie at the intersection of control theory and the theory of AI. They largely coincide with the problems of developing intelligent automated industrial management systems (AIMS), which arise at the junction of electronics and informatics as a combination of electronic and computing devices [5]. The development of AIMS draws on control theory, the theory of AI, system theory, and system analysis. The intersection of these theories forms area I (Figure 1), defined as intelligent control [7], or control possessing "intelligence in the small" [6]. Systems implementing such control can be defined as AI systems with intelligence "in the small" [6]. Subarea II, a part of area I in Figure 1, defines control possessing "intelligence in the large" and corresponds to intellectual control [5, 7]. The arrows in Figure 1 illustrate the mutual influence of the three scientific theories, while the numbers 1 through 10 designate concepts and methods transferred from one theory to another, which together form a methodology for solving unformalized (semistructured) problems of controlling complex dynamic systems. For example, arrow 1 in Figure 1 shows that the theory of AI enriched system analysis with data-processing methods, and arrow 4 indicates that techniques of system analysis are used in the theory of AI. The concepts of adaptation and intelligence used in robotics were first validated in control theory. Tsypkin [8] identified three stages in the development of control theory: determinism, stochasticity, and adaptivity. We are now witnessing the development of a fourth stage: intelligence. The first three stages addressed formalized problems, while the fourth focuses on unformalized (semistructured) problems [8].
Systems of intelligent control (SIC) are based on five principles [5, 6]: interaction of the system with its external environment; openness of the system; forecasting of changes in the external environment and in the system's internal behavior; a layered system architecture; and the system's ability to survive a loss of communication with the higher levels of the system structure.