
Control System Analysis, Applied Mathematics - Appd Math

Linearization of non-linear systems.

Introduction

Many components and actuators have non-linear characteristics, and they are effective only while they remain near the operating point at which they behave approximately linearly, which can be a very limited interval. For example, the music we all hear must be amplified by a circuit of electronic devices that amplify the signal faithfully only when operating at the point for which the system was designed to act linearly; proof of this is that the output of the system as a whole is proportional to the input, that is, it behaves as a linear system.

What is linearization? It is expressing a non-linear function or differential equation by an approximate linear version, valid only over a very small range of values of the independent variable; something like representing a quadratic function by the formula of a straight line. To what end? To be able to apply to the system represented by this function all the control techniques for linear systems studied so far. Our objective is to design a strategy for generating a linear equation that represents a non-linear system in a very limited region, a strategy we configure next.

To obtain a linear mathematical model of a non-linear system, it is necessary to assume that the variable to be controlled deviates only slightly from an operating point A with coordinates (xo, f(xo)), where xo is the input to the system and f(xo) is the output. At point A we can place a line with a certain slope and assume that, for small changes δx around xo, the output f(xo + δx) moves along this line, as shown in Figure 2-47.

We can use point A as a new origin of coordinates, where the independent variable δx corresponds to the input to the system and the dependent variable δf(x) represents the output of the system. We make this convenient change of coordinates in order to use the equation of the slope ma of the line in the following way:

ma = δf(x)/δx

Or:

ma ≈ [f(x) − f(xo)] / (x − xo)

And so:

δf(x) ≈ ma·δx

In the same way:

f(x) ≈ f(xo) + ma·(x − xo)

The latter is a linear mathematical approximation for f(x).

This technique allows us to obtain a linear expression for f(x) around the operating point A. Now we combine the expressions obtained for f(x) and δf(x). Another way of thinking about it is that, around the operating point A, f(x) has the value f(xo) plus a small component of value ma·δx along a straight line of slope ma:

f(x) ≈ f(xo) + ma·(x − xo)

where (x − xo) is so small that it approaches δx. Mission accomplished, this is what we will use:

f(x) ≈ f(xo) + ma·δx

What theory allows us to do this? The Taylor series.

Taylor Series

The Taylor series is the expansion of a function f(x) around a particular point xo, in terms of the value of the function and of its derivatives evaluated at that point:

f(x) = f(xo) + (df/dx)|xo·(x − xo)/1! + (d²f/dx²)|xo·(x − xo)²/2! + …

When the excursion around the point xo is small, as in the case that interests us, the higher-order derivatives can be ignored, so:

f(x) ≈ f(xo) + (df/dx)|xo·(x − xo)

Knowing that the slope mx of the curve at the point xo is the derivative of the function evaluated at xo, we can adapt this last equation to our strategy and obtain the formula that interests us:

f(x) ≈ f(xo) + mx·δx

where mx = df/dx evaluated at x = xo. Note that δx is now the independent variable, for which we use only a valid range of values around xo, so that δx is an excursion. Returning to Figure 2-47, this is the key tactic of the linearization process: we have created a coordinate system centered at point A in order to replace the independent variable x with δx. We can keep the δx notation or use any other, more practical one. Let's see how this works through examples.
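Before the worked examples, the first-order formula f(x) ≈ f(xo) + mx·δx can be checked numerically; a minimal sketch in which the function and the operating point are illustrative choices, not taken from the article:

```python
import math

def linearize(f, x0, h=1e-6):
    """Return (f(x0), m) for the first-order model f(x) ~ f(x0) + m*(x - x0)."""
    m = (f(x0 + h) - f(x0 - h)) / (2 * h)  # central-difference estimate of df/dx at x0
    return f(x0), m

# Illustrative function: f(x) = cos(x), linearized around x0 = pi/2
f = math.cos
fx0, m = linearize(f, math.pi / 2)   # exact slope is -sin(pi/2) = -1

dx = 0.01                            # a small excursion around x0
approx = fx0 + m * dx                # linear model
exact = f(math.pi / 2 + dx)          # true value, very close for small dx
```

For small excursions the linear model tracks the true function closely; the error grows with the square of the excursion, which is why the approximation is only valid near the operating point.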

Linearize a function

Suppose we have a system represented by a given function f(x). Our task is to linearize f(x) around xo = π/2. Since:

f(x) ≈ f(xo) + ma·δx

we find f(xo) and the slope ma = df/dx evaluated at xo = π/2, and substitute them into the previous equation. We can then represent our non-linear system by the equation of a line with negative slope.

The result of the linearization of f(x) around xo = π/2 can be seen in Figure 2.48.
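The equation images are missing here; the prose (linearization about xo = π/2, figure numbers 2.47 and 2.48, a resulting line with negative slope) matches a classic textbook example, so the worked sketch below assumes the function f(x) = 5 cos x. Treat that specific function as an assumption:

```latex
% Assumed function: f(x) = 5\cos x, linearized about x_0 = \pi/2
\begin{aligned}
f(x_0) &= 5\cos(\pi/2) = 0, \\
m_a &= \left.\frac{df}{dx}\right|_{x_0} = -5\sin(\pi/2) = -5, \\
f(x) &\approx f(x_0) + m_a\,\delta x = -5\,\delta x = -5\left(x - \tfrac{\pi}{2}\right).
\end{aligned}
```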

Linearize a differential equation

Suppose now that our system is represented by a differential equation containing the term cos x; this term makes it a non-linear equation. We are asked to linearize the equation for small excursions around x = π/4.

To replace the independent variable x with the excursion δx, we take advantage of the fact that:

x = xo + δx,  so  dx/dt = d(δx)/dt  and  d²x/dt² = d²(δx)/dt²

We then substitute these into the differential equation and apply the rules of differentiation. For the term involving the cos x function we apply the same methodology just seen in the previous example for a given function, that is, we linearize f(x) = cos x around xo = π/4:

cos x ≈ cos(π/4) − sin(π/4)·δx = √2/2 − (√2/2)·δx

Note that the excursion δx is zero when the function is evaluated exactly at the point xo; likewise, the slope evaluated at xo is a constant. Therefore, we can rewrite the differential equation in linear fashion around the point xo = π/4.
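The original equation image is missing; the prose (a cos x term, linearization about x = π/4) matches a classic textbook exercise, so the sketch below assumes the equation d²x/dt² + 2·dx/dt + cos x = 0. Treat that specific form as an assumption:

```latex
% Assumed equation: \ddot{x} + 2\dot{x} + \cos x = 0, linearized about x_0 = \pi/4
\begin{aligned}
&\text{Substituting } x = \tfrac{\pi}{4} + \delta x:\quad
\frac{d^2\,\delta x}{dt^2} + 2\,\frac{d\,\delta x}{dt}
 + \cos\!\left(\tfrac{\pi}{4} + \delta x\right) = 0, \\
&\cos\!\left(\tfrac{\pi}{4} + \delta x\right)
 \approx \cos\tfrac{\pi}{4} - \sin\tfrac{\pi}{4}\,\delta x
 = \tfrac{\sqrt{2}}{2} - \tfrac{\sqrt{2}}{2}\,\delta x, \\
&\Rightarrow\quad
\frac{d^2\,\delta x}{dt^2} + 2\,\frac{d\,\delta x}{dt}
 - \tfrac{\sqrt{2}}{2}\,\delta x = -\tfrac{\sqrt{2}}{2}.
\end{aligned}
```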

Linearization of a system with two independent variables

The Taylor series enables us to work with functions or differential equations that have two independent variables. In this case, the Taylor expansion takes the following form:

f(x1, x2) ≈ f(x̄1, x̄2) + (∂f/∂x1)·(x1 − x̄1) + (∂f/∂x2)·(x2 − x̄2) + higher-order terms

where the partial derivatives are evaluated at the operating point of coordinates x̄1 and x̄2. For small excursions around the equilibrium point, we can neglect the higher-order derivatives. The linear mathematical model of this non-linear system around the operating point is then obtained from:

f(x1, x2) − f(x̄1, x̄2) ≈ K1·(x1 − x̄1) + K2·(x2 − x̄2)

where K1 = ∂f/∂x1 and K2 = ∂f/∂x2, both evaluated at (x̄1, x̄2).
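The two-variable expansion can be checked numerically in the same way as the single-variable case; the function below is an illustrative assumption, not from the article:

```python
import math

def linearize2(f, x1b, x2b, h=1e-6):
    """First-order model around (x1b, x2b):
    f ~ f(x1b, x2b) + K1*(x1 - x1b) + K2*(x2 - x2b),
    with K1, K2 the partial derivatives at the operating point."""
    f0 = f(x1b, x2b)
    K1 = (f(x1b + h, x2b) - f(x1b - h, x2b)) / (2 * h)  # central difference in x1
    K2 = (f(x1b, x2b + h) - f(x1b, x2b - h)) / (2 * h)  # central difference in x2
    return f0, K1, K2

# Illustrative function: f(x1, x2) = x1*cos(x2), operating point (2, pi/4)
f = lambda x1, x2: x1 * math.cos(x2)
f0, K1, K2 = linearize2(f, 2.0, math.pi / 4)
# Exact values: f0 = sqrt(2), K1 = cos(pi/4), K2 = -2*sin(pi/4) = -sqrt(2)
```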

Example. Linearization of a system with two independent variables.

Linearization of magnetic sphere levitation system.

The magnetic suspension system of a sphere is shown in Figure 1.

The objective of the system is to control the position of the steel sphere by adjusting the current in the electromagnet through the input voltage e(t). The dynamics of the system are represented by a set of non-linear differential equations. It is requested to linearize the system around its equilibrium point.

See the complete answer in the following link: Example 2 – Linearization of a Magnetic Levitation (MAGLEV) system – sphere. 

Literature review by:

Prof. Larry Francis Obando – Technical Specialist – Educational Content Writer

We do assignments and solve exercises!!

WhatsApp: +34633129287 Immediate attention!!

Twitter: @dademuch

Copywriting, Content Marketing, Theses, Monographs, Academic Papers, White Papers (Spanish – English)

Escuela de Ingeniería Electrónica de la Universidad Simón Bolívar, USB Valle de Sartenejas.

Escuela de Ingeniería Eléctrica de la Universidad Central de Venezuela, UCV CCs.

Escuela de Turismo de la Universidad Simón Bolívar, Núcleo Litoral.

Contact: Spain. +34633129287

Caracas, Quito, Guayaquil, Cuenca. 

WhatsApp:  +34633129287   +593998524011  

Twitter: @dademuch

FACEBOOK: DademuchConnection

email: dademuchconnection@gmail.com

Control System Analysis, Time Domain

Example 1 – Transient response of an electromechanical system.

The mechanical system shown in Figure P5.52(a) is used as part of the unity feedback system shown in Figure P5.52(b). Find the values of M and D to yield 20% overshoot and 2 seconds settling time.

1. System Dynamics

where:

2. Laplace Transform

3. Motor & Load Transfer Function (θm(s)/Ea(s))

4. Direct Transfer Function

For the system:

The open-loop transfer function Ga(s) is:

5. Closed-loop transfer function

With unity feedback, the closed-loop transfer function Gc(s) is:

Gc(s) = Ga(s) / (1 + Ga(s))
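The unity-feedback closure used in steps 4 and 5 can be sketched on polynomial coefficients: with Ga(s) = N(s)/D(s), the closed loop is N(s)/(D(s) + N(s)). The numbers below are illustrative, not the problem's actual motor parameters:

```python
def close_unity_feedback(num, den):
    """Close a unity-feedback loop around Ga(s) = N(s)/D(s).
    Coefficient lists are highest power first; assumes deg(N) <= deg(D)."""
    # pad the numerator with leading zeros so the two lists line up
    pad = [0.0] * (len(den) - len(num)) + list(num)
    closed_den = [d + n for d, n in zip(den, pad)]
    return list(num), closed_den

# Illustrative open loop: Ga(s) = 10 / (s^2 + 2s)
num, den = close_unity_feedback([10.0], [1.0, 2.0, 0.0])
# Closed loop: 10 / (s^2 + 2s + 10)
```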

6. Calculation of M and D

From the overshoot specification we obtain the required damping ratio ζ, and from the settling-time specification the product ζωn; substituting these into the closed-loop transfer function yields the values of M and D.

7. Matlab verification

We use Matlab to corroborate the design, substituting all the calculated values into the original transfer function and checking the step-response data against the requirement of 20% overshoot and 2 seconds settling time:

>> stepinfo (sys)

RiseTime: 0.3554

SettlingTime: 1.8989

SettlingMin: 0.9331

SettlingMax: 1.1999

Overshoot: 19.9890

Undershoot: 0

Peak: 1.1999

PeakTime: 0.8059
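The stepinfo figures can be cross-checked against the design targets; a minimal sketch, assuming the standard second-order relations ζ = −ln(OS)/√(π² + ln²OS) and Ts ≈ 4/(ζ·ωn):

```python
import math

# Design targets: 20% overshoot and Ts = 2 s (2% criterion)
OS = 0.20
zeta = -math.log(OS) / math.sqrt(math.pi ** 2 + math.log(OS) ** 2)  # ~0.456
wn = 4.0 / (zeta * 2.0)                                             # from Ts ~ 4/(zeta*wn)

# Step response of the prototype wn^2 / (s^2 + 2*zeta*wn*s + wn^2),
# integrated with a simple semi-implicit Euler scheme
dt, T = 1e-4, 6.0
x = v = t = 0.0
ts, ys = [], []
while t < T:
    v += dt * (wn * wn * (1.0 - x) - 2.0 * zeta * wn * v)
    x += dt * v
    t += dt
    ts.append(t)
    ys.append(x)

overshoot = max(ys) - 1.0                                            # close to 0.20
settling = max(ti for ti, y in zip(ts, ys) if abs(y - 1.0) > 0.02)   # last exit from 2% band
```

The simulated overshoot lands near 20% and the settling time near 2 s, consistent with the stepinfo values of 19.99% and 1.90 s for the designed system.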

Written by Larry Francis Obando – Technical Specialist – Educational Content Writer


Control System Analysis, Electromechanical Systems

Solved Example 2 – Electromechanical system transfer function.

Find in generic terms, the transfer function of the unit feedback system shown in Figure P5.52 (b) of which the electromechanical system of Figure P5.52 (a) is a part.

1. System Dynamics

where:

2. Laplace Transform

3. Motor & Load Transfer Function (θm(s)/Ea(s))

4. Direct Transfer Function

For the system:

The open-loop transfer function Ga(s) is:

5. Closed-loop transfer function

With unity feedback, the closed-loop transfer function Gc(s) is:

Gc(s) = Ga(s) / (1 + Ga(s))

This problem is the first part of one in which a transient response with 20% overshoot and a 2-second settling time is requested; see the complete problem at the following link: Example 1 – Transient response of an electromechanical system

Written by Larry Francis Obando – Technical Specialist – Educational Content Writer


Control System Analysis, Electromechanical Systems

Solved Example 1- Electromechanical System Transfer Function

Obtain the mathematical model of the position control system of the figure. Obtain the block diagram and the transfer function between the load angle and the reference angle, θc(s)/θr(s).

Data:

1. System dynamics

2. Laplace Transform

3. Block Diagram

Simplifying conveniently, we obtain a model whose transfer function is known.

4. Transfer function of each block of the previous diagram.

Starting from the system equations and their Laplace transforms, we obtain the transfer function of each block; substituting the values given in the data and simplifying yields the corresponding gains. On the other hand, the gain of the amplifier is obtained from its input-output relation. Finally, the gear ratio is given in the data as n = 1/10. We thus obtain a block diagram in which every block has a known transfer function.

5. System Transfer function.

The open-loop transfer function Ga(s) of the system is the product of the block transfer functions in the previous diagram. From it we can easily obtain the closed-loop transfer function Gc(s), which is what the statement asked for, using the unity feedback:

Gc(s) = Ga(s) / (1 + Ga(s))

NEXT: Example 2 – Electromechanical system transfer function (English)

Written by: Larry Francis Obando – Technical Specialist – Educational Content Writer.


Control System Analysis, PID Control

PID – Controller design and configuration.

Controller configuration

The PID is one of the most widely used controllers in the following compensation schemes, some of which are illustrated in Figure 10-2:

  1. Series (cascade) compensation
  2. Feedback compensation
  3. State-feedback compensation
  4. Series-feedback compensation
  5. Feedforward compensation

In general, the dynamics of a linear controlled process can be represented by the diagram of Figure 10-1.

The design objective is for the controlled variables, represented by the output vector y(t), to behave in a certain desired way. The problem essentially involves determining the control signal u(t) within a prescribed interval so that all the design specifications are satisfied (see Stability of a control system, Transient response, Steady-state error).

The PID controller applies to the controlled process of Figure 10-1 a signal that is a combination of the proportional, integral and derivative of the actuating signal e(t). Because these signal components are easily realized and visualized in the time domain, PID controllers are commonly designed using time-domain methods.

After the designer has selected a configuration for the controller, he must also choose the type of controller. In general, the more complex the controller, the more expensive, the less reliable and the harder to design it is. In practice, therefore, the simplest controller type that meets the design specifications is selected, a choice that involves experience, intuition, art and science.

The integral and derivative components of a PID controller each have an individual effect on performance, and applying them requires an understanding of the basics of these elements. For this reason they are considered separately, starting with the PD portion.

Design with the PD controller

 

Figure 10-3 shows the block diagram of a feedback control system that deliberately has a second-order prototype plant with the following transfer function Gp(s):

Gp(s) = ωn² / [s(s + 2ζωn)]

The series controller is of the proportional-derivative (PD) type, with transfer function:

Gc(s) = Kp + Kd·s

Therefore, the control signal U(s) applied to the plant is:

U(s) = (Kp + Kd·s)·E(s)

where Kp and Kd are the proportional and derivative constants, respectively, and E(s) is the error signal. A realization of the PD controller with electronic circuits is shown in Figure 10-4; comparing the transfer function of that circuit with the controller of Figure 10-3 identifies Kp and Kd in terms of the circuit components.

The forward-path transfer function of the compensated system shown in Figure 10-3 is:

G(s) = Gc(s)·Gp(s) = ωn²(Kp + Kd·s) / [s(s + 2ζωn)]

which shows that PD control is equivalent to adding a simple zero at s = −Kp/Kd to the forward-path transfer function. The effect of PD control on the transient response of a control system can be investigated by referring to the time responses shown in Figure 10-5.

Suppose that the unit-step response of a stable system with proportional control alone is the one shown in Figure 10-5(a): a relatively large maximum overshoot and a somewhat oscillatory response. The corresponding error signal e(t), which is the difference between the unit-step input r(t) and the output y(t), and the time derivative of that error, are shown in Figures 10-5(b) and (c), respectively.

During the interval 0 < t < t1, the error signal is positive, the overshoot is large and the output oscillates considerably owing to the lack of damping in this period. During the interval t1 < t < t2, the error signal is negative, the output reverses and has a negative overshoot. This behavior alternates, the error amplitude decreasing with each oscillation, until the output eventually settles at its final value. We observe that the PD controller can add damping to a system and reduce the maximum overshoot, but it does not directly affect the steady state.

Example

To better appreciate the effect of the PD controller, consider the following example. Suppose we have the system of Figure 7-23.

The forward-path transfer function G(s) of this system is given by the following expression:

G(s) = 4500K / [s(s + 361.2)]

where K is the preamplifier constant.

The design specifications for this system are the following:

ess ≤ 0.000433,  Mp ≤ 5%,  Tr ≤ 0.005 s,  Ts ≤ 0.005 s

where:

  • ess: steady-state error due to a unit-ramp input
  • Mp: maximum overshoot
  • Tr: rise time
  • Ts: settling time

1. Selection of the value of K

The first thing we do is find K to meet the first design requirement, the steady-state error ess due to a ramp input.

(To review the concept of steady-state error, see Steady-state error of a control system.)

1.a Find the velocity constant Kv, since it is the one related to a ramp input:

Kv = lim (s→0) s·G(s) = 4500K / 361.2

1.b Find ess as a function of K:

ess = 1/Kv = 361.2 / (4500K)

1.c Find K for ess = 0.000433:

K = 361.2 / (4500 × 0.000433) = 185.4503

With this value of K, the forward-path transfer function G(s) is:

G(s) = 834526.56 / [s(s + 361.2)]

2. Overshoot calculation

Let us now see what the overshoot is for the value of K obtained.

(For a review of overshoot and the transient response, see Transient response of a control system.)

2.a The closed-loop transfer function Gce(s) is:

Gce(s) = 834526.56 / (s² + 361.2s + 834526.56)

2.b From here we find the relative damping factor ζ and the natural frequency ωn of the system:

ωn = √834526.56 = 913.5,  2ζωn = 361.2  ⇒  ζ = 0.1977

2.c With these values, we find the maximum overshoot Mp:

Mp = exp(−ζπ/√(1 − ζ²)) ≈ 0.53

In percentage: Mp ≈ 53%.

This value exceeds the specification, so we consider inserting a PD controller in the forward path of the system in order to improve the damping and bring the maximum overshoot down to the required design specification, while keeping the steady-state error at 0.000433.

3. Time-domain design of the PD controller

Adding the controller Gc(s) of Figure 10-3 to the forward path and setting K = 185.4503, the forward-path transfer function G(s) of the aircraft position control system becomes:

G(s) = 834526.56(Kp + Kd·s) / [s(s + 361.2)]

Meanwhile, the closed-loop transfer function Gce(s) is:

Gce(s) = 834526.56(Kp + Kd·s) / [s² + (361.2 + 834526.56Kd)s + 834526.56Kp]

This last equation shows the effects of the PD controller on the closed-loop transfer function of the system to which it is applied:

  1. It adds a zero at s = −Kp/Kd.
  2. It increases the "damping term", which is the coefficient of s in the denominator of Gce(s), from 361.2 to 361.2 + 834526.56Kd.

3.a Selection of Kp

To make sure the steady-state error for a ramp input stays within the specification, we evaluate this error and select a value for Kp:

Kv = lim (s→0) s·G(s) = 834526.56·Kp / 361.2

By choosing Kp equal to one, we keep the same value of Kv we had before adding the controller; that is, we keep the steady-state error for a ramp input as the design specification requires. Then, with Kp = 1:

Kv = 834526.56 / 361.2 = 2310.4  ⇒  ess = 1/Kv = 0.000433

3.b Selection of Kd

According to the maximum-overshoot equation:

Mp = exp(−ζπ/√(1 − ζ²))

the maximum overshoot depends on the relative damping factor ζ. The characteristic equation of the system is:

s² + (361.2 + 834526.56Kd)s + 834526.56 = 0

where:

ωn = √834526.56 = 913.5,  2ζωn = 361.2 + 834526.56Kd

From this we deduce the expression for the relative damping factor ζ:

ζ = (361.2 + 834526.56Kd) / 1827.1

This result clearly shows the positive effect of Kd on the damping. However, it should be emphasized that the forward-path transfer function G(s) no longer represents a second-order prototype system, so the transient response will also be affected by the zero at s = −Kp/Kd.
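The damping expression can be checked numerically; a sketch using only the coefficients stated in the text (361.2 and 834526.56, with Kp = 1):

```python
import math

# Characteristic equation (Kp = 1): s^2 + (361.2 + 834526.56*Kd)*s + 834526.56 = 0
wn = math.sqrt(834526.56)            # natural frequency, ~913.5 rad/s

def zeta(Kd):
    # 2*zeta*wn is the coefficient of s in the characteristic equation
    return (361.2 + 834526.56 * Kd) / (2.0 * wn)

def Kd_for(z):
    # invert zeta(Kd) for a required damping factor z
    return (2.0 * wn * z - 361.2) / 834526.56

z0 = zeta(0.0)        # proportional control only, ~0.198
z1 = zeta(0.00108)    # gain quoted for the 5%-overshoot requirement, ~0.69
kd = Kd_for(0.8757)   # gain required by the settling-time analysis, ~0.00148
```

The computed values reproduce the ones quoted in the text: Kd = 0.00108 gives ζ ≈ 0.69, and ζ = 0.8757 requires Kd ≈ 0.00148.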

We now apply the root-locus method to the characteristic equation to examine the effect of varying Kd while keeping Kp = 1 constant.

(For a review, see The root locus of a control system – part 1, and The root locus with Matlab.)

If we want Mp = 5% as required by the design specifications, we need a relative damping factor of:

ζ = 0.69

The characteristic equation of the system and its 1 + G(s)H(s) form are:

s² + 361.2s + 834526.56 + 834526.56Kd·s = 0  ⇒  1 + Kd·(834526.56s)/(s² + 361.2s + 834526.56) = 0

Using the following commands in Matlab we obtain the root locus for G(s)H(s):

>> s=tf('s')

>> sys=(834526.56*s)/(s^2+361.2*s+834526.56)

>> rlocus(sys)

The root-locus plot shows how the relative damping factor ζ improves as the gain Kd increases. It also shows that, to achieve a relative damping factor ζ = 0.69 or better, which means an overshoot below the specified 5%, a minimum gain Kd = 0.00108 is necessary.

However, before selecting a definitive value for Kd, we must check compliance with the other design requirements.

3.c Evaluation of Tr and Ts for the calculated Kd and Kp.

We next analyze the rise time Tr for ζ = 0.69, Kd = 0.00108 and Kp = 1, using the closed-loop transfer function Gce(s) of the system and the step-response plot generated by the following Matlab commands:

>> s=tf('s')

>> sys=(834526.56*(1+0.00108*s))/(s^2+(361.2+834526.56*0.00108)*s+834526.56)

sys = (901.3 s + 8.345e05) / (s^2 + 1262 s + 8.345e05)

>> step(sys)

Using the plot of the output c(t) of the system to a step input for the given relative damping factor (ζ = 0.69), we find Tr by subtracting the times at which c(t) = 0.9 and c(t) = 0.1:

Tr = t(c = 0.9) − t(c = 0.1)

The value read from the plot meets the requirement Tr ≤ 0.005 s. Let us now see what happens with Ts. Using the 2% criterion, we can calculate Ts with the following formula:

Ts = 4 / (ζωn)

So we see that the damping factor ζ = 0.69 yields a Ts that does not meet the condition Ts ≤ 0.005 s. However, by increasing Kd we improve ζ and can satisfy this condition. To be more specific, we solve for ζ from the maximum accepted value of Ts:

Ts = 4/(ζωn) = 0.005  ⇒  ζ = 4/(0.005 × 913.5) = 0.8757
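This settling-time check can be reproduced numerically; a sketch using the coefficients from the text for Kd = 0.00108:

```python
import math

# Ts ~ 4/(zeta*wn) with the 2% criterion; coefficients from the text
wn = math.sqrt(834526.56)                      # ~913.5 rad/s
sigma = (361.2 + 834526.56 * 0.00108) / 2.0    # zeta*wn for Kd = 0.00108
Ts = 4.0 / sigma                               # ~0.0063 s, which violates Ts <= 0.005 s
zeta_req = (4.0 / 0.005) / wn                  # damping needed so that Ts = 0.005 s, ~0.8757
```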

We again use the root locus to determine the value of Kd corresponding to ζ = 0.8757:

Kd = 0.00148

With Kd = 0.00148 and keeping Kp = 1, the forward-path transfer function is:

G(s) = 834526.56(1 + 0.00148s) / [s(s + 361.2)]

Meanwhile, the closed-loop transfer function of the system under study is the following:

Gce(s) = 834526.56(1 + 0.00148s) / (s² + 1596.3s + 834526.56)

For this transfer function we check the overshoot Mp and the rise time Tr to make sure they meet the design specifications.

Therefore, Kd must have a minimum value of 0.00148, and our PD controller can then have the following transfer function:

Gc(s) = 1 + 0.00148s

PREVIOUS: PID – Effect of the Proportional, Integral and Derivative control actions

Written by: Larry Francis Obando – Technical Specialist – Educational Content Writer.


Control System Analysis, PID Control

PID – Basic Control System Actions

BEFORE: Steady-state error of a control system

NEXT: PID – Effect of integral and derivative control actions.

Introduction

An automatic controller compares the real value of the output of a plant with the reference input (the desired value), determines the deviation, and produces a control signal that reduces the deviation to zero or to a small value. The way in which the automatic controller produces the control signal is called the control action.

Classification of industrial controls

According to their control actions, industrial controllers are classified as:

  1. Two-position (on/off)
  2. Proportional
  3. Integral
  4. Proportional-integral
  5. Proportional-derivative
  6. Proportional-integral-derivative

Almost all industrial controllers use electricity or a pressurized fluid, such as oil or air, as an energy source. Controllers can also be classified, according to the type of energy they use in their operation, as pneumatic, hydraulic or electronic. The type of controller to use must be decided based on the nature of the plant and the operating conditions, including considerations such as safety, cost, availability, reliability, precision, weight and size.

Figure 5-1 shows a typical configuration for an Industrial Control System:

The previous figure is a block diagram of an industrial control system composed of an automatic controller, an actuator, a plant and a sensor (measuring element). The controller detects the error signal, which is usually at a very low power level, and amplifies it to a sufficiently high level. The output of the automatic controller feeds an actuator, which can be a pneumatic valve or an electric motor. The actuator is a power device that produces the input to the plant according to the control signal, so that the output signal approaches the reference input signal.

The sensor, or measuring element, is a device that converts an output variable, such as a displacement, into another manageable variable, such as a voltage, that can be used to compare the output with the reference input. This element is in the feedback path of the closed-loop system. The setpoint of the controller must be converted into a reference input with the same units as the feedback signal from the sensor or measuring element.

Two positions control (On / Off).

In a two-position control system, the actuating element has only two fixed positions, which in many cases are simply on and off. On/off control is relatively simple and cheap, which is why it is extensively used in industrial and domestic control systems.

Suppose that the output signal of the controller is u(t) and that the error signal is e(t). In two-position control, the signal u(t) remains at either a maximum or a minimum value, depending on whether the error signal is positive or negative, so that:

u(t) = U1, for e(t) > 0
u(t) = U2, for e(t) < 0

where U1 and U2 are constants. Very often, the minimum value U2 is either zero or −U1.

It is common for two-position controllers to be electrical devices, in which case a solenoid-operated electric valve is widely used. Pneumatic proportional controllers with very high gains also function as two-position controllers and are sometimes called two-position pneumatic controllers.

Figures 5-3(a) and (b) show block diagrams for two two-position controllers. The range through which the error signal must move before switching occurs is called the differential gap; one is indicated in Figure 5-3(b). Such a gap causes the controller output u(t) to retain its present value until the error signal has moved slightly beyond zero. In some cases, the differential gap is the result of unintentional friction and lost motion; however, it is often caused intentionally to avoid too-frequent operation of the on/off mechanism.
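The on/off behavior with a differential gap can be sketched as a controller with hysteresis; the thresholds and output levels below are illustrative assumptions:

```python
def make_onoff(U1, U2, gap):
    """Two-position controller with a differential gap around zero error."""
    state = {"u": U2}
    def controller(e):
        # switch only after the error leaves the gap; hold the value inside it
        if e > gap / 2.0:
            state["u"] = U1
        elif e < -gap / 2.0:
            state["u"] = U2
        return state["u"]
    return controller

ctrl = make_onoff(U1=1.0, U2=0.0, gap=0.2)
outs = [ctrl(e) for e in (0.5, 0.05, -0.05, -0.5, -0.05, 0.05)]
# the output holds its value while the error stays inside the gap:
# [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
```

Without the gap, the small errors around zero would make the output chatter between U1 and U2 on every sample, which is exactly the frequent switching the gap is meant to avoid.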

Proportional control action.

For a controller with proportional control action, the relationship between the controller output u(t) and the error signal e(t) is:

u(t) = Kp·e(t)

or, in Laplace-transformed quantities:

U(s)/E(s) = Kp

where Kp is called the proportional gain.

Whatever the actual mechanism and the form of the operating power, the proportional controller is, in essence, an amplifier with an adjustable gain. A block diagram of such a controller is presented in Figure 5-6.

Integral control action.

In a controller with integral control action, the value of the controller output u(t) changes at a rate proportional to the error signal e(t). That is to say,

du(t)/dt = Ki·e(t)

or:

u(t) = Ki ∫ e(t) dt

where Ki is an adjustable constant. The transfer function of the integral controller is:

U(s)/E(s) = Ki/s

If the value of e(t) doubles, the value of u(t) varies twice as fast. For zero error, the value of u(t) remains stationary. The integral control action is sometimes called reset control. Figure 5-7 shows a block diagram of such a controller.

 

Integral-proportional control action.
The control action of a proportional-integral controller (PI) is defined by:

u(t) = Kp·e(t) + (Kp/Ti) ∫ e(t) dt

or the transfer function of the controller, which is:

U(s)/E(s) = Kp ( 1 + 1/(Ti·s) )

where Kp is the proportional gain and Ti is called integral time. Both Kp and Ti are adjustable. Integral time adjusts the integral control action, while a change in the value of Kp affects the integral and proportional parts of the control action.

The inverse of the integral time Ti is called the reset rate. The reset rate is the number of times per minute that the proportional part of the control action is duplicated, and it is measured in repetitions per minute. Figure 5-8(a) shows a block diagram of a PI controller. If the error signal e(t) is a unit-step function, as shown in Figure 5-8(b), the controller output u(t) becomes what is shown in Figure 5-8(c).

[Figure 5-8: (a) block diagram of a PI controller; (b) unit-step error input; (c) resulting controller output]
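For a unit-step error, the PI law gives u(t) = Kp(1 + t/Ti): the proportional part appears at once, and the integral part "repeats" it once every Ti seconds, which is the sense of the reset rate measured in repetitions per minute. A numerical sketch, with illustrative values of Kp and Ti:

```python
# PI controller output for a unit-step error signal, integrated numerically
# (forward Euler) and compared with the closed form u(t) = Kp*(1 + t/Ti).
# Kp and Ti are arbitrary illustrative values.
Kp, Ti = 2.0, 0.5
dt, T = 1e-4, 2.0

integral = 0.0
t = 0.0
while t < T:
    e = 1.0                      # unit-step error
    integral += e * dt           # running integral of the error
    t += dt
u_numeric = Kp * 1.0 + (Kp / Ti) * integral

u_closed = Kp * (1.0 + T / Ti)   # = 2*(1 + 4) = 10.0
print(round(u_numeric, 2), u_closed)
```

At t = Ti = 0.5 s the integral term alone equals Kp, i.e. the proportional contribution has been repeated once.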

Proportional-derivative control action.
The control action of a proportional-derivative (PD) controller is defined by:

u(t) = Kp·e(t) + Kp·Td·(de(t)/dt)

The transfer function is:

U(s)/E(s) = Kp ( 1 + Td·s )

where Kp is the proportional gain and Td is a constant called the derivative time. Both Kp and Td are adjustable. The derivative control action, sometimes called rate control, is where the magnitude of the controller output is proportional to the rate of change of the error signal. The derivative time Td is the time interval by which the rate action advances the effect of the proportional control action.

Figure 5-9(a) shows a block diagram of a PD controller. If the error signal e(t) is a unit-ramp function, as shown in Figure 5-9(b), the controller output u(t) becomes that shown in Figure 5-9(c). The derivative control action has an anticipatory character. However, it is obvious that a derivative control action can never anticipate an action that has not yet taken place.

[Figure 5-9: (a) block diagram of a PD controller; (b) unit-ramp error input; (c) resulting controller output]
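The "advance" interpretation of Td can be checked numerically: for a unit-ramp error e(t) = t, the PD output Kp·e(t) + Kp·Td·ė(t) equals Kp(t + Td), the value a purely proportional controller would produce Td seconds later. The values of Kp and Td below are illustrative:

```python
# PD output for a unit-ramp error e(t) = t, with the derivative estimated
# by a backward difference. Kp and Td are illustrative values.
Kp, Td = 3.0, 0.4
dt = 1e-4

def e(t):                             # unit-ramp error signal
    return t

t = 1.0
de = (e(t) - e(t - dt)) / dt          # ~ 1 for a unit ramp
u = Kp * e(t) + Kp * Td * de          # PD control law
print(round(u, 3))                    # ~ Kp*(t + Td) = 3*(1.0 + 0.4) = 4.2
print(Kp * (t + Td))                  # proportional output Td seconds ahead
```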

Although the derivative control action has the advantage of being anticipatory, it has the disadvantages that it amplifies noise signals and may cause a saturation effect in the actuator. Note that the derivative control action is never used alone, because it is effective only during transient periods.

Proportional-Integral-derivative (PID) control action.

The combination of a proportional control action, an integral control action and a derivative control action is called proportional-integral-derivative (PID) control action.

This combined action has the advantages of each of the three individual control actions. The equation of a controller with this combined action is obtained by:

u(t) = Kp·e(t) + (Kp/Ti) ∫ e(t) dt + Kp·Td·(de(t)/dt)

The transfer function is:

U(s)/E(s) = Kp ( 1 + 1/(Ti·s) + Td·s )

where Kp is the proportional gain, Ti is the integral time and Td is the derivative time. The block diagram of a PID controller appears in Figure 5-10 (a). If e (t) is a unit ramp function, like the one shown in Fig. 5-10 (b), the controller output u (t) becomes that of Fig. 5-10 (c).

[Figure 5-10: (a) block diagram of a PID controller; (b) unit-ramp error input; (c) resulting controller output]
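A minimal discrete-time sketch of the combined law in the standard (Kp, Ti, Td) form used above, with a forward-Euler integral and a backward-difference derivative; the gains and sample time are illustrative, not from the source:

```python
# Minimal discrete PID sketch in standard form:
# u = Kp * ( e + (1/Ti) * integral(e) + Td * de/dt ).
class PID:
    def __init__(self, Kp, Ti, Td, dt):
        self.Kp, self.Ti, self.Td, self.dt = Kp, Ti, Td, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt                  # forward Euler
        derivative = (error - self.prev_error) / self.dt  # backward difference
        self.prev_error = error
        return self.Kp * (error
                          + self.integral / self.Ti
                          + self.Td * derivative)

pid = PID(Kp=1.0, Ti=2.0, Td=0.1, dt=0.01)
u = pid.update(1.0)   # first sample of a unit-step error
print(u)
```

Note the large derivative contribution on this first sample (the error jumps from 0 to 1): this is the amplification of abrupt changes the text warns about, and one reason the derivative action is never used alone.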

Effects of the sensor on the performance of the system.

Since the dynamic and static characteristics of the sensor or measuring element affect the indication of the actual value of the output variable, the sensor plays an important role in determining the overall performance of the control system. The sensor usually determines the transfer function in the feedback path. If the time constants of the sensor are negligible compared with the other time constants of the control system, the transfer function of the sensor simply becomes a constant. Figures 5-11(a), (b), and (c) show block diagrams of automatic controllers with a first-order sensor, an overdamped second-order sensor, and an underdamped second-order sensor, respectively. The response of a thermal sensor is often of the overdamped second-order type.

[Figure 5-11: automatic controllers with (a) a first-order sensor, (b) an overdamped second-order sensor, and (c) an underdamped second-order sensor]
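The remark about negligible sensor time constants can be illustrated by simulating a first-order sensor, tau·dy/dt + y = u: when tau is small compared with the time scale of interest, the sensor output settles almost immediately and the sensor behaves like a constant gain. The tau values below are illustrative:

```python
# First-order sensor  tau * dy/dt + y = u  driven by a unit step,
# integrated with forward Euler. tau values are illustrative.
def sensor_step_response(tau, t_end, dt=1e-4):
    y = 0.0
    t = 0.0
    while t < t_end:
        y += dt * (1.0 - y) / tau    # dy/dt = (u - y)/tau with u = 1
        t += dt
    return y

print(round(sensor_step_response(tau=0.001, t_end=0.05), 4))  # fast sensor: ~ 1.0
print(round(sensor_step_response(tau=1.0,   t_end=0.05), 4))  # slow sensor: ~ 0.05, far from 1
```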

BEFORE:  Steady-State error control system

NEXT: PID – Effect of integrative and derivative control actions.

Source:

  1. Ingenieria de Control Moderna, 3° ED. – Katsuhiko Ogata pp 211-232

Literature review by Larry Francis Obando – Technical Specialist – Educational Content Writer

Copywriting, Content Marketing, Tesis, Monografías, Paper Académicos, White Papers (Español – Inglés)

Escuela de Ingeniería Eléctrica de la Universidad Central de Venezuela, CCs.

Escuela de Ingeniería Electrónica de la Universidad Simón Bolívar, Valle de Sartenejas.

Escuela de Turismo de la Universidad Simón Bolívar, Núcleo Litoral.

Contact: Caracas, Quito, Guayaquil, Cuenca. telf – 0998524011

WhatsApp: +593984950376

email: dademuchconnection@gmail.com

Control System Analysis, Electronic Engineer, PID, PID Control

Basic Control Actions in Control Systems – PID

An automatic controller compares the actual value of the plant output with the reference input (the desired value), determines the deviation, and produces a control signal that reduces the deviation to zero or to a small value. The way in which the automatic controller produces the control signal is called the control action.

Classification of industrial controllers.

According to their control actions, industrial controllers are classified as:

  1. Two-position or on-off controllers
  2. Proportional controllers
  3. Integral controllers
  4. Proportional-integral (PI) controllers
  5. Proportional-derivative (PD) controllers
  6. Proportional-integral-derivative (PID) controllers

Almost all industrial controllers use electricity or a pressurized fluid, such as oil or air, as a power source. Controllers may also be classified, according to the kind of power they use in their operation, as pneumatic, hydraulic, or electronic. The kind of controller to use must be decided based on the nature of the plant and the operating conditions, including such considerations as safety, cost, availability, reliability, accuracy, weight, and size.

Figure 5-1 shows the typical configuration of an industrial control system:

The preceding figure is a block diagram of an industrial control system consisting of an automatic controller, an actuator, a plant, and a sensor (measuring element). The controller detects the error signal, which is usually at a very low power level, and amplifies it to a sufficiently high level. The output of the automatic controller feeds an actuator, which may be a pneumatic valve or an electric motor. The actuator is a power device that produces the input to the plant according to the control signal, so that the output signal approaches the reference input signal. The sensor, or measuring element, is a device that converts the output variable, such as a displacement, into another manageable variable, such as a voltage, that can be used to compare the output with the reference input signal. This element is in the feedback path of the closed-loop system. The set point of the controller must be converted into a reference input with the same units as the feedback signal from the sensor or measuring element.

Feedback control systems are difficult to understand from a purely qualitative point of view, so that understanding depends heavily on mathematics. The root locus is the graphical technique that gives us this qualitative description of the performance of the control system we are designing. To go straight to the practice of control system analysis and design, see: El lugar geométrico de las raíces con Matlab.

Two-position or on-off control action.

In a two-position control system, the actuating element has only two fixed positions, which in many cases are simply on and off. Two-position or on-off control is relatively simple and inexpensive, and for this reason it is widely used in both industrial and domestic control systems.

Suppose that the controller output signal is u(t) and that the error signal is e(t). In two-position control, the signal u(t) remains at either a maximum or a minimum value, depending on whether the error signal is positive or negative, so that:

u(t) = U1, for e(t) > 0
u(t) = U2, for e(t) < 0

where U1 and U2 are constants. The minimum value U2 is usually either zero or −U1.


To see the effects of applying a proportional block to a control system, see: Control Proporcional de un Sistema de Control – PID.


To see the effects of applying an integrator block to a control system, see: PID – Estudio de la acción integral con Matlab.


For a PI controller design example, see: Ejemplo 1 – Diseño de un controlador PI (Proporcional-Integral) – Matlab.


For a PD controller design example, see: Ejemplo 1 – Diseño de un controlador PD (Proporcional-Diferencial).



Control System Analysis, Time Domain

Steady-State error – Control Systems

Errors in a control system can be attributed to many factors. Changes in the reference input will cause unavoidable errors during transient periods and may also cause steady-state errors. Imperfections in the system components, such as static friction, backlash, and amplifier drift, as well as aging or deterioration, will cause errors at steady state. In this section, however, we shall not discuss errors due to imperfections in the system components. Rather, we shall investigate a type of steady-state error that is caused by the incapability of a system to follow particular types of inputs.

Steady-state error is the difference between the input and the output for a prescribed test input as time tends to infinity. Test inputs used for steady-state error analysis and design are summarized in Table 7.1. In order to explain how these test signals are used, let us assume a position control system, where the output position follows the input commanded position.

Step inputs represent constant position and thus are useful in determining the ability of the control system to position itself with respect to a stationary target. An antenna position control is an example of a system that can be tested for accuracy using step inputs.

Ramp inputs, with their linearly increasing amplitude, represent constant-velocity inputs to a position control system. These waveforms can be used to test a system's ability to follow a linearly increasing input or, equivalently, to track a constant-velocity target. An example is a position control system that tracks a satellite moving across the sky at a constant angular velocity.

Parabolic inputs, whose second derivatives are constant, represent constant-acceleration inputs to position control systems and can be used to represent accelerating targets, such as a missile.

Any physical control system inherently suffers steady-state error in response to certain types of inputs. A system may have no steady-state error to a step input, but the same system may exhibit nonzero steady-state error to a ramp input. (The only way we may be able to eliminate this error is to modify the system structure.) Whether a given system will exhibit steady-state error for a given type of input depends on the type of open-loop transfer function of the system.

Definition of the error in steady state depending on the configuration of the system.

The steady-state errors of linear control systems depend on the type of the reference signal and the type of the system. Before taking up the steady-state error, we must clarify what is meant by the system error.

The error can be seen as a signal that should quickly be reduced to zero, if this is possible. Consider the system of Figure 7-5:

Where r(t) is the input signal, u(t) is the actuating signal, b(t) is the feedback signal, and y(t) is the output signal. The error e(t) of the system can be defined as:

e(t) = r(t) − y(t)

We must remember that r(t) and y(t) do not necessarily have the same dimensions. On the other hand, when the system has unity feedback, H(s) = 1, the input r(t) is the reference signal and the error is simply:

e(t) = r(t) − y(t) = u(t)

That is, the error is the actuating signal u(t). When H(s) is not equal to 1, u(t) may or may not be the error, depending on the form and purpose of H(s). Therefore, the reference signal must be defined when H(s) is not equal to 1.

The steady-state error is defined as:

e_ss = lim (t→∞) e(t)

To establish a systematic study of the error in steady state for linear systems, we will classify the control systems as follows:

  1. Unity-feedback systems,
  2. Non-unity-feedback systems.

Steady-State Error in Unity-feedback control systems

Consider the system shown in Figure 5-49:

The closed-loop transfer function is:

C(s)/R(s) = G(s)/(1 + G(s))

The transfer function between the error signal e(t) and the input signal r(t) is:

E(s)/R(s) = 1 − C(s)/R(s) = 1/(1 + G(s))

where the error e(t) is the difference between the input signal and the output signal. The final-value theorem provides a convenient way of finding the steady-state performance of a stable system. Since:

E(s) = R(s)/(1 + G(s))

the steady-state error is:

e_ss = lim (t→∞) e(t) = lim (s→0) s·E(s) = lim (s→0) s·R(s)/(1 + G(s))

This last equation allows us to calculate the steady-state error e_ss given the input R(s) and the open-loop transfer function G(s). We now substitute several test inputs for R(s) and draw conclusions about the relationship between the open-loop system G(s) and the nature of the steady-state error.

  • Step Input: Using R(s) = 1/s, we obtain:

e(∞) = lim (s→0) s·(1/s)/(1 + G(s)) = 1/(1 + lim (s→0) G(s))

where:

lim (s→0) G(s)

is the DC gain of the forward transfer function. In order to have zero steady-state error, we need:

lim (s→0) G(s) = ∞

To satisfy this condition, G(s) must have the following form:

G(s) = (s + z1)(s + z2)··· / (s^n (s + p1)(s + p2)···)

and for the limit to be infinite, the denominator must go to zero as s goes to zero. Thus n ≥ 1; that is, at least one pole must be at the origin, which is to say that at least one pure integration must be present in the forward path. The steady-state response for this case of zero steady-state error is similar to that shown in Figure 7.2(a), output 1.

If there are no integrations, then n = 0, the limit is finite, and a finite error results. This is the case shown in Figure 7.2(a), output 2.

In summary, for a step input to a unity feedback system, the steady-state error will be zero if there is at least one pure integration in the forward path.  
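This can be checked numerically by evaluating e(∞) = 1/(1 + lim s→0 G(s)) near s = 0 for two hypothetical plants, one without and one with a pure integration:

```python
# Steady-state error to a unit step: e_ss = 1 / (1 + lim_{s->0} G(s)).
# Two hypothetical plants: one without and one with a pure integrator.
def G_no_integrator(s):      # type 0: G(0) = 100/(2*5) = 10
    return 100.0 / ((s + 2.0) * (s + 5.0))

def G_one_integrator(s):     # type 1: G(s) -> infinity as s -> 0
    return 100.0 / (s * (s + 2.0) * (s + 5.0))

s = 1e-9                     # approximate the limit s -> 0
e_step_type0 = 1.0 / (1.0 + G_no_integrator(s))   # = 1/11, a finite error
e_step_type1 = 1.0 / (1.0 + G_one_integrator(s))  # ~ 0: the integrator removes it
print(round(e_step_type0, 4), e_step_type1 < 1e-6)
```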

  • Ramp Input: Using R(s) = 1/s^2, we obtain:

e(∞) = lim (s→0) s·(1/s^2)/(1 + G(s)) = 1/ lim (s→0) s·G(s)

To have zero steady-state error for a ramp input, we must have:

lim (s→0) s·G(s) = ∞

To satisfy this, G(s) must take the same form as above with n ≥ 2. In other words, there must be at least two integrations in the forward path. An example of the steady-state error for a ramp input is shown in Figure 7.2(b), output 1.

If only one integrator exists in the forward path, then lim (s→0) s·G(s) is finite rather than infinite, and this leads to a constant error, as shown in Figure 7.2(b), output 2. If there are no integrators in the forward path, then lim (s→0) s·G(s) = 0, the steady-state error is infinite, and the output diverges from the ramp, as shown in Figure 7.2(b), output 3.

  • Parabolic Input: Using R(s) = 1/s^3, we obtain:

e(∞) = lim (s→0) s·(1/s^3)/(1 + G(s)) = 1/ lim (s→0) s^2·G(s)

In order to have zero steady-state error for a parabolic input, we must have:

lim (s→0) s^2·G(s) = ∞

To satisfy this, G(s) must have n ≥ 3. In other words, there must be at least three integrations in the forward path. If there are only two integrators in the forward path, then lim (s→0) s^2·G(s) is finite rather than infinite, and this leads to a constant error. If there are one or zero integrators in the forward path, then e(∞) is infinite.

Classification of Control Systems (System Type) and Static Error Constants.

System Type. Control systems may be classified according to their ability to follow step inputs, ramp inputs, parabolic inputs, and so on. This is a reasonable classification scheme, because most actual inputs can be considered combinations of such inputs. Consider the unity-feedback control system with the following open-loop transfer function G(s):

G(s) = K(Ta·s + 1)(Tb·s + 1)···(Tm·s + 1) / (s^N (T1·s + 1)(T2·s + 1)···(Tp·s + 1))

It involves the term s^N in the denominator, representing a pole of multiplicity N at the origin. A system is called type 0, type 1, type 2, … if N = 0, 1, 2, …, respectively. As the type number increases, accuracy improves; however, it aggravates the stability problem. If G(s) is written so that each term in the numerator and denominator, except the term s^N, approaches unity as s approaches zero, then the open-loop gain K is directly related to the steady-state error.

Static Error Constant. The Static Error Constants defined in the following are figures of merit of control systems. The higher the constants, the smaller the steady-state error.

  • Static Position Error Constant Kp. The steady-state error of the system for a unit-step input is:

e_ss = lim (s→0) s·(1/s)/(1 + G(s)) = 1/(1 + G(0))

The static position error constant Kp is defined by:

Kp = lim (s→0) G(s) = G(0)

Thus the steady-state error in terms of the static position error constant Kp is given by:

e_ss = 1/(1 + Kp)

For a type 0 system:

Kp = K, so e_ss = 1/(1 + K)

For a type 1 or higher system:

Kp = ∞, so e_ss = 0

  • Static Velocity Error Constant Kv. The steady-state error of the system for a unit-ramp input is given by:

e_ss = lim (s→0) s·(1/s^2)/(1 + G(s)) = lim (s→0) 1/(s·G(s))

The static velocity error constant Kv is defined by:

Kv = lim (s→0) s·G(s)

Thus the steady-state error in terms of the static velocity error constant Kv is given by:

e_ss = 1/Kv

For a type 0 system:

Kv = 0, so e_ss = ∞

For a type 1 system:

Kv = K, so e_ss = 1/K

For a type 2 or higher system:

Kv = ∞, so e_ss = 0

  • Static Acceleration Error Constant Ka. The steady-state error of the system for a unit-parabolic input is given by:

e_ss = lim (s→0) s·(1/s^3)/(1 + G(s)) = lim (s→0) 1/(s^2·G(s))

The static acceleration error constant Ka is defined by:

Ka = lim (s→0) s^2·G(s)

Thus the steady-state error in terms of the static acceleration error constant Ka is given by:

e_ss = 1/Ka

For a type 0 system:

Ka = 0, so e_ss = ∞

For a type 1 system:

Ka = 0, so e_ss = ∞

For a type 2 system:

Ka = K, so e_ss = 1/K

For a type 3 or higher system:

Ka = ∞, so e_ss = 0

Table 7.2 ties together the concepts of steady-state error, static error constants, and system type. The table shows the static error constants and the steady-state errors as functions of the input waveform and the system type.
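The constants can be approximated numerically by evaluating G(s) near s = 0. The plant below is a hypothetical type-1 example, G(s) = 20/(s(s + 4)):

```python
# Static error constants from the open-loop transfer function evaluated
# near s = 0, for a hypothetical type-1 plant G(s) = 20/(s*(s+4)).
def G(s):
    return 20.0 / (s * (s + 4.0))

s = 1e-9
Kp = G(s)             # lim G(s)     -> infinite for type >= 1
Kv = s * G(s)         # lim s*G(s)   -> 20/4 = 5 for this type-1 plant
Ka = s**2 * G(s)      # lim s^2*G(s) -> 0 for type 1

e_step  = 1.0 / (1.0 + Kp)   # ~ 0: perfect step tracking
e_ramp  = 1.0 / Kv           # = 0.2: constant ramp-following error
e_parab = 1.0 / Ka           # grows without bound for a parabola
print(e_step < 1e-6, round(e_ramp, 3), e_parab > 1e6)
```

The three results reproduce the type-1 row of Table 7.2: zero step error, finite ramp error 1/Kv, infinite parabolic error.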

Steady-State Error for Non-unity Feedback Systems.

Control systems often do not have unity feedback because of the compensation used to improve performance or because of the physical model of the system. In these cases the most practical way to analyze the steady-state error is to take the system and form a unity feedback system by adding and subtracting unity feedback paths as shown in Figure 7.15:

where G(s) = G1(s)G2(s) and H(s) = H1(s)/G1(s). Notice that these steps require that the input and output signals have the same units.

BEFORE: Control System Stability

NEXT: PID -Basic control system actions

Sources:

  1. Control Systems Engineering, Nise pp 340, 353
  2. Sistemas de Control Automatico Benjamin C Kuo pp 390, 395
  3. Modern_Control_Engineering, Ogata 4t pp 301,305

Written by: Larry Francis Obando – Technical Specialist – Educational Content Writer.


Related:

The Block Diagram – Control Engineering

Dinámica de un Sistema Masa-Resorte-Amortiguador

Block Diagram of Electromechanical Systems – DC Motor

Transient-response Specifications

Control System Stability

Block Diagram, Control System Analysis, Time Domain

Control System Stability

The most important problem in linear control systems concerns stability. Of the three requirements that enter into the design of a control system, namely transient response, stability, and steady-state error, stability is the most important system specification. That is, under what conditions will a system become unstable? If it is unstable, how should we stabilize it?

The total response of a system is the sum of the forced and natural responses:

c(t) = c_forced(t) + c_natural(t)

Using this concept, we present the following definition of stability, instability and marginal stability:

  • A linear time-invariant system is stable if the natural response approaches zero as time approaches infinity.
  • A linear time-invariant system is unstable if the natural response grows without bound as time approaches infinity.
  • A linear time-invariant system is marginally stable if the natural response neither decays nor grows but remains constant or oscillates as time approaches infinity.

Thus the definition of stability implies that only the forced response remains as the natural response approaches zero. An alternate definition of stability regards the total response and encompasses the first definition, which was based on the natural response:

  • A system is stable if every bounded input yields a bounded output. We call this statement a bounded-input, bounded-output (BIBO) definition of stability.

We now realize that if the input is bounded but the output is unbounded, the system is unstable. If the input is unbounded, we will see an unbounded total response, and we cannot draw any conclusion about stability.

Physically, an unstable system whose natural response grows without bound can cause damage to the system, to adjacent property, or to human life. Many times systems are designed with limit stops to prevent total runaway.

Negative feedback tends to improve stability. From the study of system poles, recall that poles in the left half-plane (lhp) yield either pure exponential decay or damped sinusoidal natural responses. These natural responses decay to zero as time approaches infinity. Thus, if the closed-loop system poles are in the left half of the s-plane and hence have negative real parts, the system is stable. That is:

  • Stable systems have closed-loop transfer functions with poles only in the left half-plane.

Poles in the right half-plane (rhp) yield either pure exponentially increasing or exponentially increasing sinusoidal natural responses, which approach infinity as time approaches infinity. Thus:

  • Unstable systems have closed-loop transfer functions with at least one pole in the right half-plane and /or poles of multiplicity greater than 1 on the imaginary axis.

Finally, the system that has imaginary axis poles of multiplicity 1 yields pure sinusoidal oscillations as a natural response. These responses neither increase nor decrease in amplitude. Thus,

  • Marginally stable systems have closed-loop transfer functions with only imaginary axis poles of multiplicity one and poles in the left half-plane.
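These three definitions can be sketched as a pole-location test; the pole sets below are hypothetical examples:

```python
# Classify stability from closed-loop pole locations.
def classify(poles, tol=1e-9):
    if any(p.real > tol for p in poles):
        return "unstable"                  # pole in the right half-plane
    # Poles on the imaginary axis: multiplicity > 1 there is unstable.
    axis = [p for p in poles if abs(p.real) <= tol]
    for p in axis:
        if sum(1 for q in axis if abs(q - p) <= tol) > 1:
            return "unstable"              # repeated jw-axis pole
    return "marginally stable" if axis else "stable"

print(classify([-1.0 + 0j, -2.0 + 1j, -2.0 - 1j]))  # all lhp -> stable
print(classify([0 + 2j, 0 - 2j, -1.0 + 0j]))        # simple jw pair -> marginally stable
print(classify([1.0 + 0j, -3.0 + 0j]))              # rhp pole -> unstable
```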

Figure 6.1(a) shows the unit-step response of a stable system, while Figure 6.1(b) shows that of an unstable system.

[Figure 6.1: (a) step response of a stable system; (b) step response of an unstable system]

[1]

Routh’s Stability Criterion

Routh's stability criterion tells us whether or not there are unstable roots in a polynomial equation without actually solving for them. This stability criterion applies only to polynomials with a finite number of terms. When the criterion is applied to a control system, information about absolute stability can be obtained directly from the coefficients of the characteristic equation.

The method requires two steps: (1) generate a data table called a Routh table, and (2) interpret the table to tell how many closed-loop system poles are in the left half-plane, in the right half-plane, and on the jω-axis. The power of the method lies in design rather than analysis. For example, if there is an unknown parameter in the denominator of a transfer function, it is difficult to determine by calculator the range of this parameter that yields stability. We shall see that the Routh-Hurwitz criterion can yield a closed-form expression for the range of the unknown parameter.

  • Generating a basic Routh table. Consider the equivalent closed-loop transfer function in Figure 6.3. Since we are interested in the system poles, we focus our attention on the denominator. We first create the Routh table shown in Table 6.1:

For a denominator a4·s^4 + a3·s^3 + a2·s^2 + a1·s + a0:

s^4 | a4    a2    a0
s^3 | a3    a1    0
s^2 |
s^1 |
s^0 |

Begin by labeling the rows with powers of s, from the highest power of the denominator of the closed-loop transfer function down to s^0. Next, start with the coefficient of the highest power of s in the denominator and list, horizontally in the first row, every other coefficient.


In the second row, list horizontally, starting with the next highest power of s, every coefficient that was skipped in the first row. The remaining entries are filled in as follows:

s^4 | a4    a2    a0
s^3 | a3    a1    0
s^2 | b1 = (a3·a2 − a4·a1)/a3    b2 = (a3·a0 − a4·0)/a3 = a0    0
s^1 | c1 = (b1·a1 − a3·b2)/b1    0
s^0 | d1 = (c1·b2 − b1·0)/c1 = b2

Each entry is a negative determinant of entries in the previous two rows divided by the entry in the first column directly above the calculated row. The left-hand column of the determinant is always the first column of the previous two rows, and the right-hand column is the elements of the column above and to the right.

Figure 6.4 shows an example of building the Routh table:

Table 6.3 – Completed Routh table for the closed-loop denominator s^3 + 10s^2 + 31s + 1030:

s^3 | 1          31
s^2 | 10 → 1     1030 → 103
s^1 | -72        0
s^0 | 103        0

(The s^2 row has been divided by 10 to simplify the arithmetic.)

The complete array of coefficients is triangular. Note that in developing the array, an entire row may be divided or multiplied by a positive number in order to simplify the subsequent numerical calculation without altering the stability conclusion.

Consider the following characteristic equation (Ogata's Example 5-13):

s^4 + 2s^3 + 3s^2 + 4s + 5 = 0

The first two rows can be obtained directly from the given polynomial. The second row is divided by 2, but we arrive at the same conclusion:

s^4 | 1    3    5
s^3 | 2    4    0  →  1    2    0
s^2 | 1    5
s^1 | -3
s^0 | 5

  • Interpreting the basic Routh Table. Simply stated, the Routh-Hurwitz criterion declares that the number of roots of the polynomial that are in the right-half plane is equal to the number of sign changes in the first column.

If the closed-loop transfer function has all its poles in the left half-plane, the system is stable. Thus, the system is stable if there are no sign changes in the first column of the Routh table.

The last case, Example 5-13, is that of an unstable system: the number of sign changes in the first column is equal to two, which means that there are two roots with positive real parts. Table 6.3 also shows an unstable system. There, the first sign change occurs from 1 in the s^2 row to -72 in the s^1 row; the second occurs from -72 in the s^1 row to 103 in the s^0 row. Thus, the system has two poles in the right half-plane.
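Counting sign changes is mechanical, so it is easy to check by machine. The following minimal helper (my own, assuming the first column has already been computed and rescaled) reproduces the count for the first column of Table 6.3:

```python
def sign_changes(first_column):
    """Number of sign changes in a Routh table's first column, which
    equals the number of right half-plane roots. Zeros are skipped here;
    a zero in the first column needs the epsilon treatment shown later."""
    signs = [1 if x > 0 else -1 for x in first_column if x != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# First column of Table 6.3: 1 (s^3), 1 (s^2), -72 (s^1), 103 (s^0)
print(sign_changes([1, 1, -72, 103]))   # 2 -> two right half-plane poles
```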

Routh’s stability criterion is of limited usefulness in control system analysis because it does not suggest how to improve relative stability or how to stabilize an unstable system. It is possible, however, to determine the effects of changing one or two parameters of the system by examining the values that cause instability. In the following we consider the problem of determining the stability range of a parameter value. Consider the system of Figure 5-38, and let us determine the range of K for stability.

Figure 5-38 – Unity-feedback control system with open-loop transfer function G(s) = K / (s(s^2 + s + 1)(s + 2)).

The characteristic equation is:

s^4 + 3s^3 + 3s^2 + 2s + K = 0

And the Routh Table:

s^4 | 1             3    K
s^3 | 3             2    0
s^2 | 7/3           K
s^1 | 2 - (9/7)K
s^0 | K

For stability, K must be positive and all coefficients in the first column must be positive. Therefore:

0 < K < 14/9

When K=14/9 the system becomes oscillatory and, mathematically, the oscillation is sustained at constant amplitude.
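The boundary can be verified directly. Assuming the characteristic equation s^4 + 3s^3 + 3s^2 + 2s + K = 0 from this example, the s^1 entry of the Routh table works out to 2 - (9/7)K, which vanishes exactly at K = 14/9. A small sketch with exact fractions (the function name is mine):

```python
from fractions import Fraction

def s1_entry(K):
    """s^1-row entry of the Routh table of s^4 + 3s^3 + 3s^2 + 2s + K."""
    K = Fraction(K)
    b1 = Fraction(3 * 3 - 1 * 2, 3)   # s^2 entry: (3*3 - 1*2)/3 = 7/3
    return 2 - 3 * K / b1             # s^1 entry: (b1*2 - 3*K)/b1

print(s1_entry(Fraction(14, 9)))      # 0 -> boundary: sustained oscillation
print(s1_entry(1) > 0)                # True -> K = 1 lies in the stable range
```

Any K above 14/9 makes this entry negative, producing two sign changes in the first column and hence two right half-plane poles.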

Routh-Hurwitz Criterion Special Cases

Two special cases can occur: (1) the Routh table sometimes will have a zero only in the first column of a row, and (2) the Routh table sometimes will have an entire row that consists of zeros.

  • Zero only in the first column. If the first element of a row is zero, division by zero will be required to form the next row. To avoid this, an epsilon ε is assigned to replace the zero in the first column. The value is then allowed to approach zero from either the positive or the negative side, after which the signs of the entries in the first column can be determined. To see how this works, let us look at the following example: determine the stability of the closed-loop transfer function T(s):

T(s) = 10 / (s^5 + 2s^4 + 3s^3 + 6s^2 + 5s + 3)

The solution is shown in Table 6.4:

s^5 | 1                     3      5
s^4 | 2                     6      3
s^3 | 0 → ε                 7/2    0
s^2 | (6ε - 7)/ε            3      0
s^1 | 7/2 - 3ε²/(6ε - 7)    0      0
s^0 | 3

We begin by assembling the Routh table down to the row where a zero appears in the first column (the s^3 row). Next, we replace the zero by a small number ε and complete the table. To begin the interpretation we must first assume a sign, positive or negative, for the quantity ε. Table 6.5 shows the first column of Table 6.4 along with the resulting signs for ε positive and ε negative.

Label | First column          | ε > 0 | ε < 0
s^5   | 1                     |   +   |   +
s^4   | 2                     |   +   |   +
s^3   | ε                     |   +   |   -
s^2   | (6ε - 7)/ε            |   -   |   +
s^1   | 7/2 - 3ε²/(6ε - 7)    |   +   |   +
s^0   | 3                     |   +   |   +

If ε is chosen positive, Table 6.5 shows a sign change from the s^3 row to the s^2 row, and another sign change from the s^2 row to the s^1 row. Hence the system is unstable and has two poles in the right half-plane. Alternatively, we could choose ε negative. Table 6.5 then shows a sign change from the s^4 row to the s^3 row, and another sign change from the s^3 row to the s^2 row. The result is exactly the same as for a positive choice of ε. Thus, the system is unstable.
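The ε argument can also be checked numerically: substitute a small positive and a small negative value for ε in the first column and count sign changes either way. The expressions for the s^2 and s^1 entries below are my reconstruction, following the standard Routh computation for the denominator s^5 + 2s^4 + 3s^3 + 6s^2 + 5s + 3 assumed above:

```python
def first_column(eps):
    """First column of the Routh table with the zero replaced by eps."""
    c1 = (6 * eps - 7) / eps                 # s^2 entry
    d1 = 3.5 - 3 * eps**2 / (6 * eps - 7)    # s^1 entry
    return [1, 2, eps, c1, d1, 3]

def sign_changes(col):
    signs = [1 if x > 0 else -1 for x in col]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

print(sign_changes(first_column(1e-6)))      # 2 with ε > 0
print(sign_changes(first_column(-1e-6)))     # 2 with ε < 0 as well
```

Both limits agree: two right half-plane poles, independently of the sign assumed for ε.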

  • Entire row is zero. We now look at the second special case. Sometimes, while making a Routh table, we find that an entire row consists of zeros because an even polynomial is a factor of the original polynomial. This case must be handled differently from the previous one. The next example shows how to construct and interpret the Routh table when an entire row of zeros is present.

Determine the number of right half-plane poles in the closed-loop transfer function T(s):

T(s) = 10 / (s^5 + 7s^4 + 6s^3 + 42s^2 + 8s + 56)

Start by forming the Routh table for the denominator. We get Table 6.7:

s^5 | 1              6               8
s^4 | 7 → 1          42 → 6          56 → 8
s^3 | 0 → 4 → 1      0 → 12 → 3      0
s^2 | 3              8
s^1 | 1/3
s^0 | 8

The second row is multiplied by 1/7 for convenience. We stop at the third row, since the entire row consists of zeros, and use the following procedure. First we return to the row immediately above the row of zeros and form an auxiliary polynomial using the entries in that row as coefficients. The polynomial starts with the power of s in the label column and continues by skipping every other power of s. Thus, the polynomial formed for this example is:

P(s) = s^4 + 6s^2 + 8

Next we differentiate the polynomial with respect to s and obtain:

dP(s)/ds = 4s^3 + 12s

Finally we use the coefficients of this last equation to replace the row of zeros. Again, for convenience, the third row is multiplied by 1/4 after replacing the zeros. The remainder of the table is formed in a straightforward manner by following the standard form shown in Table 6.2.
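The replacement step can be sketched generically. This helper (my own, for illustration) takes the row above the zero row and the power of s labeling it, builds dP/ds term by term, and returns the coefficients that replace the zeros:

```python
def replace_zero_row(row_above, power):
    """Differentiate the auxiliary polynomial formed from row_above.
    row_above holds the coefficients of s^power, s^(power-2), ...;
    the returned list replaces the row of zeros."""
    new_row = []
    p = power
    for a in row_above:
        if p >= 1:
            new_row.append(a * p)   # d/ds of a*s^p is a*p*s^(p-1)
        p -= 2                      # auxiliary polynomial skips every other power
    return new_row

# Row above the zeros: 1, 6, 8 at the s^4 label -> P(s) = s^4 + 6s^2 + 8
print(replace_zero_row([1, 6, 8], 4))   # [4, 12], scaled by 1/4 to 1, 3
```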

The completed Table 6.7 shows that all entries in the first column are positive. Hence, there are no right half-plane poles. Note, however, that the roots of the even polynomial lie on the jω-axis, so the system is marginally stable rather than asymptotically stable.
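This is easy to confirm by hand. Assuming the auxiliary polynomial P(s) = s^4 + 6s^2 + 8 formed above, the substitution u = s^2 gives u^2 + 6u + 8 = (u + 2)(u + 4), so s^2 = -2 or s^2 = -4 and every root of the factor is purely imaginary:

```python
import cmath

# Roots of P(s) = s^4 + 6s^2 + 8 via u = s^2: u = -2 or u = -4
roots = []
for u in (-2, -4):
    r = cmath.sqrt(u)       # principal square root, purely imaginary here
    roots += [r, -r]

# Every root has zero real part: the factor contributes jw-axis poles,
# not right half-plane poles.
print(all(abs(r.real) < 1e-12 for r in roots))   # True
```

The four roots are ±j√2 and ±j2, which is why no sign changes appear even though the original row of zeros signals poles on the jω-axis.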

BEFORE: Transient-response specifications

NEXT: Steady-state error in Control Systems

Source:

  1. Control Systems Engineering, Norman S. Nise, pp. 301-320.
  2. Sistemas de Control Automático, Benjamin C. Kuo.
  3. Modern Control Engineering, Katsuhiko Ogata, 4th ed., p. 288.

Written by: Larry Francis Obando – Technical Specialist – Educational Content Writer.

Escuela de Ingeniería Eléctrica de la Universidad Central de Venezuela, Caracas.

Escuela de Ingeniería Electrónica de la Universidad Simón Bolívar, Valle de Sartenejas.

Escuela de Turismo de la Universidad Simón Bolívar, Núcleo Litoral.

Contact: Caracas, Quito, Guayaquil, Jaén, Villafranca de Ordizia- Telf. +34633129287

WhatsApp: +593984950376

email: dademuchconnection@gmail.com


Related:

The Block Diagram – Control Engineering

Dinámica de un Sistema Masa-Resorte-Amortiguador

Block Diagram of Electromechanical Systems – DC Motor

Transient-response Specifications

Steady-state error in Control Systems