CN115542904B - Grouping type collaborative fire-fighting robot fire scene internal grouping queue driving control method - Google Patents
- Publication number
- CN115542904B (application number CN202211180099.9A)
- Authority
- CN
- China
- Prior art keywords
- fire
- distance
- angle
- robot
- error
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G05D1/0238: control of position or course in two dimensions for land vehicles, using optical position detecting means with obstacle or wall sensors
- G05D1/024: obstacle or wall sensors in combination with a laser
- G05D1/0214: desired trajectory defined in accordance with safety or protection criteria, e.g. avoiding hazardous areas
- G05D1/0221: desired trajectory involving a learning process
- G05D1/0223: desired trajectory involving speed control of the vehicle
- G05D1/0251: video camera with image processing means, extracting 3D information from a plurality of images (stereo vision)
- G05D1/0257: using a radar
- G05D1/0276: using signals provided by a source external to the vehicle
- G05D1/0278: using satellite positioning signals, e.g. GPS
- G05D1/0289: involving a plurality of land vehicles (fleet or convoy travelling), with means for avoiding collisions between vehicles
- G05D1/0291: fleet control
- G05D1/0293: convoy travelling
- Y02P90/02: total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Abstract
The invention discloses a marshalling-queue driving control method for grouped collaborative fire-fighting robots inside a fire scene, belonging to the technical field of fire fighting. The method comprises the following steps: acquiring the relative distance and relative angle between a fire-fighting robot and the nearest fire-fighting robot in front of it; calculating a distance error and an angle error from the relative distance and relative angle; establishing a distance performance function and an angle performance function from constraint conditions, the distance error, and the angle error; adjusting the difference between the distance error and the distance performance function through a distance control gain to obtain a speed control signal; adjusting the difference between the angle error and the angle performance function through an angle control gain to obtain a steering-angle control signal; and controlling the robot's travel based on the speed control signal and the steering-angle control signal. The invention provides a distributed control scheme for the chain-structured formation-following task: the formation is maintained at all times without reshaping, the robots reach the fire scene quickly, and travel time and energy consumption are saved.
Description
Technical Field
The invention relates to the technical field of fire-fighting robots, and in particular to a marshalling-queue driving control method for grouped collaborative fire-fighting robots inside a fire scene.
Background
The development of fire-fighting robots has passed through roughly three stages, each producing a different class of robot. The first stage relied mainly on remote tele-operation control systems; the robots of this stage are called program-controlled fire-fighting robots, the world's first generation. The second stage developed capability mainly through sensors; the robots of this stage are called functional fire-fighting robots, the second generation. In the third stage, research has turned toward intelligence and collaboration: a robot is no longer limited to a single function but integrates more comprehensive intelligent capabilities. The robots of this stage are called intelligent, collaborative fire-fighting robots, the third generation.
Future fire-fighting robots will develop from single-function machines, such as dedicated fire-detection robots, toward multi-function combinations. Control will evolve from wired and wireless remote programmed control toward intelligent, grouped, collaborative operation. Interaction will begin with simple sensor interaction and gradually integrate voice, action, gesture, and semantic interaction, opening the way to cooperative combat among robots and to unmanned rescue sites.
At present, traditional unmanned-system technology suffers from low autonomy and poor coordination; it cannot effectively resolve the contradictions among time, space, and task level within a cooperative mission, and it struggles to carry out fire-fighting tasks in complex environments. For fire-fighting demands under strong interference and high dynamics, multi-agent cooperative operation, achieving capability complementation and action coordination, has become the main direction for expanding unmanned-system task capacity and improving overall fire-fighting efficiency.
In the related art, Chinese patent document CN110101996A describes a method for collaborative positioning and autonomous operation of fire-fighting robots in complex environments, and CN110201333A describes a method for fully automatic collaborative reconnaissance and fire-extinguishing operation. These schemes adopt centralized control: all computation is performed in a main control console, and a robot executes an operation only after the console issues an instruction. This burdens the console's computational load and real-time responsiveness, and the resulting dispersed formation is hard to control, so the robots need more time and energy to travel to the fire scene, which is unfavorable for suppressing the fire in time.
Also in the related art, Chinese patent document CN112286179A describes a cooperative motion control method, system, computer device, and robot. Robot control there adopts a real-time distributed scheme whose control algorithm combines model predictive control with mathematical programming, studying formation-keeping-priority versus speed-priority strategies for a robot group in a static environment and, in a dynamic environment, maintaining the original formation as far as possible while ensuring that the robots successfully avoid obstacles.
Because a fire scene is highly dynamic and unstructured, dense smoke and obstacles can block communication and the field of view between robots; in particular, during hose dragging, cross-winding between hoses, and between hoses and obstacles, must be prevented. Moreover, a water-filled hose is a heavy load that strongly affects the robot's motion control and stability. Traditional formation cooperative-control algorithms do not consider these factors, so a cooperative control algorithm tailored to these specific conditions is needed.
Disclosure of Invention
The technical problem the invention aims to solve is how to control fire-fighting robots to reach a fire scene quickly while saving travel time and energy consumption.
The invention solves the technical problems by the following technical means:
the invention provides a marshalling-queue driving control method for grouped collaborative fire-fighting robots inside a fire scene. The fire-fighting robots comprise an inspection robot and fire-extinguishing robots; the queue they form has a chain structure, with the inspection robot at the head and the fire-extinguishing robots following it. Each robot carries a laser radar (lidar) and a camera. For each fire-fighting robot, the method comprises the following steps:
acquiring the relative distance and the relative angle between the fire-fighting robot and the nearest fire-fighting robot in front of the fire-fighting robot;
calculating a distance error and an angle error based on the relative distance and the relative angle;
establishing a distance performance function and an angle performance function based on constraint conditions, the distance error and the angle error;
adjusting the difference between the distance error and the distance performance function through a distance control gain to obtain a speed control signal;
adjusting the difference between the angle error and the angle performance function through an angle control gain to obtain a steering angle control signal;
and controlling the robot to run on the basis of the speed control signal and the steering angle control signal.
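The steps above can be sketched as a single control-loop iteration. The sketch below is illustrative only: the logarithmic barrier transform and the gains k_d and k_beta are assumptions standing in for the patent's exact control laws, whose formulas are not reproduced in this text.

```python
import math

def follow_control(d, beta, d_des, rho_d, rho_beta, k_d=1.0, k_beta=1.0):
    """One iteration of a distributed following controller (illustrative).

    d, beta    : measured relative distance/angle to the robot ahead
    d_des      : preset desired spacing
    rho_d/beta : current values of the distance/angle performance bounds
    """
    e_d = d - d_des           # distance error
    e_beta = beta             # angle error (desired bearing is zero)
    # Normalize each error by its performance bound; the log-barrier map
    # grows without bound as the error approaches the bound, which is what
    # keeps the error strictly inside (-rho, rho).
    z_d = e_d / rho_d
    z_b = e_beta / rho_beta
    u = k_d * math.log((1 + z_d) / (1 - z_d))          # speed control signal
    gamma = k_beta * math.log((1 + z_b) / (1 - z_b))   # steering-angle signal
    return u, gamma
```

When the follower sits exactly at the preset spacing and bearing, both commands vanish; a too-large spacing produces a positive speed command.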
According to the invention, a distributed control mode is set up for the chain-structured formation-following task. Only local robot information is used for interaction: in each vehicle's local coordinate frame, the distance and angle to the robot in front and the distances to obstacles are obtained. By adjusting the distance control gain according to the difference between the robot's distance error and the distance performance function, the distance error is guaranteed to stay within the upper and lower limits of the distance performance function; likewise, by adjusting the angle control gain according to the difference between the angle error and the angle performance function, the angle error is guaranteed to stay within the limits of the angle performance function. The resulting speed and steering angle drive the vehicle's motion and steering so that each vehicle tracks the one in front of it: the formation is maintained at all times without reshaping, the fire scene is reached quickly, and travel time and energy consumption are saved.
Further, the kinematic model of the fire-fighting robot is:

x_i' = u_i cos θ_i,  y_i' = u_i sin θ_i,  θ_i' = (u_i / α) tan γ_i,  for i = 1, ..., N,

where x_i, y_i, θ_i represent the position and heading of the i-th robot; u_i, γ_i, α represent its linear speed, steering angle, and body length, respectively; and x_i', y_i', θ_i' denote the first derivatives of x_i, y_i, θ_i.
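The variables listed correspond to the standard car-like (bicycle) kinematic model; under that assumption, a minimal forward-Euler integration step looks as follows (the time step dt is a parameter of the sketch, not part of the patent).

```python
import math

def step_kinematics(x, y, theta, u, gamma, alpha, dt):
    """Forward-Euler step of the car-like model:
    x' = u cos(theta), y' = u sin(theta), theta' = (u / alpha) tan(gamma)."""
    x_new = x + u * math.cos(theta) * dt
    y_new = y + u * math.sin(theta) * dt
    theta_new = theta + (u / alpha) * math.tan(gamma) * dt
    return x_new, y_new, theta_new
```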
Further, the formulas for calculating the distance error and the angle error based on the relative distance and the relative angle are expressed as:

e_i^d(t) = d_i(t) - d_i,des,  e_i^β(t) = β_i(t),

where e_i^d(t) denotes the distance error; e_i^β(t) denotes the angle error; d_i(t) and β_i(t) denote the relative distance and relative angle between two consecutive fire-fighting robots, respectively; and d_i,des is the preset distance between the i-th fire-fighting robot and the (i-1)-th fire-fighting robot in front of it.
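In simulation, the relative quantities and the two errors can be computed from robot poses as below. This is a sketch: on the real robot, d_i(t) and β_i(t) come from the camera and lidar, and the zero desired bearing is an assumption consistent with straight-chain following.

```python
import math

def formation_errors(p_follow, theta_follow, p_lead, d_des):
    """Distance error e_d = d_i(t) - d_i,des and angle error e_beta = beta_i(t),
    with beta measured in the follower's body frame."""
    dx = p_lead[0] - p_follow[0]
    dy = p_lead[1] - p_follow[1]
    d = math.hypot(dx, dy)                       # relative distance d_i(t)
    beta = math.atan2(dy, dx) - theta_follow     # relative angle beta_i(t)
    beta = math.atan2(math.sin(beta), math.cos(beta))  # wrap to (-pi, pi]
    return d - d_des, beta
```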
Further, establishing the distance performance function and the angle performance function based on the constraint conditions, the distance error, and the angle error comprises:

differentiating the distance error and the angle error to obtain a distance-error dynamic equation and an angle-error dynamic equation, respectively;

setting constraint conditions, comprising safety constraints and initial constraints: the safety constraints are d_col < d_i(t) < d_con and |β_i(t)| < β_con, and the initial constraints are d_col < d_i(0) < d_con and |β_i(0)| < β_con, where d_con and β_con denote the distance and angle limits beyond which the connection breaks, d_col denotes the minimum safety distance between two consecutive robots, and d_i,des denotes the preset distance between the i-th fire-fighting robot and the (i-1)-th fire-fighting robot in front;

and establishing the distance performance function and the angle performance function based on the distance-error dynamic equation, the angle-error dynamic equation, and the constraint conditions.
Further, the distance performance function and the angle performance function take the form of time-varying bounds ρ_i^d,-(t) < e_i^d(t) < ρ_i^d,+(t) and ρ_i^β,-(t) < e_i^β(t) < ρ_i^β,+(t), in which:

- c_u, l_d, l_β are predefined positive constants: the parameter c_u accounts for the reduction of the performance function caused by the velocity term, while l_d and l_β carry the required transient and steady-state performance specifications;
- sw_1, sw_2, sw_{1,2}, sw_u are switching functions;
- d_L and d_R denote the minimum distances of the left and right obstacles, respectively, from the line connecting leader and follower;
- u_i denotes the linear velocity;
- ρ_i^d,-(t) and ρ_i^d,+(t) denote the lower and upper limits of the distance-error performance function, and ρ_i^β,-(t) and ρ_i^β,+(t) the lower and upper limits of the angle-error performance function; the corresponding auxiliary quantities are continuous, differentiable functions of these limits.
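The patent states that the performance functions decay exponentially in time; the standard prescribed-performance bound of that kind is sketched below. The specific closed form, and the omission of the obstacle/switching terms, are assumptions of this sketch.

```python
import math

def perf_bound(t, rho0, rho_inf, decay):
    """rho(t) = (rho0 - rho_inf) * exp(-decay * t) + rho_inf:
    starts at rho0 and decays exponentially to the steady-state width rho_inf."""
    return (rho0 - rho_inf) * math.exp(-decay * t) + rho_inf
```

Here rho0 bounds the transient, rho_inf bounds the steady-state error, and decay sets the convergence rate, i.e. the transient and steady-state performance specifications carried by l_d and l_β.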
Further, the switching function is:
sw u =sw(u i ,0,δ u ),
sw 1 =sw(λ 1 +δ λ ,0,δ λ )-sw(λ 1 ,1,δ λ ),
sw 2 =sw(λ 2 +δ λ ,0,δ λ )-sw(λ 2 ,1,δ λ ),
where δ_u, δ_λ, and δ_{1,2} are predefined positive constants, and λ_1 and λ_2 denote the line-parameter values of the points on the leader-follower line closest to the right and left obstacles, respectively.
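The exact formula of sw(., ., δ) is not reproduced in this text, but its role, a smooth 0-to-1 switch of width δ, can be illustrated with a hypothetical smoothstep stand-in:

```python
def smooth_switch(x, x0, delta):
    """C^1 ramp: 0 for x <= x0, 1 for x >= x0 + delta, cubic in between.
    A stand-in with only the qualitative shape of the patent's sw(., ., delta)."""
    s = min(max((x - x0) / delta, 0.0), 1.0)
    return s * s * (3.0 - 2.0 * s)
```

Compositions like sw(λ_1 + δ_λ, 0, δ_λ) - sw(λ_1, 1, δ_λ) then act as smooth window functions that activate a term only while an obstacle lies alongside the leader-follower line.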
Further, the step of adjusting the difference between the distance error and the distance performance function by the distance control gain to obtain a speed control signal includes:
controlling the distance error to be within the boundary range of the distance performance function according to a speed distribution control protocol, wherein the formula is as follows:
where u_i^c denotes the speed control signal; k_d denotes a positive control gain; c_u is the parameter defined above; and e_i^d(t) denotes the distance error.
Further, the adjusting the difference between the angle error and the angle performance function through the angle control gain to obtain a steering angle control signal includes:
according to a distribution control protocol of angles, controlling the angle errors to be in a boundary range of the angle performance function, wherein the formula is as follows:
where γ_i^c denotes the steering-angle control signal; k_β denotes a positive control gain; e_i^β(t) denotes the angle error; and the remaining quantities are the angle performance bounds and their first derivatives.
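Putting the speed and steering-angle signals together, a two-robot chain can be simulated end to end with the car-like model. The proportional laws and all gains below are illustrative substitutes for the patent's performance-function-based laws; the point of the sketch is that the follower settles at the preset spacing d_des.

```python
import math

def simulate_chain(steps=400, dt=0.05, d_des=2.0, alpha=0.5):
    """Leader drives straight at 1 m/s; follower tracks it. Returns the
    final distance error."""
    lx, ly = 4.0, 0.0          # leader starts 4 m ahead
    x = y = theta = 0.0        # follower pose
    d_err = 0.0
    for _ in range(steps):
        lx += 1.0 * dt                           # leader moves
        dx, dy = lx - x, ly - y
        d = math.hypot(dx, dy)
        beta = math.atan2(dy, dx) - theta
        d_err = d - d_des
        u = 1.0 + 1.5 * d_err                    # speed law (illustrative)
        gamma = max(-0.6, min(0.6, 1.5 * beta))  # steering law, saturated
        x += u * math.cos(theta) * dt            # car-like model update
        y += u * math.sin(theta) * dt
        theta += (u / alpha) * math.tan(gamma) * dt
    return d_err
```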
Further, when the inspection robot detects a fire source, the method further comprises:
controlling the inspection robot to cruise around the fire source on a circular track and, using its onboard wind speed-and-direction transducer, to scout the wind so as to determine the spreading direction and speed of the fire;

constructing a three-dimensional map of the area around the fire source and along the fire-extinguishing robots' travel route using the inspection robot's onboard lidar, assigning three-dimensional coordinates to the map, and determining the coordinate position of the fire source on it;

and controlling the fire-extinguishing robots to plan paths according to the three-dimensional map and the fire source's coordinate position on it, move along the planned paths, and extinguish the fire source.
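The patent does not specify the planning algorithm. As an illustration of path planning over a coordinate-assigned map, the sketch below runs A* on a 2-D occupancy grid, a simplification of the 3-D lidar map (0 = free cell, 1 = obstacle).

```python
from heapq import heappush, heappop

def plan_path(grid, start, goal):
    """A* over a 4-connected occupancy grid; returns a list of cells from
    start to goal, or None if no path exists. Manhattan-distance heuristic."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), start)]
    came, g = {}, {start: 0}
    while open_set:
        _, cur = heappop(open_set)
        if cur == goal:                      # reconstruct the path backwards
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g[cur] + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt], came[nxt] = ng, cur
                    heappush(open_set, (ng + h(nxt), nxt))
    return None
```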
In addition, the invention provides a marshalling-queue driving control system for grouped collaborative fire-fighting robots inside a fire scene. The system comprises an inspection robot and at least one fire-extinguishing robot, which together form a chain-structured queue with the inspection robot at its head. Both carry a lidar, a camera, and a distributed controller, the latter comprising an error calculation module, a performance-function establishing module, a speed control module, an angle control module, and a travel control module;
the lidar and the camera are used to acquire, respectively, the relative distance and the relative angle to the nearest fire-fighting robot ahead;
The error calculation module is used for calculating a distance error and an angle error based on the relative distance and the relative angle;
the performance function establishing module is used for establishing a distance performance function and an angle performance function based on constraint conditions, the distance error and the angle error;
the speed control module is used for adjusting the difference between the distance error and the distance performance function through the distance control gain to obtain a speed control signal;
the angle control module is used for adjusting the difference between the angle error and the angle performance function through angle control gain to obtain a steering angle control signal;
and the running control module is used for controlling the robot to run on the basis of the speed control signal and the steering angle control signal.
The invention has the advantages that:
(1) According to the invention, a distributed control mode is set up for the chain-structured formation-following task. Only local robot information is used for interaction: in each vehicle's local coordinate frame, the distance and angle to the robot in front and the distances to obstacles are obtained. By adjusting the distance control gain according to the difference between the distance error and the distance performance function, the distance error is guaranteed to stay within the upper and lower limits of the distance performance function; likewise, by adjusting the angle control gain according to the difference between the angle error and the angle performance function, the angle error is guaranteed to stay within the limits of the angle performance function. The resulting speed and steering angle drive the vehicle's motion and steering so that each vehicle tracks the one in front of it: the formation is maintained at all times without reshaping, the fire scene is reached quickly, and travel time and energy consumption are saved.
(2) By designing distributed control, only a small amount of information is exchanged, giving a smaller computational load than centralized control. The distributed controller is designed for fire-fighting robot formations subject to communication constraints, safety zones, and sight-range limits, so that the distance and angle errors meet preset transient performance (adjustment time) and steady-state performance (convergence speed, steady-state error). Introducing the prescribed-performance control technique gives the errors a faster convergence rate and keeps them within a given range at all times; communication maintenance, collision avoidance, and azimuth limitation are thereby solved simultaneously.
(3) The designed performance function decays exponentially in time, ensuring that the error always remains within its boundaries. In this way, the transient and steady-state performance of the distance and angle errors can be preset.
(4) Different switching functions are selected for different scenes in a targeted manner, so that the various driving scenarios encountered in practice can be handled.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
FIG. 1 is a flow chart of a method for controlling travel of a group queue in a scene of a fire of a group-type collaborative fire robot according to an embodiment of the invention;
FIG. 2 is a schematic diagram of a chain structure of a queue formed by fire robots in the invention;
FIG. 3 is a schematic view of a group fire rescue of the present invention;
FIG. 4 is a schematic structural diagram of a marshalling queue driving control system in a fire scene of a group-type collaborative fire robot according to another embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions in the embodiments of the present invention will be clearly and completely described in the following in conjunction with the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention provides a method for controlling the travel of a grouped queue inside a fire scene for group-type collaborative fire-fighting robots. The fire-fighting robots comprise an inspection robot and fire-extinguishing robots; the queue formed by the fire-fighting robots has a chain structure, with the inspection robot at the head of the queue and the fire-extinguishing robots following it. Each fire-fighting robot carries a laser radar and a camera. For each fire-fighting robot, the method comprises the following steps:
S10, acquiring the relative distance and the relative angle between the fire-fighting robot and the nearest fire-fighting robot in front of the fire-fighting robot;
the fire-fighting robot i detects the relative distance and relative angle of the robot i-1 in front of the fire-fighting robot i by using a vehicle-mounted camera, and simultaneously scans and measures the position of an obstacle region by using a laser radar.
S20, calculating a distance error and an angle error based on the relative distance and the relative angle;
the measured relative distance and relative angle are compared with preset expected values to obtain the distance error and the angle error.
S30, establishing a distance performance function and an angle performance function based on constraint conditions, the distance error and the angle error;
S40, adjusting the difference between the distance error and the distance performance function through a distance control gain to obtain a speed control signal;
S50, adjusting the difference between the angle error and the angle performance function through an angle control gain to obtain a steering angle control signal;
and S60, controlling the robot to run based on the speed control signal and the steering angle control signal.
It should be noted that the inspection robot carries a differential GPS, a gyroscope, a laser radar and a camera gimbal, and can independently complete global-map navigation to track a feasible, obstacle-free trajectory. The fire-extinguishing robots carry only a laser radar and a camera gimbal and can hardly perform global-map navigation on their own; given that the inspection robot has this capability, each fire-extinguishing robot uses its limited radar and on-board gimbal resources to follow, thereby achieving movement inside the fire scene.
For the chain-structure formation-following task, this embodiment adjusts the distance control gain according to the difference between a robot's distance error and the distance performance function, ensuring that the distance error stays within the upper and lower bounds of the distance performance function; likewise, the angle control gain is adjusted according to the difference between the robot's angle error and the angle performance function, ensuring that the angle error stays within the upper and lower bounds of the angle performance function. The designed distributed control uses only a small amount of information for interaction, has a smaller computational load than centralized control, and maintains the formation at all times without reconfiguration, so the fire scene can be reached quickly while travel time and energy consumption are saved.
In one embodiment, as shown in fig. 2, consider an arrangement of N fire robots, i.e., forming a chain structure, with the rear robot moving forward with the front robot. Wherein the kinematic model of the fire-fighting robot is as follows:
wherein, for i = 1, …, N:
ẋ_i = u_i cos θ_i, ẏ_i = u_i sin θ_i, θ̇_i = (u_i/α) tan γ_i
where x_i, y_i, θ_i represent the position and heading of the i-th robot; u_i, γ_i and α represent its linear speed, steering angle and body length, respectively; ẋ_i, ẏ_i, θ̇_i are the first derivatives of x_i, y_i, θ_i.
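As an illustration, the car-like kinematic model described above can be advanced numerically; the following Python sketch uses simple Euler integration (the step size and parameter values are illustrative assumptions, not values from the patent):

```python
import math

def step(state, u, gamma, alpha=3.0, dt=0.01):
    """One Euler step of the car-like kinematic model:
    x' = u*cos(theta), y' = u*sin(theta), theta' = (u/alpha)*tan(gamma)."""
    x, y, theta = state
    x += u * math.cos(theta) * dt
    y += u * math.sin(theta) * dt
    theta += (u / alpha) * math.tan(gamma) * dt
    return (x, y, theta)

# Driving straight (gamma = 0) for 1 s at 1 m/s moves the robot 1 m along x.
s = (0.0, 0.0, 0.0)
for _ in range(100):
    s = step(s, u=1.0, gamma=0.0)
```

With a nonzero steering angle γ the heading changes at rate (u/α)·tan γ, which is why a stopped robot (u = 0) cannot turn, as exploited in case (1) below.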
In an embodiment, in the step S20, the calculation formulas of the distance error and the angle error are as follows:
wherein:
e_i^d(t) = d_i(t) − d_{i,des}, e_i^β(t) = β_i(t)
where e_i^d represents the distance error; e_i^β represents the angle error; d_i(t) and β_i(t) represent the relative distance and relative angle between two consecutive fire-fighting robots, respectively; d_{i,des} is the preset distance between the i-th fire-fighting robot and the (i−1)-th fire-fighting robot in front of it.
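Step S20 then reduces to two subtractions against the preset expectations (the desired relative angle is zero, since each follower should point at its predecessor); a minimal sketch:

```python
def tracking_errors(d_i, beta_i, d_des):
    """Distance and angle errors of follower i with respect to the
    robot in front of it; the desired relative angle is zero."""
    e_d = d_i - d_des   # distance error
    e_beta = beta_i     # angle error (desired angle is 0)
    return e_d, e_beta

# Example: measured 5.2 m and 0.1 rad against a desired spacing of 5 m.
e_d, e_beta = tracking_errors(5.2, 0.1, 5.0)
```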
In one embodiment, the step S30: based on the constraint condition, the distance error and the angle error, a distance performance function and an angle performance function are established, and the method comprises the following steps:
s31, respectively performing differential processing on the distance error and the angle error to respectively obtain a distance error dynamic equation and an angle error dynamic equation;
it should be noted that, differentiating the time of the distance error and the angle error, and substituting the robot kinematic model to obtain the distance error dynamic equation and the angle error dynamic equation as follows:
wherein, for i = 1, …, N, Φ_i = θ_i − θ_{i−1}.
S32, setting constraint conditions, which include a safety constraint and an initial constraint. The safety constraint is d_col < d_i(t) < d_con and |β_i(t)| < β_con; the initial constraint requires that d_i(0) and β_i(0) satisfy the same bounds at the initial time. Here d_con and β_con represent the distance and angle limits beyond which the connection breaks, d_col represents the minimum safety distance between two consecutive robots, and d_{i,des} is the preset distance between the i-th fire-fighting robot and the (i−1)-th fire-fighting robot in front of it.
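The safety constraint can be checked directly from the two measured quantities; a small sketch (the numeric limit values here are illustrative, not values from the patent):

```python
def satisfies_safety(d, beta, d_col=1.0, d_con=8.0, beta_con=0.9):
    """Safety constraint of step S32: d_col < d(t) < d_con and
    |beta(t)| < beta_con. Limit values are illustrative."""
    return d_col < d < d_con and abs(beta) < beta_con

ok = satisfies_safety(5.0, 0.2)          # inside all limits
too_close = satisfies_safety(0.5, 0.2)   # violates the minimum safety distance
```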
It should be noted that, from the safety constraint, it follows that the values of the performance functions must always remain bounded; from the initial constraint, the initial errors lie strictly inside the initial values of the corresponding performance bounds.
S32, respectively establishing the distance performance function and the angle performance function based on the distance error dynamic equation, the angle error dynamic equation and the constraint condition.
It should be noted that the purpose of imposing a performance function on an error is to constrain the range of the error so that the error gradually converges within the region enforced by the performance function. Furthermore, the performance function must not exceed certain values, otherwise the visual connection is broken or a collision between robots occurs.
It should be noted that, as shown in fig. 2, the only feedback signals assumed available to each fire-fighting robot are d_i(t) and β_i(t): the rear robot detects the relative distance and relative angle of the robot in front of it with its on-board gimbal camera, and detects and extracts the positions of obstacles relative to itself with its laser scanner. The control objective is to design a fully distributed control protocol so that each robot tracks its predecessor, i.e., d_i(t) → d_{i,des} and β_i(t) → 0, where d_{i,des} is the preset distance between the i-th robot and the (i−1)-th robot in front of it. Furthermore, collisions and field-of-view connection interruptions between robots must be avoided. Thus, denoting by d_con and β_con the distance and angle limits at which disconnection occurs, and by d_col the minimum safety distance between two consecutive robots, another control objective is to keep d_i(t) and β_i(t) within d_col < d_i(t) < d_con and |β_i(t)| < β_con. Finally, each robot should avoid collisions with any static obstacle on its path while keeping the robot in front of it within its camera's field of view.
It is assumed that the leading inspection robot executes a feasible and unobstructed trajectory whose linear speed and steering angle are bounded continuous functions. In particular, it is assumed that the absolute value of the steering angle of the front robot is smaller than the reasonable value of π/2 rad. Further, it is assumed that β_con > π/4, which meets the standard specification of a typical camera, where γ_max is the maximum steering angle, w the vehicle width, and α the robot length. It is also easy to verify that these are the minimum requirements for the follower to be able to track the robot in front of it while that robot executes its minimum turning radius. Example: for a common robot with α = 3 m, w = 2 m and γ_max = 0.183π, it is required that d_con > 7.97 m.
In one embodiment, the distance performance function and the angle performance function established in the step S32 are as follows:
wherein: c_u, l_d, l_β and the remaining parameter shown are predefined positive constants; the parameter c_u governs the reduction of the performance function caused by its switching term; l_d and l_β encode the required transient and steady-state performance specifications; sw_1, sw_2, sw_{1,2} and sw_u are switching functions; the two obstacle-distance quantities represent the minimum distances of the left and right obstacles from the leader-follower line; u_i represents the linear speed; the four bound functions are, in order, the lower and upper limits of the distance-error performance function and the upper and lower limits of the angle-error performance function, each a continuous, differentiable function.
It should be noted that the designed performance functions are exponentially decaying functions of time, ensuring that each error always remains within the boundaries of its performance function. In this way, the transient and steady-state performance of the distance and angle errors can be preset. The role of a performance function is to guarantee that the distance error and angle error converge within a bounded range, i.e., the error range is constrained and the error is forced to converge into that bounded range.
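An exponentially decaying bound of the common prescribed-performance form ρ(t) = (ρ_0 − ρ_∞)e^{−lt} + ρ_∞ matches this description (the patent's exact expressions appear only in its figures, so this form is an assumption); a sketch:

```python
import math

def envelope(t, rho0, rho_inf, l):
    """Exponentially decaying performance bound: starts at rho0 and
    converges to the steady-state width rho_inf at rate l."""
    return (rho0 - rho_inf) * math.exp(-l * t) + rho_inf

def error_within(e, t, rho0, rho_inf, l):
    """Prescribed-performance condition -rho(t) < e(t) < rho(t)
    (a symmetric envelope, used here purely for illustration)."""
    r = envelope(t, rho0, rho_inf, l)
    return -r < e < r
```

Since envelope(0, …) equals ρ_0 and the bound approaches ρ_∞ for large t, an error kept inside the envelope inherits both the convergence rate (transient) and the steady-state bound.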
Further, to define the switching functions sw_i, an auxiliary function is introduced that equals 0 if x ≤ 0 and is strictly positive if x > 0. Built from it, sw(x, ε, δ) is a C^1 switching function that equals 0 for all x ∈ (−∞, ε], equals 1 for all x ∈ [ε + δ, +∞), and is increasing, continuous and differentiable, taking values from 0 to 1 for x ∈ (ε, ε + δ).
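Such a C^1 switching function sw(x, ε, δ) can be built from the standard smooth bump function e^{−1/x}; the patent's exact auxiliary function is in a figure, so this construction is an assumption with the stated properties:

```python
import math

def bump(x):
    """Smooth building block: 0 for x <= 0, exp(-1/x) for x > 0
    (a standard choice; the patent's exact expression is assumed)."""
    return math.exp(-1.0 / x) if x > 0 else 0.0

def sw(x, eps, delta):
    """C^1 (in fact smooth) switching function: 0 for x <= eps,
    1 for x >= eps + delta, monotone in between."""
    a = bump(x - eps)
    b = bump(eps + delta - x)
    return a / (a + b)

low = sw(-1.0, 0.0, 0.05)    # fully off
high = sw(0.05, 0.0, 0.05)   # fully on
mid = sw(0.025, 0.0, 0.05)   # mid-transition
```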
In this embodiment, the switching term in the performance function is defined as:
sw u =sw(u i ,0,δ u ),
sw 1 =sw(λ 1 +δ λ ,0,δ λ )-sw(λ 1 ,1,δ λ ),
sw 2 =sw(λ 2 +δ λ ,0,δ λ )-sw(λ 2 ,1,δ λ ),
wherein δ_u, δ_λ and δ_{1,2} are very small predefined positive constants; the two obstacle-distance quantities represent the minimum distances between the right and left obstacles and the straight line connecting leader and follower; λ_1 and λ_2 are the values of the line parameter at the points of that line closest to the right and left obstacles, respectively. Thus, if λ_i ∈ (0, 1), the nearest point lies between the follower and the leader; in other words, the corresponding obstacle may interfere between them. The embodiment therefore uses λ_i to determine whether an obstacle could cause a collision or a disconnection.
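The line parameter λ_i and the obstacle's distance to the leader-follower line follow from an orthogonal projection; a geometric sketch (coordinates are illustrative):

```python
def obstacle_projection(follower, leader, obstacle):
    """Project the obstacle onto the follower-leader line.
    Returns (lam, dist): lam is the line parameter of the closest
    point (lam in (0, 1) means the obstacle projects between the
    two robots), dist is the perpendicular distance to the line."""
    fx, fy = follower
    lx, ly = leader
    ox, oy = obstacle
    dx, dy = lx - fx, ly - fy
    seg2 = dx * dx + dy * dy
    lam = ((ox - fx) * dx + (oy - fy) * dy) / seg2
    dist = abs(dy * (ox - fx) - dx * (oy - fy)) / seg2 ** 0.5
    return lam, dist

# Obstacle halfway between the robots, 1 m off the line: it interferes.
lam, dist = obstacle_projection((0.0, 0.0), (4.0, 0.0), (2.0, 1.0))
interferes = 0.0 < lam < 1.0
```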
During the fire robot train traveling, the performance function is typically driven near zero to keep the corresponding error near the origin. However, in some cases, the error must be changed to meet safety specifications (i.e., avoid collisions and connection maintenance).
Case (1): when the robot speed is zero (u = 0), the robot cannot turn because of the nonholonomic constraint of the model. In a fire scene this may occur, for example, when robot (i−1) executes a circular trajectory of radius d_{i,des} around the robot behind it. The rear robot then has to stop (u = 0) to keep the required distance from the front robot; however, since it cannot turn while stopped, it can no longer track the robot in front. Thus, when u → 0, the control protocol should force the rear robot to increase its speed. This is achieved by driving the two performance functions of the distance error below zero, so that the rear robot moves towards the robot in front of it.
Case (2): when an obstacle lies between two consecutive robots, the performance function of the heading error must be deviated from zero. In particular, if an obstacle interferes between two consecutive robots, a connection break and/or a collision with the obstacle may occur; the rear vehicle must therefore avoid the obstacle while still tracking the front vehicle to maintain connectivity. In this case, a performance function that is offset from zero is selected for the heading error. For example, consider the case where an obstacle to the right of both leader and follower would interfere with them. We then choose to reduce the value of the performance function, ultimately leading to β_i < 0; the rear robot thus performs a left turn and moves away from the obstacle, keeping the visual connection with the leading robot while avoiding the obstacle. Conversely, if an obstacle appears on the left, the value of the corresponding performance function is increased, making the follower turn right. In this way, neither a disconnection nor a collision occurs.
Case (3): a third situation may occur with two simultaneous obstacles, one on the left and one on the right of the rear robot, where the performance function must again be modified. For example, the rear vehicle may need to turn left to maintain the connection with the front robot because of the obstacle on the right, while at the same time needing to turn right to avoid the obstacle on the left. Note that in this case the solution proposed in case (2) can lead to deadlock. To resolve the conflict, the approach of case (1) is chosen here: the follower approaches the leader and moves away from the location where the contradictory event occurs.
Furthermore, the performance function should not exceed certain values to prevent interruption of the visual connection or collision between vehicles.
In this embodiment, under the automatic fire-extinguishing scene, the relative distance and relative angle between the first and second fire-extinguishing robots are measured by the camera, the distance to obstacles is obtained by the laser scanner, and the three special cases above are considered. Then, a dynamic equation of the performance function (a differential form that exhibits its trend of variation) is obtained using a Lipschitz-continuous projection operator, and the preset performance function is further obtained. The three cases are handled by selecting the corresponding function terms through the switching functions, which modify the normal exponential behavior of the performance function. Further:
(1)the term being at u only i <δ u And with u i The [ 0 ] is effective when it becomes [ infinity ], because it is 1-sw u In u i From delta u When changing to 0, smoothly changes from 0 to 1. In other words, as u i This reduces, which results in a reduced performance function of the distance error and falls below zero. Thus (2)The follower approaches the leader, thereby adapting to the case (1) situation.
(2) The second term is effective only when λ_1 ∈ [0, 1], i.e., when the right obstacle interferes between follower and leader, because sw_1 = 1 for λ_1 ∈ [0, 1] and decreases rapidly to 0 as λ_1 goes from 0 to −δ_λ or from 1 to 1 + δ_λ. As the straight line connecting leader and follower approaches the obstacle, the risk of collision and/or disconnection grows; the term then tends to −∞, reducing the performance function of the heading error so that the following fire-fighting robot performs a left turn. An analogous term handles the case of obstacles on the left side.
(3) The third term is effective only when the two previous terms are almost opposed, owing to the behavior of 1 − sw_{1,2}. When the rear robot encounters such a contradictory event, this term causes the performance function of the distance error to decrease and fall below zero, which corresponds to case (3): the follower approaches the leader, implementing the approach proposed above.
It should be noted that, since the switching function introduced in the present embodiment is a smooth switching, even if the normal behavior of the performance function is modified due to the switching in the above case, the performance function is still a continuously differentiable function.
The fire-fighting robot in the fire scene has the following characteristics: 1) high load: its dead weight is several hundred kilograms and, considering the need to drag water hoses and spray with water cannons, its load capacity is also several hundred kilograms; 2) high maneuverability: fire-fighting time requirements are strict, so the robot must be capable of high-speed movement; 3) tracked all-terrain locomotion: suitable for roads, indoor floors, muddy roads, stone pavements and steps; 4) strong climbing and obstacle-crossing ability: 45° slopes and 20 cm obstacles; 5) functions of automatic obstacle avoidance, fire-source search, data collection, image transmission, two-way voice and real-time video return. From the data transmitted by the fire-fighting robots, the fire-source position can be determined, the fire scene understood, and an appropriate rescue scheme formulated.
In addition, in a simulation model the fire-fighting robot can be represented by an oriented particle to show its trajectory and real-time position, but factors such as the actual size of the robot and the safety distance between robots must be considered during obstacle avoidance. If the model is required to match reality more closely, it can be simulated at 1:1 scale.
In addition, the fire scene is highly dynamic and unstructured: dense smoke and obstacles can block communication and the field of view between robots, and, especially considering the hose-dragging process, cross-winding between hoses and between hoses and obstacles must be prevented. Moreover, a water-filled hose is a large load that strongly affects the motion control and stability of the robot. Traditional formation cooperative control algorithms do not consider these factors, so a cooperative control algorithm tailored to these specific conditions is needed.
This embodiment considers the environmental factors of the fire scene more comprehensively and modifies the normal exponential behavior of the performance function by means of the switching functions, addressing the passage and obstacle-avoidance problems of tracked robots over multiple ground conditions as well as the hose-traction problem, so that the fire-fighting robots can reach the fire scene quickly and suppress the fire in time.
In one embodiment, the step S40: and adjusting the difference between the distance error and the distance performance function through a distance control gain to obtain a speed control signal, wherein the speed control signal comprises the following steps of:
controlling the distance error to be within the boundary range of the distance performance function according to a speed distribution control protocol, wherein the formula is as follows:
wherein:representing a speed control signal; />Representing a positive control gain; /> c u Representing the parameters; />Indicating a distance error.
In one embodiment, the step S50: and adjusting the difference between the angle error and the angle performance function through angle control gain to obtain a steering angle control signal, wherein the method comprises the following steps of:
according to a distribution control protocol of angles, controlling the angle errors to be in a boundary range of the angle performance function, wherein the formula is as follows:
wherein:representing a steering angle control signal; />Representing a positive control gain;wherein (1)> Indicating an angle error;respectively indicate->And->Is a first derivative of (a).
The distributed control protocol designed in this embodiment comprises speed control and angle control. The speed control adjusts the distance control gain according to the difference between the robot's distance error and the distance performance function, ensuring that the distance error stays within the upper and lower bounds of the distance performance function. Likewise, the steering angle control adjusts the angle control gain according to the difference between the robot's angle error and the angle performance function, ensuring that the angle error stays within the upper and lower bounds of the angle performance function. At the same time, the switching functions are introduced to modify the controller for the three special cases, meeting practical scene requirements.
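To see the two loops acting together, the following simplified closed-loop simulation uses a plain proportional surrogate for the speed command (not the patent's performance-function-based law) in the straight-line case, where the follower regains its desired 5 m spacing behind a leader moving at constant speed; all numeric values are illustrative:

```python
def simulate(d0, d_des=5.0, u_leader=1.0, k_d=1.0, dt=0.01, steps=1000):
    """Leader drives straight at u_leader; the follower's speed command
    is u_leader + k_d * (d - d_des), a proportional surrogate for the
    distance loop. Returns the final spacing d after steps*dt seconds."""
    d = d0
    for _ in range(steps):
        u_follower = u_leader + k_d * (d - d_des)
        d += (u_leader - u_follower) * dt  # spacing dynamics on a line
        # beta stays 0 on a straight line, so the steering loop is idle
    return d

d_final = simulate(d0=6.0)  # follower starts 1 m too far back
```

The spacing error decays exponentially at rate k_d, mirroring on this toy case the convergence behavior that the performance functions enforce in the full protocol.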
It should be noted that the desired transient and steady-state performance of the system, as well as the collision-avoidance and connectivity-maintenance specifications, are achieved solely by selecting the correct performance functions, which simplifies the selection of the control gains.
In particular, the choice of the control gains affects both the evolution of the errors inside the corresponding performance envelopes and the characteristics of the control input (e.g., decreasing the gain values increases oscillatory behavior within the prescribed performance range, which improves with higher values, but at the cost of larger amplitude and rate in the controller output). Additional fine-tuning may therefore be required in real scenarios to keep the required control input signals within the feasible range of the actuators.
Since the values of the control gains influence the convergence speed of the distance and azimuth errors and the controller input, one can usually tune one parameter at a time from small to large in simulation, observe the change of the performance curves, and select suitable parameters. In practical applications there are more influencing factors, such as the mechanical structure of the fire-fighting robot body, disturbances and environmental factors, which affect the robot's motion, so practice cannot match simulation exactly; but the qualitative effect of parameter changes on performance is roughly the same, and the parameters must simply be tuned iteratively until a suitable set is found.
In one embodiment, the fire-fighting robot can use particles with directions to represent the motion track and real-time position in a simulation model, but factors such as the actual size of the robot and the safety distance between robots need to be considered when obstacle avoidance is performed. If the model is required to be more in line with the actual situation, the model can be simulated according to the size of 1:1.
In an embodiment, as shown in fig. 3, when the inspection robot detects a fire source, the method further includes:
controlling the inspection robot to cruise around the fire source on a circular track, and using its on-board wind speed and direction transducer to scout the wind, thereby determining the spreading direction and speed of the fire;
constructing a three-dimensional map around the fire source and around the fire extinguishing robot travel route by using the self-carried laser radar of the inspection robot, carrying out three-dimensional coordinate assignment on the three-dimensional map, and determining the coordinate position of the fire source on the three-dimensional map;
and controlling the fire extinguishing robot to conduct path planning according to the three-dimensional map and the coordinate position of the fire source on the three-dimensional map, and conducting fire extinguishing on the fire source by moving along the planned path.
It should be noted that a robot swarm has the characteristics of a typical distributed system: robots of limited individual capability perform complex tasks through interaction and coordination. Compared with a traditional single intelligent robot, a robot swarm has clear advantages in flexibility, cost control and robustness. The cost advantage comes from division of labor: the individual functional modules are distributed over the robots of the group. Specifically, a group-type fire-fighting rescue robot team is typically equipped with one inspection robot, two fire-extinguishing robots and one smoke-exhausting robot.
After a fire breaks out, the group-type fire-fighting robots receive the alarm signal; no fewer than two fire-extinguishing robots must set out simultaneously, and the robot types include fire-extinguishing robots and smoke-exhausting robots. The robots approach the fire source and form a combat team during autonomous travel.
In this embodiment, the characteristics of the fire-fighting robot at the disaster site are: 1) high load: its dead weight is several hundred kilograms and, considering the need to drag water hoses and spray with water cannons, its load capacity is also several hundred kilograms; 2) high maneuverability: fire-fighting time requirements are strict, so the robot must be capable of high-speed movement; 3) tracked all-terrain locomotion: suitable for roads, indoor floors, muddy roads, stone pavements and steps; 4) strong climbing and obstacle-crossing ability: 45° slopes and 20 cm obstacles; 5) functions of automatic obstacle avoidance, fire-source search, data collection, image transmission, two-way voice and real-time video return. From the data transmitted by the fire-fighting robots, the fire-source position can be determined, the fire scene understood, and an appropriate rescue scheme formulated.
The fire disaster site is highly dynamic and unstructured: dense smoke and obstacles can block communication and the field of view between robots, and, especially considering the hose-dragging process, cross-winding between hoses and between hoses and obstacles must be prevented. Moreover, a water-filled hose is a large load with a great influence on the motion control and stability of the robot, and traditional formation cooperative control algorithms do not consider these factors.
The distributed control protocol designed in this embodiment targets chain-structure formations: the formation can be maintained at all times without reconfiguration, the fire scene can be reached quickly, and travel time and energy consumption are saved. The protocol uses only a small amount of information for interaction and has a smaller computational load than centralized control; the distributed controller is designed for fire-fighting robot formations with communication constraints, safety zones and line-of-sight constraints, so that the distance and angle errors satisfy the preset transient performance (settling time) and steady-state performance (convergence speed, steady-state error). In short, communication maintenance, collision avoidance and azimuth limitation are solved simultaneously.
In addition, as shown in fig. 4, a second embodiment of the present invention proposes a group collaborative fire robot fire scene internal grouping queue driving control system, the system includes a patrol robot and at least one fire extinguishing robot, the patrol robot and the fire extinguishing robot form a queue in a chain structure, the patrol robot is located at a queue head, and the patrol robot and the fire extinguishing robot are loaded with a laser radar, a camera and a distributed controller, wherein: the distributed controller comprises an error calculation module 10, a performance function establishment module 20, a speed control module 30, an angle control module 40 and a running control module 50;
the laser radar and the camera are used to acquire, respectively, the relative distance and the relative angle between the robot and the nearest fire-fighting robot in front of it;
the error calculation module 10 is configured to calculate a distance error and an angle error based on the relative distance and the relative angle;
the performance function establishing module 20 is configured to establish a distance performance function and an angle performance function based on constraint conditions, the distance error and the angle error;
the speed control module 30 is configured to adjust a difference between the distance error and the distance performance function by a distance control gain to obtain a speed control signal;
The angle control module 40 is configured to adjust a difference between the angle error and the angle performance function by using an angle control gain to obtain a steering angle control signal;
the driving control module 50 is configured to control the travel of the robot based on the speed control signal and the steering angle control signal.
In one embodiment, the distance performance function and the angle performance function established by the performance function establishing module are formulated as follows:
wherein: c_u, l_d, l_β and the remaining parameter shown are predefined positive constants; the parameter c_u governs the reduction of the performance function caused by its switching term; l_d and l_β encode the required transient and steady-state performance specifications; sw_1, sw_2, sw_{1,2} and sw_u are switching functions; the two obstacle-distance quantities represent the minimum distances of the left and right obstacles from the leader-follower line; u_i represents the linear speed; the four bound functions are, in order, the lower and upper limits of the distance-error performance function and the upper and lower limits of the angle-error performance function, each a continuous, differentiable function.
In an embodiment, the switching function is:
sw_u = sw(u_i, 0, δ_u),
sw_1 = sw(λ_1 + δ_λ, 0, δ_λ) - sw(λ_1, 1, δ_λ),
sw_2 = sw(λ_2 + δ_λ, 0, δ_λ) - sw(λ_2, 1, δ_λ),
wherein: δ_u, δ_λ and δ_{1,2} are predefined positive constants; λ_1 and λ_2 denote the values of the line parameter at the points of the leader-follower line closest to the right-hand obstacle and the left-hand obstacle, respectively.
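A switching function sw(x, x̄, δ) of this kind typically rises smoothly from 0 to 1 over a transition band of width δ, so that obstacle-avoidance terms engage and disengage without discontinuities. The implementation below is an assumed form (a C¹ cosine blend), since the exact expression is not reproduced here:

```python
import math

def sw(x, x_bar, delta):
    """Smooth switch: 0 for x <= x_bar, 1 for x >= x_bar + delta,
    with a continuously differentiable transition in between (assumed form)."""
    if x <= x_bar:
        return 0.0
    if x >= x_bar + delta:
        return 1.0
    # C^1 cosine blend over the transition band
    return 0.5 - 0.5 * math.cos(math.pi * (x - x_bar) / delta)

# Composite switches built as in the text, e.g.
# sw_1 = sw(lam1 + d_lam, 0, d_lam) - sw(lam1, 1, d_lam)
d_lam = 0.2
lam1 = 0.5
sw_1 = sw(lam1 + d_lam, 0.0, d_lam) - sw(lam1, 1.0, d_lam)
```

The difference-of-switches construction yields a window function: it is nonzero only while the line parameter λ lies in the band where the corresponding obstacle actually intersects the leader-follower segment.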
In one embodiment, the speed control module employs a distributed control protocol that is:
wherein: the respective symbols denote the speed control signal, a positive control gain, the parameter c_u, and the distance error.
In an embodiment, the angle control module adopts a distributed control protocol as follows:
wherein: the respective symbols denote the steering angle control signal, a positive control gain, and the angle error; the remaining terms denote the first derivatives of the upper and lower limit values of the angle-error performance function.
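The gain-based adjustment of the difference between each error and its performance function can be sketched with the standard prescribed-performance error transformation: the constrained error is mapped onto the whole real line through a logarithm, and a proportional gain is applied to the transformed error. The logarithmic form and the gain value below are assumptions, not the patent's exact control law:

```python
import math

def ppc_control(e, rho_lower, rho_upper, k):
    """Map an error constrained to (rho_lower, rho_upper) onto the real line
    and apply a proportional gain; the output grows without bound as the
    error approaches either performance limit (assumed standard form)."""
    assert rho_lower < e < rho_upper, "error must start inside its envelope"
    xi = (e - rho_lower) / (rho_upper - rho_lower)  # normalized to (0, 1)
    eps = math.log(xi / (1.0 - xi))                 # transformed error
    return -k * eps

# Steering command from an angle error lying inside its envelope:
gamma = ppc_control(e=0.1, rho_lower=-1.0, rho_upper=1.0, k=0.8)
```

Because the transformed error diverges at the envelope boundaries, any bounded-gain controller acting on it automatically keeps the raw error strictly inside the performance bounds, which is the mechanism the distance and angle control modules rely on.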
It should be noted that, for other embodiments or implementations of the grouping-type collaborative fire-fighting robot fire-scene internal grouping queue driving control system of the present invention, reference may be made to the method embodiments above; details are not repeated here.
It should be noted that the logic and/or steps represented in the flowcharts or otherwise described herein may be considered an ordered listing of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection having one or more wires (an electronic device), a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program can be electronically captured (for instance, via optical scanning of the paper or other medium), then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one, or a combination, of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and are not to be construed as limiting the invention; changes, modifications, substitutions, and variations may be made to the above embodiments by those of ordinary skill in the art within the scope of the invention.
Claims (9)
1. A grouping-type collaborative fire-fighting robot fire-scene internal grouping queue driving control method, characterized in that the fire-fighting robots comprise an inspection robot and fire-extinguishing robots, the queue formed by the fire-fighting robots is of a chain structure, the inspection robot is located at the head of the queue, the fire-extinguishing robots follow the inspection robot, and each fire-fighting robot is loaded with a laser radar and a camera; for each fire-fighting robot, the method comprises the following steps:
acquiring the relative distance and the relative angle between the fire-fighting robot and the nearest fire-fighting robot in front of it;
calculating a distance error and an angle error based on the relative distance and the relative angle;
establishing a distance performance function and an angle performance function based on constraint conditions, the distance error and the angle error;
adjusting a difference between the distance error and the distance performance function by a distance control gain to obtain a speed control signal, comprising:
controlling the distance error to be within the boundary range of the distance performance function according to a distributed speed-control protocol, wherein the formula is as follows:
wherein: the respective symbols denote the speed control signal, a positive control gain, the parameter c_u, the distance error, and the lower and upper limit values of the distance-error performance function; sw_1, sw_2 and sw_u are switching functions; the two obstacle-distance terms denote the minimum distance from the left-hand obstacle and from the right-hand obstacle, respectively, to the line connecting the leader and the follower;
adjusting the difference between the angle error and the angle performance function through an angle control gain to obtain a steering angle control signal;
and controlling the robot to run on the basis of the speed control signal and the steering angle control signal.
2. The grouping-type collaborative fire-fighting robot fire-scene internal grouping queue driving control method according to claim 1, wherein the kinematic model of the fire-fighting robot is:
wherein: for i = 1, ..., N, N denotes the number of robots; x_i, y_i, θ_i denote the position and orientation of the i-th robot; u_i, γ_i and α denote its linear velocity, steering angle and length, respectively; the dotted variables denote the first derivatives of x_i, y_i and θ_i.
3. The grouping-type collaborative fire-fighting robot fire-scene internal grouping queue driving control method according to claim 1, wherein the formula for calculating the distance error and the angle error based on the relative distance and the relative angle is expressed as:
wherein: the first symbol denotes the distance error and the second the angle error; d_i(t) and β_i(t) denote the relative distance and relative angle between two consecutive fire-fighting robots, respectively; d_{i,des} is the preset distance between the i-th fire-fighting robot and the (i-1)-th fire-fighting robot in front of it; and N denotes the number of robots.
4. The grouping-type collaborative fire-fighting robot fire-scene internal grouping queue driving control method according to claim 1, wherein establishing the distance performance function and the angle performance function based on the constraint conditions, the distance error and the angle error comprises:
differentiating the distance error and the angle error to obtain a distance-error dynamic equation and an angle-error dynamic equation, respectively;
setting constraint conditions, wherein the constraint conditions comprise a safety constraint condition and an initial constraint condition; the safety constraint condition is d_col < d_i(t) < d_con and |β_i(t)| < β_con, and the initial constraint condition requires the initial distance error and angle error to lie within the corresponding performance bounds; d_con and β_con denote the distance and angle limits at which a connection break occurs, d_col denotes the minimum safety distance between two consecutive robots, and d_{i,des} is the preset distance between the i-th fire-fighting robot and the (i-1)-th fire-fighting robot in front of it;
and respectively establishing the distance performance function and the angle performance function based on the distance error dynamic equation, the angle error dynamic equation and the constraint condition.
5. The grouping-type collaborative fire-fighting robot fire-scene internal grouping queue driving control method according to claim 4, wherein the distance performance function and the angle performance function are formulated as follows:
wherein: c_u, l_d and l_β, among other envelope parameters, are predefined positive constants; the parameter c_u governs the reduction of the performance function caused by the obstacle-avoidance term; l_d and l_β encode the required transient and steady-state performance specifications; sw_u is a switching function; u_i denotes the linear velocity; the remaining symbols denote the upper and lower limit values of the angle-error performance function, each a continuous, differentiable function of time.
6. The grouping-type collaborative fire-fighting robot fire-scene internal grouping queue driving control method according to claim 5, wherein the switching function is:
sw_u = sw(u_i, 0, δ_u),
sw_1 = sw(λ_1 + δ_λ, 0, δ_λ) - sw(λ_1, 1, δ_λ),
sw_2 = sw(λ_2 + δ_λ, 0, δ_λ) - sw(λ_2, 1, δ_λ),
wherein: δ_u, δ_λ and δ_{1,2} are predefined positive constants; λ_1 and λ_2 denote the values of the line parameter at the points of the leader-follower line closest to the right-hand obstacle and the left-hand obstacle, respectively.
7. The grouping-type collaborative fire-fighting robot fire-scene internal grouping queue driving control method according to claim 6, wherein adjusting the difference between the angle error and the angle performance function by the angle control gain to obtain the steering angle control signal comprises:
controlling the angle error to be within the boundary range of the angle performance function according to a distributed angle-control protocol, wherein the formula is as follows:
wherein: the respective symbols denote the steering angle control signal, a positive control gain, and the angle error; the remaining terms denote the first derivatives of the upper and lower limit values of the angle-error performance function; and α is the robot length.
8. The grouping-type collaborative fire-fighting robot fire-scene internal grouping queue driving control method according to claim 1, wherein, when the inspection robot detects a fire source, the method further comprises:
controlling the inspection robot to cruise around the fire source on a circular track and to perform wind speed and direction reconnaissance with its on-board wind speed and direction transducer, so as to determine the spreading direction and speed of the fire;
constructing a three-dimensional map of the area around the fire source and along the fire-extinguishing robot's travel route by using the inspection robot's on-board laser radar, assigning three-dimensional coordinates to the map, and determining the coordinate position of the fire source on the three-dimensional map;
and controlling the fire-extinguishing robot to perform path planning according to the three-dimensional map and the coordinate position of the fire source, and to move along the planned path to extinguish the fire source.
9. A grouping-type collaborative fire-fighting robot fire-scene internal grouping queue driving control system, characterized in that the system comprises an inspection robot and at least one fire-extinguishing robot; the inspection robot and the fire-extinguishing robot form a queue of chain structure with the inspection robot at the head of the queue; and the inspection robot and the fire-extinguishing robot are each loaded with a laser radar, a camera and a distributed controller, wherein: the distributed controller comprises an error calculation module, a performance function establishing module, a speed control module, an angle control module and a driving control module;
the laser radar and the camera are used to acquire, respectively, the relative distance and the relative angle between the robot and the nearest fire-fighting robot in front of it;
the error calculation module is used for calculating a distance error and an angle error based on the relative distance and the relative angle;
the performance function establishing module is used for establishing a distance performance function and an angle performance function based on constraint conditions, the distance error and the angle error;
the speed control module is configured to adjust a difference between the distance error and the distance performance function through a distance control gain, to obtain a speed control signal, and includes:
controlling the distance error to be within the boundary range of the distance performance function according to a distributed speed-control protocol, wherein the formula is as follows:
wherein: the respective symbols denote the speed control signal, a positive control gain, the parameter c_u, the distance error, and the lower and upper limit values of the distance-error performance function; sw_1, sw_2 and sw_u are switching functions; the two obstacle-distance terms denote the minimum distance from the left-hand obstacle and from the right-hand obstacle, respectively, to the line connecting the leader and the follower;
The angle control module is used for adjusting the difference between the angle error and the angle performance function through angle control gain to obtain a steering angle control signal;
and the running control module is used for controlling the robot to run on the basis of the speed control signal and the steering angle control signal.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211180099.9A CN115542904B (en) | 2022-09-27 | 2022-09-27 | Grouping type collaborative fire-fighting robot fire scene internal grouping queue driving control method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN115542904A CN115542904A (en) | 2022-12-30 |
| CN115542904B true CN115542904B (en) | 2023-09-05 |
Family
ID=84728640
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202211180099.9A Active CN115542904B (en) | 2022-09-27 | 2022-09-27 | Grouping type collaborative fire-fighting robot fire scene internal grouping queue driving control method |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN115542904B (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119916724A (en) * | 2025-01-15 | 2025-05-02 | 杭州海康机器人股份有限公司 | Robot control method, device, equipment and storage medium |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108469823A (en) * | 2018-04-04 | 2018-08-31 | 浙江大学 | A kind of Mobile Robot Formation's follower method based on homography |
| CN108983786A (en) * | 2018-08-08 | 2018-12-11 | 华南理工大学 | A kind of communication context constrains the formation control method of lower mobile robot |
| CN109857115A (en) * | 2019-02-27 | 2019-06-07 | 华南理工大学 | A kind of finite time formation control method of the mobile robot of view-based access control model feedback |
| CN110362075A (en) * | 2019-06-26 | 2019-10-22 | 华南理工大学 | A kind of unmanned boat output feedback formation control design method with default capabilities |
| CN110605973A (en) * | 2019-09-18 | 2019-12-24 | 北京理工大学 | A multi-axis distributed electric drive vehicle handling stability control method based on layered structure |
| CN113189979A (en) * | 2021-04-02 | 2021-07-30 | 大连海事大学 | Distributed queue finite time control method of unmanned ship |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9146561B2 (en) * | 2013-12-03 | 2015-09-29 | King Fahd University Of Petroleum And Minerals | Robotic leader-follower navigation and fleet management control method |
- 2022
  - 2022-09-27 CN CN202211180099.9A patent/CN115542904B/en active Active
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108469823A (en) * | 2018-04-04 | 2018-08-31 | 浙江大学 | A kind of Mobile Robot Formation's follower method based on homography |
| CN108983786A (en) * | 2018-08-08 | 2018-12-11 | 华南理工大学 | A kind of communication context constrains the formation control method of lower mobile robot |
| CN109857115A (en) * | 2019-02-27 | 2019-06-07 | 华南理工大学 | A kind of finite time formation control method of the mobile robot of view-based access control model feedback |
| CN110362075A (en) * | 2019-06-26 | 2019-10-22 | 华南理工大学 | A kind of unmanned boat output feedback formation control design method with default capabilities |
| CN110605973A (en) * | 2019-09-18 | 2019-12-24 | 北京理工大学 | A multi-axis distributed electric drive vehicle handling stability control method based on layered structure |
| CN113189979A (en) * | 2021-04-02 | 2021-07-30 | 大连海事大学 | Distributed queue finite time control method of unmanned ship |
Non-Patent Citations (1)
| Title |
|---|
| Zhang Xin. Multi-UAV cooperative target tracking control based on nonlinear guidance. Command Information System and Technology. 2019, Vol. 10, No. 4 (full text). * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN115542904A (en) | 2022-12-30 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN109375632B (en) | Real-time trajectory planning method for automatic driving vehicle | |
| CN114013443B (en) | Automatic driving vehicle lane change decision control method based on hierarchical reinforcement learning | |
| CN102591332B (en) | Device and method for local path planning of pilotless automobile | |
| CN111506081B (en) | A robot trajectory tracking method, system and storage medium | |
| CN113126644B (en) | 3D track tracking method of UAV based on adaptive line of sight method | |
| EP2685338B1 (en) | Apparatus and method for lateral control of a host vehicle during travel in a vehicle platoon | |
| CN112148002A (en) | Local trajectory planning method, system and device | |
| Wit | Vector pursuit path tracking for autonomous ground vehicles | |
| CN115167440B (en) | Virtual pilot-following-based multi-robot formation control method | |
| CN108222093A (en) | A kind of autonomous soil-shifting robot | |
| Sisto et al. | A fuzzy leader-follower approach to formation control of multiple mobile robots | |
| CN118311968B (en) | Unmanned ship formation tracking and obstacle avoidance control method | |
| CN114879671A (en) | Unmanned ship trajectory tracking control method based on reinforcement learning MPC | |
| Bom et al. | A global control strategy for urban vehicles platooning relying on nonlinear decoupling laws | |
| CN112462777A (en) | Ship formation path active coordination system and method considering maneuverability difference | |
| CN118651244B (en) | Motion control method of unmanned tracked vehicle under multidirectional ramp section | |
| CN115542904B (en) | Grouping type collaborative fire-fighting robot fire scene internal grouping queue driving control method | |
| CN116009530A (en) | Path planning method and system for self-adaptive tangential obstacle avoidance | |
| CN117826590A (en) | Unmanned vehicle formation control method and system based on front-follow topology structure | |
| CN116382283A (en) | A Target Tracking Method for Unmanned Submarine Based on Extended Kalman Filter Prediction | |
| AU2021448614A9 (en) | Precise stopping system and method for multi-axis flatbed vehicle | |
| CN119902432B (en) | Cluster path planning method and system based on improved A-star algorithm and reinforcement learning | |
| CN115933697A (en) | Crawler-type intelligent transport vehicle stable obstacle avoidance control method based on deep learning | |
| CN119902530A (en) | Design method of human-machine collaborative anti-collision path tracking control based on collision risk | |
| CN117850413B (en) | A vehicle control method based on "broken line" path |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| GR01 | Patent grant | | |