WO2018143003A1 - ロボットパス生成装置及びロボットシステム - Google Patents
Robot path generation device and robot system
- Publication number
- WO2018143003A1 (PCT/JP2018/001917)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- path
- robot
- data
- path generation
- generated
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1628—Programme controls characterised by the control loop
- B25J9/163—Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/39—Robotics, robotics to robotics hand
- G05B2219/39298—Trajectory learning
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/40—Robotics, robotics mapping to robotics vision
- G05B2219/40428—Using rapidly exploring random trees algorithm RRT-algorithm
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/40—Robotics, robotics mapping to robotics vision
- G05B2219/40429—Stochastic, probabilistic generation of intermediate points
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/40—Robotics, robotics mapping to robotics vision
- G05B2219/40462—Constant consumed energy, regenerate acceleration energy during deceleration
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/40—Robotics, robotics mapping to robotics vision
- G05B2219/40463—Shortest distance in time, or metric, time optimal
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/40—Robotics, robotics mapping to robotics vision
- G05B2219/40465—Criteria is lowest cost function, minimum work path
Definitions
- The disclosed embodiments relate to a robot path generation device and a robot system.
- Patent Document 1 describes a method of generating a robot teaching path and a calibration path by controller simulation.
- Automatically generating the path by simulation, as in the above prior art, is preferable, but there is still room for improvement in generating a higher-quality path within a practical time.
- The present invention has been made in view of such problems, and an object thereof is to provide a robot path generation device and a robot system that can generate a more appropriate path in a practical time.
- According to one aspect, a robot path generation device is applied that has a data set holding unit for holding a data set in which a plurality of path data generated based on motion constraint conditions of the robot are associated with evaluation value data, and a path generation unit that generates the path of the robot between an arbitrarily set start point and set end point based on a result of a machine learning process based on the data set.
- According to another aspect, a robot system is applied that includes a robot, the robot path generation device, and a robot controller that controls the operation of the robot based on a generation result of the robot path generation device.
- FIG. 1 shows an example of a schematic system block configuration of the robot system of the present embodiment.
- In FIG. 1, the robot system 1 includes a host controller 2, a robot controller 3, a servo amplifier 4, and a robot 5.
- In the example of this embodiment, work is performed on an automobile frame F as the target workpiece, but the system may also be applied to other target workpieces such as other mechanical structures and to other kinds of work such as part assembly, paint spraying, or inspection using camera images.
- The host controller 2 (input device) is, for example, a general-purpose personal computer equipped with a CPU, ROM, RAM, an operation unit, a display unit, and the like (not shown), and manages the operation of the robot system 1 as a whole. Specifically, it inputs to the robot controller 3 a work command based on the various settings and commands entered by the operator via the operation unit, together with 3D model data (CAD data, CAM data, etc.) representing the three-dimensional structures of the target workpiece (the frame F in this example) and the robot 5.
- The robot controller 3 performs various processes for carrying out the input work command based on the 3D model data input from the host controller 2, and outputs drive commands to the servo amplifier 4.
- The robot controller 3 includes a work planning unit 31, a trajectory planning unit 32, and an inverse kinematics calculation unit 33.
- Based on the 3D model data and the work command input from the host controller 2, the work planning unit 31 plans the specific work contents to be performed by the robot 5 (the sequence of movements of the operating position of the end effector 6 described later and the posture at each position), and outputs the resulting set start point, set end point, and set posture to the trajectory planning unit 32.
- The set start point, set end point, and set posture are commands indicating, in the work space coordinates XYZ of the robot 5, the start point and end point through which the reference point of the end effector 6 is to be moved and the posture of the end effector 6 at the end point.
- The work planning unit 31 also passes on to the trajectory planning unit 32 the same 3D model data that was input from the host controller 2.
- The work planning unit 31 additionally outputs an operation command to the end effector 6, although this is omitted from the drawing.
- Based on the set start point, set end point, set posture, and 3D model data input from the work planning unit 31, the trajectory planning unit 32 outputs to the inverse kinematics calculation unit 33 appropriate via points and via postures for moving the end effector 6 from the set start point to the set end point and controlling its posture to the set posture without the robot 5 interfering with the target workpiece. In the example of this embodiment, the internal processing of the trajectory planning unit 32 is performed by a neural network trained by a machine learning process; the processing details and methods are described later. The trajectory planning unit 32 corresponds to the path generation unit described in the claims.
- The inverse kinematics calculation unit 33 calculates the target rotation angle of each drive shaft motor (not shown) required to move the end effector 6 from its current position and posture to the via point and via posture input from the trajectory planning unit 32, and outputs the corresponding drive commands.
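The patent does not disclose a particular inverse kinematics algorithm, so the following is only an illustrative sketch of how target joint angles for a commanded via point could be computed numerically, here for a simplified planar two-link arm using a damped least-squares Jacobian step; the link lengths, damping factor, and function names are assumptions, not taken from the patent.

```python
import numpy as np

def forward_kinematics(theta, link_lengths=(0.5, 0.4)):
    """Planar 2-link arm: joint angles (rad) -> end-effector position (x, y)."""
    l1, l2 = link_lengths
    x = l1 * np.cos(theta[0]) + l2 * np.cos(theta[0] + theta[1])
    y = l1 * np.sin(theta[0]) + l2 * np.sin(theta[0] + theta[1])
    return np.array([x, y])

def jacobian(theta, link_lengths=(0.5, 0.4)):
    """Analytic Jacobian of the planar 2-link forward kinematics."""
    l1, l2 = link_lengths
    s1, c1 = np.sin(theta[0]), np.cos(theta[0])
    s12, c12 = np.sin(theta[0] + theta[1]), np.cos(theta[0] + theta[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def solve_ik(target_xy, theta0, iters=200, damping=0.05):
    """Iterate damped least-squares steps until the via point is reached."""
    theta = np.array(theta0, dtype=float)
    for _ in range(iters):
        err = np.asarray(target_xy) - forward_kinematics(theta)
        if np.linalg.norm(err) < 1e-4:
            break
        J = jacobian(theta)
        # Damped pseudo-inverse step: (J^T J + lambda^2 I)^-1 J^T err
        dtheta = np.linalg.solve(J.T @ J + damping**2 * np.eye(2), J.T @ err)
        theta += dtheta
    return theta

# Example: target joint angles for a via point at (0.6, 0.3)
print(solve_ik([0.6, 0.3], theta0=[0.3, 0.3]))
```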
- The division of processing among the work planning unit 31, the trajectory planning unit 32, the inverse kinematics calculation unit 33, and so on is not limited to this example; the processing may, for example, be handled by a smaller number of processing units (for example, a single processing unit) or by further subdivided processing units.
- The robot controller 3 may be implemented in software by a program executed by the CPU 901 (see FIG. 12) described later, or part or all of it may be implemented in hardware by actual devices such as an ASIC, an FPGA, or other electric circuits (a neuromorphic chip, etc.).
- The servo amplifier 4 controls the drive power supplied to each drive shaft motor (not shown) and to the end effector 6 of the robot 5, based on the drive commands input from the inverse kinematics calculation unit 33 of the robot controller 3.
- In the illustrated example of this embodiment, the robot 5 is a manipulator arm (six-axis robot) having six joint axes.
- The end effector 6 of this example is attached to the arm tip 5a, and position control and posture control of the end effector 6 are possible in the work space coordinates XYZ set with reference to the robot 5.
- The host controller 2 and the robot controller 3 correspond to the robot path generation device described in the claims.
- In general, a robot operates by driving its joints with drive motors on roughly three or more axes. When the robot 5 is made to perform a predetermined work operation on the target workpiece, the robot controller 3 specifies and executes a path (route, trajectory: an ordered sequence of via points) through which a movement reference point such as the end effector 6 or the arm tip 5a passes from a start point to an end point. This path is desirably set under motion constraint conditions, such as the robot and the target workpiece not coming into interfering contact; conventionally, robot paths have been generated by manual teaching or by path planning using random sampling.
- In this embodiment, the robot controller 3 is provided with the trajectory planning unit 32, which executes a machine learning process based on a trajectory-planning-unit learning data set and generates a path of the robot between an arbitrarily set start point and set end point.
- In this learning data set, a plurality of path data generated based on the motion constraint conditions of the robot 5 defined by the 3D model data are associated with evaluation value data that serve as a measure under a predetermined evaluation criterion for each of the path data.
- Because the trajectory planning unit 32 performs machine learning on a data set of path data generated by simulation or the like based on the motion constraint conditions, unlike so-called reinforcement learning it can learn from a data set that is guaranteed to avoid interfering contact with the work environment, including the robot 5 and the target workpiece. In addition, since the data set on which the trajectory planning unit 32 learns also includes evaluation value data corresponding to each path data, a path that is appropriate with respect to the evaluation criterion can be generated. The above method is described below in order.
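As a concrete, non-normative illustration of what one record of such a learning data set might look like, the sketch below pairs path data (an ordered sequence of via points and via postures between a set start point and set end point) with an evaluation value such as the movement power consumption; the field names and values are assumptions, not taken from the patent.

```python
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]  # 3-D position or 3-D posture vector

@dataclass
class PathRecord:
    """One learning data set entry: path data plus its evaluation value."""
    start_point: Vec3          # set start point Ps
    start_posture: Vec3        # posture at Ps
    end_point: Vec3            # set end point Pe
    end_posture: Vec3          # set posture Ve
    via_points: List[Vec3]     # ordered via points P1 ... Pn
    via_postures: List[Vec3]   # ordered via postures V1 ... Vn
    evaluation_value: float    # e.g. total movement power consumption W

# Example record with two via points and a power-consumption score
record = PathRecord(
    start_point=(0.0, 0.0, 0.3), start_posture=(0.0, 0.0, 1.0),
    end_point=(0.8, 0.2, 0.5),   end_posture=(0.0, 1.0, 0.0),
    via_points=[(0.2, 0.0, 0.35), (0.5, 0.1, 0.45)],
    via_postures=[(0.0, 0.0, 1.0), (0.0, 0.5, 0.8)],
    evaluation_value=42.7,
)
print(record.evaluation_value)
```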
- FIG. 2 shows a work environment map for explaining path planning by waypoint connection in the example of the present embodiment.
- The work environment map shown here is set by the trajectory planning unit 32 and, as described above, is a plan view in which the six-dimensional drivable space corresponding to the six drive axes of the robot 5 is dimensionally compressed into the two vertical and horizontal dimensions.
- In general, driving the six axes makes it possible to control the three-dimensional position and three-dimensional posture of the end effector 6; that is, the position coordinates of a single point in the work environment map express both the three-dimensional position and the three-dimensional posture of the end effector 6 as state information.
- In this work environment map, based on the 3D model data input from the work planning unit 31, a plurality of entry-prohibition regions X are set into which entry is prohibited because parts of the robot 5 would come into interfering contact with the frame F of the target workpiece. In the figure, to avoid cluttering the illustration, the entry-prohibition regions X are represented simply by three simple geometric figures.
- The trajectory planning unit 32 sets, on this work environment map, the set start point Ps and set end point Pe (including postures) input from the work planning unit 31, and generates a movement route that avoids entering the entry-prohibition regions X, that is, a path from the set start point Ps to the set end point Pe.
- In the example of this embodiment, this path is generated by connecting a large number of via points found by a branching search in simulation: the via points Pn,m that can be traversed in series from the set start point Ps and that finally reach the vicinity of the set end point Pe are connected, together with the set end point Pe, in their ordered sequence to generate a path T1.
- The path T1 of the robot 5 is thus generated as an ordered sequence of via points, and by performing inverse kinematics calculations on each via point Pn (and via posture) in order from the set start point Ps to the set end point Pe, the movement of the robot 5 can be controlled along a trajectory (and postures) corresponding to the path T1.
- Each via point Pn on the path T1 is located in an operation region (motion constraint condition) of the work environment map that does not enter the entry-prohibition regions X (regions that would interfere with the surrounding work environment); that is, interfering contact between the parts of the robot 5 and the frame F is reliably avoided on the trajectory corresponding to the path T1.
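The entry-prohibition regions X are only shown schematically in the patent; as a hedged sketch, a check that every via point of a candidate path stays outside such regions could look like the following, with the regions approximated as axis-aligned boxes purely for illustration.

```python
from typing import List, Sequence, Tuple

Point = Tuple[float, float, float]
Box = Tuple[Point, Point]  # (min corner, max corner) of an entry-prohibition region

def inside_box(p: Point, box: Box) -> bool:
    """True if point p lies inside the axis-aligned box."""
    lo, hi = box
    return all(lo[i] <= p[i] <= hi[i] for i in range(3))

def path_is_admissible(via_points: Sequence[Point], forbidden: List[Box]) -> bool:
    """A path satisfies the motion constraint if no via point enters any region X."""
    return not any(inside_box(p, box) for p in via_points for box in forbidden)

# Example: one forbidden box and a two-point path that avoids it
forbidden_regions = [((0.3, 0.3, 0.0), (0.6, 0.6, 1.0))]
print(path_is_admissible([(0.1, 0.1, 0.2), (0.8, 0.7, 0.4)], forbidden_regions))  # True
```

In practice the segments between consecutive via points, and the full robot geometry rather than a single reference point, would also have to be checked.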
- In the example of this embodiment, power consumption W is used as the evaluation criterion, and a path T1 whose total power consumption W over the entire path is lower is evaluated more highly.
- However, since the position of each via point Pn,m is basically generated at random, there is room to improve the quality of the path T1 from the viewpoint of the evaluation criterion.
- Therefore, via points Pn,m corresponding to various combinations of the set start point Ps and set end point Pe (and set posture) are generated, a large number of learning data sets are created together with the evaluation value data corresponding to each of them, and these are stored in the database (data set holding unit) 34.
- By performing machine learning with such a large number of learning data sets, the trajectory planning unit 32 becomes able to successively generate highly evaluated via points Pn,m and, by connecting them, to generate a highly evaluated path T1.
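A minimal sketch of how such a collection of learning data sets could be produced by simulation is shown below: via points are sampled at random, each resulting path is scored with a placeholder power-consumption estimate, and the records are stored in a stand-in for the database 34. The sampling strategy, the cost model, and all names are assumptions, and the collision check against the entry-prohibition regions is omitted for brevity.

```python
import random
from typing import Dict, List, Tuple

Point = Tuple[float, float, float]

def random_via_points(start: Point, end: Point, n: int = 5) -> List[Point]:
    """Randomly perturbed points along the straight line from start to end."""
    pts = []
    for i in range(1, n + 1):
        t = i / (n + 1)
        pts.append(tuple(s + t * (e - s) + random.uniform(-0.05, 0.05)
                         for s, e in zip(start, end)))
    return pts

def power_consumption(path: List[Point]) -> float:
    """Placeholder evaluation value: proportional to travelled distance."""
    total = 0.0
    for a, b in zip(path, path[1:]):
        total += sum((bi - ai) ** 2 for ai, bi in zip(a, b)) ** 0.5
    return 10.0 * total  # fabricated scale factor

database: List[Dict] = []  # stands in for the data set holding unit (database 34)

start, end = (0.0, 0.0, 0.3), (0.8, 0.2, 0.5)
for _ in range(1000):
    vias = random_via_points(start, end)
    path = [start] + vias + [end]
    database.append({"start": start, "end": end, "via_points": vias,
                     "evaluation_value": power_consumption(path)})

print(len(database), min(r["evaluation_value"] for r in database))
```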
- The storage device constituting the database 34 may be provided inside the robot controller 3 as in this embodiment (abbreviated "DB" in FIG. 1 above), or it may be provided outside the robot controller 3 as long as information can be exchanged with the trajectory planning unit 32.
- Various machine learning methods can be applied to the trajectory planning unit 32; the following describes, as an example, the case where deep learning is applied as the machine learning algorithm. FIG. 4 shows an example of a schematic model configuration of the neural network of the trajectory planning unit 32 in this case.
- The neural network of the trajectory planning unit 32 takes as inputs the current via point Pn (Xpn, Ypn, Zpn) and the current via posture Vn (abbreviated here as three-dimensional vector data) at that point in time, together with the set end point Pe (Xpe, Ype, Zpe) and the set posture Ve (likewise abbreviated as three-dimensional vector data) input from the work planning unit 31, and from the correspondence between these input data it estimates and outputs the next via point Pn+1 and next via posture Vn+1 to be routed to next.
- Each output node of the trajectory planning unit 32 produces a multi-value (continuous) output through regression processing. The next via point Pn+1 and next via posture Vn+1 formed from these output values lie within a predetermined separation distance of the current via point Pn and current via posture Vn on the work environment map, do not fall within the entry-prohibition regions X, and constitute a via point and via posture expected to obtain a high evaluation under the evaluation criterion (low movement power consumption in this example).
- The next via point Pn+1 and next via posture Vn+1 are then re-input to the trajectory planning unit 32, together with the set end point Pe and set posture Ve, as the new current via point Pn and current via posture Vn.
- The process by which the trajectory planning unit 32 generates the next via point Pn+1 and next via posture Vn+1 as described above is based on what was learned in the machine learning process during the learning phase of the trajectory planning unit 32; that is, the neural network of the trajectory planning unit 32 learns feature quantities representing the correlation between the input data and the output data.
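To make the input/output relationship concrete, here is a hedged PyTorch sketch of a regression network with the interface described above (current via point and posture plus set end point and posture in, next via point and posture out) and of the iterative rollout that feeds the output back in until the vicinity of the set end point is reached. The layer sizes, activation functions, and stopping threshold are assumptions, not taken from the patent.

```python
import torch
import torch.nn as nn

class NextViaPointNet(nn.Module):
    """Regression net: (Pn, Vn, Pe, Ve) -> (Pn+1, Vn+1), all 3-D vectors."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(12, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 6),          # continuous multi-value output
        )

    def forward(self, p_cur, v_cur, p_end, v_end):
        x = torch.cat([p_cur, v_cur, p_end, v_end], dim=-1)
        out = self.layers(x)
        return out[..., :3], out[..., 3:]  # next via point, next via posture

def rollout(net, p_start, v_start, p_end, v_end, tol=0.05, max_steps=100):
    """Repeat generation until the next via point is near the set end point."""
    path = [p_start]
    p_cur, v_cur = p_start, v_start
    with torch.no_grad():
        for _ in range(max_steps):
            p_next, v_next = net(p_cur, v_cur, p_end, v_end)
            path.append(p_next)
            if torch.norm(p_next - p_end) < tol:
                break
            p_cur, v_cur = p_next, v_next  # re-input as the new current state
    return path + [p_end]

net = NextViaPointNet()
path = rollout(net,
               torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]),
               torch.tensor([0.8, 0.2, 0.5]), torch.tensor([0.0, 1.0, 0.0]))
print(len(path))
```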
- A multi-layer neural network designed as described above is implemented in software (or hardware) on the robot controller 3, and the trajectory planning unit 32 is then trained by so-called supervised learning using the trajectory-planning-unit learning data sets stored in the database 34.
- The trajectory-planning-unit learning data set used here is created, for example, as illustrated in the figure: path data (shown in the figure as a work environment map) representing the via points and via postures generated in correspondence with a combination of a predetermined set start point Ps and set end point Pe (set posture Ve) are associated with the movement power consumption (evaluation value data) for that path data, the pair forming one learning data set.
- A large number of such learning data sets are created for various combinations of set start points Ps and set end points Pe (set postures Ve) and stored in the database 34.
- In the machine learning process, combinations of teacher data in which the current via point Pn and current via posture Vn together with the set end point Pe and set posture Ve are the input data and the next via point Pn+1 and next via posture Vn+1 are the output data are used, and learning is performed by backpropagation processing that adjusts the weighting coefficient of each edge connecting the nodes so that the relationship between the input layer and the output layer of the neural network of the trajectory planning unit 32 holds. In this backpropagation processing, only the data with particularly high evaluation values may be extracted from the large number of data sets and used as teacher data to adjust the edge weighting coefficients, or all data sets may be used as teacher data with the adjustment of each edge weighting coefficient increased or decreased according to the respective evaluation value data.
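As a rough sketch of the second option described above, in which all data sets are used as teacher data and the adjustment is scaled according to the evaluation value data, supervised training by backpropagation with per-sample weights derived from the evaluation values might look like the following; the loss form, the weighting rule, and the optimizer settings are assumptions.

```python
import torch
import torch.nn as nn

# Stand-in for the trajectory planning network: 12 inputs -> 6 outputs
net = nn.Sequential(nn.Linear(12, 128), nn.ReLU(), nn.Linear(128, 6))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

# Synthetic teacher data: inputs (Pn, Vn, Pe, Ve), targets (Pn+1, Vn+1),
# and one evaluation value (e.g. power consumption, lower is better) per sample.
inputs = torch.randn(256, 12)
targets = torch.randn(256, 6)
eval_values = torch.rand(256) * 100.0

# Lower power consumption -> larger sample weight (one possible weighting rule)
weights = 1.0 / (1.0 + eval_values)
weights = weights / weights.mean()

for epoch in range(50):
    optimizer.zero_grad()
    pred = net(inputs)
    per_sample = ((pred - targets) ** 2).mean(dim=1)   # MSE per teacher sample
    loss = (weights * per_sample).mean()               # evaluation-weighted loss
    loss.backward()                                    # backpropagation step
    optimizer.step()

print(float(loss))
```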
- The processing accuracy may also be improved by additionally using various known learning techniques, such as a so-called autoencoder, restricted Boltzmann machine, dropout, noise addition, or sparse regularization.
- It is also possible to create a new learning data set by associating the next via point Pn+1 and next via posture Vn+1 generated by the trajectory planning unit 32, the current via point Pn and current via posture Vn from which they originated, and the set end point Pe and set posture Ve, all as path data, with the corresponding evaluation value data, and to store it in the database 34 for use in the next learning phase of the trajectory planning unit 32; that is, so-called online learning may be executed.
- The learning phase of the trajectory planning unit 32 corresponds to the machine learning process described in the claims, and the processing part constituting the neural network in the trajectory planning unit 32 corresponds to the via point generation unit described in the claims.
- The machine learning algorithm of the trajectory planning unit 32 is not limited to the illustrated deep learning; other machine learning algorithms using, for example, a support vector machine or a Bayesian network (not shown) may be applied. Even in such cases, the basic configuration of outputting a next via point Pn+1 and next via posture Vn+1 appropriate to the input set start point Ps and set end point Pe is the same.
- As described above, in the robot system 1 of the first embodiment, the robot controller 3 has the trajectory planning unit 32, which executes a machine learning process based on a trajectory-planning-unit learning data set in which a plurality of path data generated based on the motion constraint conditions of the robot 5 are associated with evaluation value data serving as a measure under a predetermined evaluation criterion for each of the path data, and which generates a path T1 of the robot 5 between an arbitrarily set start point Ps and set end point Pe.
- Since the trajectory planning unit 32 thus performs machine learning on a data set of path data generated by simulation or the like based on the motion constraint conditions, unlike so-called reinforcement learning, it can learn from a data set that is guaranteed to avoid interfering contact between the robot 5 and the work environment.
- Furthermore, since the data set on which the trajectory planning unit 32 learns also includes evaluation value data corresponding to each path data, a path T1 that is appropriate with respect to the evaluation criterion can be generated. As a result, a more appropriate path T1 can be generated.
- In this embodiment, in particular, the trajectory planning unit 32 has a neural network that, based on what was learned in the machine learning process, generates the next via point to be routed to so that the evaluation criterion is optimized. As a result, the path T1 can be generated efficiently using next via points that are relatively easy to generate.
- In this embodiment, in particular, the trajectory planning unit 32 generates the path T1 by repeating, starting from the set start point Ps until reaching the vicinity of the set end point Pe, a branching search over the next via points generated by the neural network.
- In this embodiment, in particular, the evaluation criterion includes at least power consumption, so a higher-quality path T1 can be generated.
- The evaluation criteria are not limited to the power consumption described above and may include criteria such as the motion path distance, the motion time, a vibration evaluation value, or a specified-axis load.
- The motion path distance is evaluated more highly as the movement route, that is, the route length of the entire path, becomes shorter.
- The motion time is evaluated more highly as the movement time, that is, the takt time, becomes shorter.
- The vibration evaluation value is evaluated more highly as the vibration during movement becomes smaller.
- The vibration evaluation value may be evaluated, for example, based on a so-called jerk value (the time derivative of acceleration) detected at the arm tip 5a or the end effector 6.
- The specified-axis load is evaluated more highly as the load on the specified joint drive axis of the robot 5 becomes smaller.
- Each of these evaluation value data may be multiplied by a weighting coefficient that can be set arbitrarily as appropriate, and the total evaluation value data obtained by summing them all may be recorded in the data set.
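A small illustration of this weighted aggregation follows; the weighting coefficients used here are arbitrary example values, not values from the patent.

```python
def total_evaluation(values: dict, weights: dict) -> float:
    """Sum of evaluation value data, each multiplied by its weighting coefficient."""
    return sum(weights[name] * value for name, value in values.items())

criteria = {"power_consumption": 42.7, "path_distance": 1.8,
            "motion_time": 3.2, "vibration": 0.05, "axis_load": 12.0}
weights = {"power_consumption": 1.0, "path_distance": 0.5,
           "motion_time": 2.0, "vibration": 10.0, "axis_load": 0.1}

print(total_evaluation(criteria, weights))  # total evaluation value for the data set
```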
- In this embodiment, in particular, the motion constraint condition is set as a motion region of the robot 5 that satisfies at least the condition that the robot 5 does not come into interfering contact with the surrounding work environment (target workpiece, work table, tools, etc.).
- The motion constraint condition may also be set, for example, to a motion region in which the robot does not enter a predetermined entry-prohibition region X defined to secure safety or performance, or to a motion region in which an object being handled by the robot is not tilted beyond a predetermined angle.
- In this embodiment, in particular, the host controller 2 is provided as an input device that sets, in the robot controller 3, the motion constraint conditions defined by the 3D model data. This makes it possible to set flexible motion constraint conditions that reflect the user's intention.
- Alternatively, the motion constraint conditions may be input to the robot controller 3 using a programming pendant or the like.
- In this embodiment, in particular, the database 34 stores new data sets in which paths T1 of the robot 5 generated by the trajectory planning unit 32 are associated with the motion constraint conditions of the robot 5 and the evaluation value data. This makes it possible to execute a more appropriate machine learning process using path data that includes appropriately generated past paths T1, improving the accuracy of the trajectory planning unit 32.
- As a modification, the trajectory planning unit 32 may set, from the current via point P2,1, the range within which the next via point can be placed as a candidate range B, randomly generate a plurality of candidate points P3,1, P3,2, ... within the candidate range B, and generate the next via point from among these candidate points.
- In this case, a known method such as SBL (Single-query, Bi-directional, Lazy in collision checking) or RRT (Rapidly-exploring Random Tree) may be used as the basic path planning method. A route search is then also performed from the set end point Pe, and the candidate ranges (candidate points) generated by the trajectory planning unit 32 based on the current via points on the set start point Ps side and the set end point Pe side, respectively, may be substituted for the random sampling of SBL or RRT described above.
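A hedged sketch of what substituting the learned candidates for the random sampling of RRT could mean in code is given below: a standard single-tree RRT extend loop in which the uniformly random sample is replaced by a proposal function, represented here by a goal-biased dummy standing in for the trained network. The tree representation, step size, and proposal interface are assumptions, and the bidirectional aspect of SBL as well as the collision check are omitted.

```python
import math
import random
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def dist(a: Point, b: Point) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])

def learned_proposal(current: Point, goal: Point) -> Point:
    """Stand-in for the trained candidate generator: goal-biased with some spread."""
    t = random.uniform(0.2, 0.8)
    return (current[0] + t * (goal[0] - current[0]) + random.uniform(-0.1, 0.1),
            current[1] + t * (goal[1] - current[1]) + random.uniform(-0.1, 0.1))

def rrt_with_learned_sampling(start: Point, goal: Point,
                              step: float = 0.1, iters: int = 2000) -> List[Point]:
    parent: Dict[Point, Point] = {start: start}
    for _ in range(iters):
        current = random.choice(list(parent))       # a via point already in the tree
        sample = learned_proposal(current, goal)    # replaces uniform random sampling
        near = min(parent, key=lambda n: dist(n, sample))
        d = dist(near, sample)
        if d == 0.0:
            continue
        new = sample if d <= step else (near[0] + step * (sample[0] - near[0]) / d,
                                        near[1] + step * (sample[1] - near[1]) / d)
        parent[new] = near                          # entry-prohibition check omitted here
        if dist(new, goal) < step:                  # reached the vicinity of the goal
            path, node = [goal], new
            while node != start:
                path.append(node)
                node = parent[node]
            path.append(start)
            return list(reversed(path))
    return []

print(len(rrt_with_learned_sampling((0.0, 0.0), (1.0, 1.0))))
```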
- FIG. 8 is a diagram corresponding to FIG. 2 and shows a work environment map for explaining path planning by partial path connection.
- In the example of this embodiment, the trajectory planning unit 32 generates the path T1 of the robot 5 by connecting partial paths generated by simulation to the set start point Ps and the set end point Pe.
- <Path planning by partial path connection in the second embodiment> Specifically, in this simulation, appropriate via points are first searched for by branching from the set start point Ps and the set end point Pe, respectively, and a partial path T2 is randomly generated that is arranged so as to be substantially connectable, within a predetermined separation distance range, to the via points on the set start point Ps side and the set end point Pe side, and that does not enter the entry-prohibition regions X.
- This partial path T2 is formed by connecting via points arranged consecutively at predetermined separation intervals, and generation is repeated until a partial path T2 is obtained whose two ends can substantially connect to the via points on the set start point Ps side and the set end point Pe side.
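A minimal sketch of the kind of random partial-path generation loop described here, which repeatedly proposes a chain of via points at a fixed spacing and keeps the first one whose two ends land close enough to the start-side and end-side via points while avoiding the forbidden regions, is shown below; the two-dimensional geometry, the spacing, and the acceptance thresholds are illustrative assumptions.

```python
import math
import random
from typing import List, Optional, Tuple

Point = Tuple[float, float]

def dist(a: Point, b: Point) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])

def in_forbidden(p: Point, boxes: List[Tuple[Point, Point]]) -> bool:
    return any(lo[0] <= p[0] <= hi[0] and lo[1] <= p[1] <= hi[1] for lo, hi in boxes)

def random_partial_path(start_side: Point, end_side: Point,
                        boxes: List[Tuple[Point, Point]],
                        spacing: float = 0.05, connect_tol: float = 0.1,
                        tries: int = 5000) -> Optional[List[Point]]:
    n = max(2, int(dist(start_side, end_side) / spacing))
    for _ in range(tries):
        # Propose a chain of consecutively spaced via points with random lateral offsets
        path = []
        for i in range(n + 1):
            t = i / n
            base = (start_side[0] + t * (end_side[0] - start_side[0]),
                    start_side[1] + t * (end_side[1] - start_side[1]))
            path.append((base[0] + random.uniform(-0.1, 0.1),
                         base[1] + random.uniform(-0.1, 0.1)))
        # Keep it only if both ends are connectable and no point enters a region X
        if (dist(path[0], start_side) <= connect_tol and
                dist(path[-1], end_side) <= connect_tol and
                not any(in_forbidden(p, boxes) for p in path)):
            return path
    return None  # no admissible partial path found; retry with other via points

boxes = [((0.4, -0.05), (0.5, 0.15))]
t2 = random_partial_path((0.2, 0.0), (0.8, 0.0), boxes)
print(t2 is not None)
```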
- The trajectory planning unit 32 in this embodiment corresponds to the partial path generation unit described in the claims.
- The electric power W consumed by the movement of the robot 5 along the partial path T2 can be calculated as evaluation value data. By associating this evaluation value data with the path data consisting of the set start point Ps, the set end point Pe, and the partial path T2, one trajectory-planning-unit learning data set can be created.
- Partial paths T2 corresponding to various combinations of set start points Ps and set end points Pe (set postures) are then generated, and a large number of learning data sets are created together with the evaluation value data corresponding to each of them and stored in the database (data set holding unit) 34.
- By performing machine learning with such a large number of learning data sets, the trajectory planning unit 32 becomes able to generate highly evaluated partial paths T2 and, by connecting them, to generate a highly evaluated path T1.
- FIG. 10 is a diagram corresponding to FIG. 4 and shows an example of a schematic model configuration of the neural network of the trajectory planning unit 32 when deep learning is applied.
- In this case, the neural network of the trajectory planning unit 32 takes as inputs the set start point Ps (Xps, Yps, Zps) and start-point posture Vs (abbreviated as three-dimensional vector data) input from the work planning unit 31, together with the set end point Pe and set posture Ve, and from the correspondence between these input data it estimates and outputs a partial path T2 judged appropriate for connection, namely its start-side end point P1 (Xp1, Yp1, Zp1) and start-side posture V1 (abbreviated as three-dimensional vector data), each connecting via point (omitted in the figure), and its end-side end point Pn (Xpn, Ypn, Zpn) and end-side posture Vn (abbreviated as three-dimensional vector data).
- The trajectory-planning-unit learning data set used in the machine learning process of the trajectory planning unit 32 in this case is created, for example, as shown in FIG. 11, by associating path data (illustrated in the figure by a work environment map) representing one partial path T2 generated in correspondence with a combination of a predetermined set start point Ps and set end point Pe (set posture Ve) with the movement power consumption (evaluation value data) for that path data, the pair forming one learning data set.
- A large number of such learning data sets are created for various combinations of set start points Ps and set end points Pe (set postures Ve) and stored in the database 34.
- In the machine learning process, combinations of teacher data in which the set start point Ps and start-point posture Vs together with the set end point Pe and set posture Ve are the input data, and the start-side end point P1 and start-side posture V1 of the partial path T2, each connecting via point (omitted in the figure), and the end-side end point Pn and end-side posture Vn are the output data, are used, and learning is performed by backpropagation processing weighted according to the evaluation value data or the like. So-called online learning may also be applied in this case.
- Also in this case, the machine learning algorithm of the trajectory planning unit 32 is not limited to the illustrated deep learning; other machine learning algorithms using, for example, a support vector machine or a Bayesian network (not shown) may be applied. Even then, the basic configuration of outputting a partial path T2 appropriate to the input set start point Ps and set end point Pe is the same.
- As described above, the robot system 1 of the second embodiment includes the trajectory planning unit 32, which, based on what was learned in the machine learning process, generates partial paths T2 to be passed through so that the evaluation criterion is optimized. This makes it possible to generate a more appropriate path T1 using partial paths T2 learned to be appropriate with respect to the evaluation criterion.
- In this embodiment, in particular, the trajectory planning unit 32 generates the path T1 by connecting the partial paths T2 it has generated between the set start point Ps and the set end point Pe, so the path T1 can be generated efficiently.
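As a short sketch of the connection step itself, joining a generated partial path T2 to the set start point and set end point when its ends lie within a connectable distance, the following helper is illustrative only; the tolerance and the data layout are assumptions.

```python
import math
from typing import List, Optional, Tuple

Point = Tuple[float, float, float]

def dist(a: Point, b: Point) -> float:
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def connect_partial_path(start: Point, end: Point, partial: List[Point],
                         tol: float = 0.1) -> Optional[List[Point]]:
    """Return the full path T1 = start + T2 + end if both ends are connectable."""
    if dist(start, partial[0]) <= tol and dist(partial[-1], end) <= tol:
        return [start] + partial + [end]
    if dist(start, partial[-1]) <= tol and dist(partial[0], end) <= tol:
        return [start] + partial[::-1] + [end]   # partial path given in reverse order
    return None  # not connectable; generate another partial path

t2 = [(0.05, 0.0, 0.3), (0.4, 0.1, 0.4), (0.75, 0.2, 0.5)]
print(connect_partial_path((0.0, 0.0, 0.3), (0.8, 0.2, 0.5), t2))
```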
- In this case as well, a known SBL or RRT may be used as the basic path planning method. A branching search is also performed from the set end point Pe, and the partial paths T2 generated by the trajectory planning unit 32 based on the current via points on the set start point Ps side and the set end point Pe side, respectively, may simply be substituted for the random sampling of the SBL or RRT (illustration omitted).
- The robot controller 3 includes, for example, a CPU 901, a ROM 903, a RAM 905, a dedicated integrated circuit 907 built for a specific application such as an ASIC or an FPGA, an input device 913, and an output device 915.
- The program can be recorded in, for example, the ROM 903, the RAM 905, the recording device 917, or the like.
- The program can also be recorded, temporarily or permanently, on removable recording media 925 such as magnetic disks (e.g., flexible disks), various optical disks (e.g., CDs, MO disks, DVDs), and semiconductor memories. Such a recording medium 925 can also be provided as so-called packaged software.
- In that case, the program recorded on the recording medium 925 may be read by the drive 919 and recorded in the recording device 917 via the input/output interface 911, the bus 909, and the like.
- The program can also be recorded on, for example, a download site, another computer, or another recording device (not shown). In that case, the program is transferred via a network NW such as a LAN or the Internet and received by the communication device 923, and the program received by the communication device 923 may be recorded in the recording device 917 via the input/output interface 911, the bus 909, and the like.
- The program can also be recorded in, for example, an appropriate external connection device 927. In that case, the program may be transferred via the appropriate connection port 921 and recorded in the recording device 917 via the input/output interface 911, the bus 909, and the like.
- The CPU 901 executes various processes according to the program recorded in the recording device 917, thereby realizing the processes of the work planning unit 31, the trajectory planning unit 32, the inverse kinematics calculation unit 33, and so on.
- In doing so, the CPU 901 may read the program directly from the recording device 917 and execute it, or may execute it after loading it into the RAM 905. Further, when the CPU 901 receives the program via, for example, the communication device 923, the drive 919, or the connection port 921, it may execute the received program directly without recording it in the recording device 917.
- The CPU 901 may also perform various processes based on signals and information input from an input device 913 such as a mouse, a keyboard, or a microphone (not shown) as necessary.
- The CPU 901 may output the results of the above processes from an output device 915 such as a display device or an audio output device, and may further, as necessary, transmit the processing results via the communication device 923 or the connection port 921, or record them in the recording device 917 or on the recording medium 925.
- In each of the embodiments and modifications described above, the work planning unit 31, the trajectory planning unit 32, the inverse kinematics calculation unit 33, and the database 34 are all integrated into the robot controller 3, but this is not a limitation.
- Alternatively, the robot controller 3 may include only the inverse kinematics calculation unit 33, while the work planning unit 31, the trajectory planning unit 32, and the database 34 are implemented in software on a general-purpose personal computer 2A (abbreviated "PC" in the drawing). Even in this case, the exchange of various information and commands is the same.
- The trajectory planning unit 32 is then machine-learned using the learning data sets stored in the database 34, whereby the same effects as in the above embodiments can be obtained.
- The general-purpose personal computer 2A in this case corresponds to the robot path generation device described in the claims.
- 1 Robot system, 2 Host controller (robot path generation device), 2A General-purpose personal computer (robot path generation device), 3 Robot controller (robot path generation device), 4 Servo amplifier, 5 Robot, 6 End effector, 31 Work planning unit, 32 Trajectory planning unit (path generation unit, via point generation unit, partial path generation unit), 33 Inverse kinematics calculation unit, 34 Database (data set holding unit), F Frame, Ps Set start point, Pe Set end point, T1 Path, T2 Partial path, X Entry-prohibition region
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Manipulator (AREA)
- Numerical Control (AREA)
Description
FIG. 1 shows an example of a schematic system block configuration of the robot system of this embodiment. In FIG. 1, the robot system 1 includes a host controller 2, a robot controller 3, a servo amplifier 4, and a robot 5. In the example of this embodiment, work is performed on an automobile frame F as the target workpiece, but the system may also be applied to other target workpieces such as other mechanical structures and to other kinds of work such as part assembly, paint spraying, or inspection using camera images.
In general, a robot operates by driving its joints with drive motors on roughly three or more axes. When the robot 5 is made to perform a predetermined work operation on the target workpiece, the robot controller 3 specifies and executes a path (route, trajectory: an ordered sequence of via points) through which a movement reference point such as the end effector 6 or the arm tip 5a passes from a start point to an end point. This path is desirably set under motion constraint conditions, such as the robot and the target workpiece not coming into interfering contact; conventionally, robot paths have been generated by manual teaching or by path planning using random sampling.
FIG. 2 shows a work environment map for explaining path planning by via point connection in the example of this embodiment. The work environment map shown here is set by the trajectory planning unit 32 and, as described above, is a plan view in which the six-dimensional drivable space corresponding to the six drive axes of the robot 5 is dimensionally compressed into the two vertical and horizontal dimensions. In general, driving the six axes makes it possible to control the three-dimensional position and three-dimensional posture of the end effector 6; that is, the position coordinates of a single point in the work environment map express both the three-dimensional position and the three-dimensional posture of the end effector 6 as state information. In this work environment map, based on the 3D model data input from the work planning unit 31, a plurality of entry-prohibition regions X are set into which entry is prohibited because parts of the robot 5 would come into interfering contact with the frame F of the target workpiece. In the figure, to avoid cluttering the illustration, the entry-prohibition regions X are represented simply by three simple geometric figures.
Various machine learning methods can be applied to the trajectory planning unit 32; the following describes, as an example, the case where deep learning is applied as the machine learning algorithm. FIG. 4 shows an example of a schematic model configuration of the neural network of the trajectory planning unit 32 when deep learning is applied.
As described above, in the robot system 1 of the first embodiment, the robot controller 3 has the trajectory planning unit 32, which executes a machine learning process based on a trajectory-planning-unit learning data set in which a plurality of path data generated based on the motion constraint conditions of the robot 5 are associated with evaluation value data serving as a measure under a predetermined evaluation criterion for each of the path data, and which generates a path T1 of the robot 5 between an arbitrarily set start point Ps and set end point Pe. Since the trajectory planning unit 32 thus performs machine learning on a data set of path data generated by simulation or the like based on the motion constraint conditions, unlike so-called reinforcement learning, it can learn from a data set that is guaranteed to avoid interfering contact between the robot 5 and the work environment. Furthermore, since the data set on which the trajectory planning unit 32 learns also includes evaluation value data corresponding to each path data, a path T1 that is appropriate with respect to the evaluation criterion can be generated. As a result, a more appropriate path T1 can be generated.
The first embodiment described above can be modified in various ways without departing from its spirit and technical idea.
A second embodiment, in which a path is generated by path planning based on partial path connection, is described below. FIG. 8 is a diagram corresponding to FIG. 2 above and shows a work environment map for explaining path planning by partial path connection. In the example of this embodiment, the trajectory planning unit 32 generates the path T1 of the robot 5 by connecting partial paths generated by simulation to the set start point Ps and the set end point Pe.
Specifically, in this simulation, appropriate via points are first searched for by branching from the set start point Ps and the set end point Pe, respectively, and a partial path T2 is randomly generated that is arranged so as to be substantially connectable, within a predetermined separation distance range, to the via points on the set start point Ps side and the set end point Pe side, and that does not enter the entry-prohibition regions X. This partial path T2 is formed by connecting via points arranged consecutively at predetermined separation intervals, and generation is repeated until a partial path T2 is obtained whose two ends can substantially connect to the via points on the set start point Ps side and the set end point Pe side.
Various machine learning methods can also be applied to the trajectory planning unit 32 in this second embodiment; the following describes, as an example, the case where deep learning is applied as the machine learning algorithm. FIG. 10 is a diagram corresponding to FIG. 4 above and shows an example of a schematic model configuration of the neural network of the trajectory planning unit 32 when deep learning is applied.
As described above, the robot system 1 of the second embodiment has the trajectory planning unit 32, which, based on what was learned in the machine learning process, generates partial paths T2 to be passed through so that the evaluation criterion is optimized. This makes it possible to generate a more appropriate path T1 using partial paths T2 learned to be appropriate with respect to the evaluation criterion.
Next, with reference to FIG. 12, an example of the hardware configuration of the robot controller 3, which realizes the processing of the work planning unit 31, the trajectory planning unit 32, the inverse kinematics calculation unit 33, and so on implemented in software by the program executed by the CPU 901 described above, will be described.
In each of the embodiments and modifications described above, the work planning unit 31, the trajectory planning unit 32, the inverse kinematics calculation unit 33, and the database 34 are all integrated into the robot controller 3, but this is not a limitation. Alternatively, as shown in FIG. 13, the robot controller 3 may include only the inverse kinematics calculation unit 33, while the work planning unit 31, the trajectory planning unit 32, and the database 34 are implemented in software on a general-purpose personal computer 2A (abbreviated "PC" in the drawing). Even in this case, the exchange of various information and commands is the same. Then, by having the trajectory planning unit 32 machine-learn on the general-purpose personal computer 2A using the learning data sets stored in the database 34, the same effects as in the above embodiments can be obtained. The general-purpose personal computer 2A in this case corresponds to the robot path generation device described in the claims.
2 Host controller (robot path generation device)
2A General-purpose personal computer (robot path generation device)
3 Robot controller (robot path generation device)
4 Servo amplifier
5 Robot
6 End effector
31 Work planning unit
32 Trajectory planning unit (path generation unit, via point generation unit, partial path generation unit)
33 Inverse kinematics calculation unit
34 Database (data set holding unit)
F Frame
Ps Set start point
Pe Set end point
T1 Path
T2 Partial path
X Entry-prohibition region
Claims (10)
- 1. A robot path generation device comprising: a data set holding unit that holds a data set in which a plurality of path data generated based on a motion constraint condition of a robot are associated with evaluation value data serving as a measure under a predetermined evaluation criterion for each of the path data; and a path generation unit that generates a path of the robot between an arbitrarily set start point and an arbitrarily set end point based on a result of a machine learning process based on the data set.
- 2. The robot path generation device according to claim 1, wherein the path generation unit has a via point generation unit that, based on the learning content of the machine learning process, generates a via point to be routed to next so that the evaluation criterion is optimized.
- 3. The robot path generation device according to claim 2, wherein the path generation unit generates the path by repeating, starting from the set start point until reaching the vicinity of the set end point, a branching search over the via points generated by the via point generation unit.
- 4. The robot path generation device according to claim 1, wherein the path generation unit has a partial path generation unit that, based on the learning content of the machine learning process, generates a partial path to be passed through so that the evaluation criterion is optimized.
- 5. The robot path generation device according to claim 4, wherein the path generation unit generates the path by connecting the partial paths generated by the partial path generation unit between the set start point and the set end point.
- 6. The robot path generation device according to any one of claims 1 to 5, wherein the evaluation criterion includes at least one of power consumption, motion path distance, motion time, vibration evaluation value, and specified-axis load.
- 7. The robot path generation device according to any one of claims 1 to 6, wherein the motion constraint condition is set as a motion region of the robot that satisfies at least one of: a motion region in which the robot does not come into interfering contact with its surrounding work environment; a motion region in which the robot does not enter a predetermined entry-prohibition region; and a motion region in which an object being handled by the robot is not tilted beyond a predetermined angle.
- 8. The robot path generation device according to any one of claims 1 to 7, further comprising an input device that sets the motion constraint condition.
- 9. The robot path generation device according to any one of claims 1 to 8, wherein the data set holding unit stores a new data set in which a path of the robot generated by the path generation unit is associated with the motion constraint condition of the robot and the evaluation value data.
- 10. A robot system comprising: a robot; the robot path generation device according to any one of claims 1 to 9; and a robot controller that controls an operation of the robot based on a generation result of the robot path generation device.
Priority Applications (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP18748623.8A EP3578322A4 (en) | 2017-01-31 | 2018-01-23 | ROBOT PATH GENERATION DEVICE AND ROBOT SYSTEM |
| JP2018566072A JP6705977B2 (ja) | 2017-01-31 | 2018-01-23 | ロボットパス生成装置及びロボットシステム |
| CN201880008226.2A CN110198813B (zh) | 2017-01-31 | 2018-01-23 | 机器人路径生成装置和机器人系统 |
| US16/452,529 US11446820B2 (en) | 2017-01-31 | 2019-06-26 | Robot path generating device and robot system |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2017015408 | 2017-01-31 | ||
| JP2017-015408 | 2017-01-31 |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/452,529 Continuation US11446820B2 (en) | 2017-01-31 | 2019-06-26 | Robot path generating device and robot system |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018143003A1 true WO2018143003A1 (ja) | 2018-08-09 |
Family
ID=63039661
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2018/001917 Ceased WO2018143003A1 (ja) | 2017-01-31 | 2018-01-23 | ロボットパス生成装置及びロボットシステム |
Country Status (5)
| Country | Link |
|---|---|
| US (1) | US11446820B2 (ja) |
| EP (1) | EP3578322A4 (ja) |
| JP (1) | JP6705977B2 (ja) |
| CN (1) | CN110198813B (ja) |
| WO (1) | WO2018143003A1 (ja) |
Cited By (22)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109702768A (zh) * | 2018-10-10 | 2019-05-03 | 李强 | 学习型机器人动作数据采集方法 |
| WO2020075423A1 (ja) * | 2018-10-10 | 2020-04-16 | ソニー株式会社 | ロボット制御装置、ロボット制御方法及びロボット制御プログラム |
| CN111195906A (zh) * | 2018-11-20 | 2020-05-26 | 西门子工业软件有限公司 | 用于预测机器人的运动轨迹的方法和系统 |
| JP2020082314A (ja) * | 2018-11-29 | 2020-06-04 | 京セラドキュメントソリューションズ株式会社 | 学習装置、ロボット制御装置、及びロボット制御システム |
| EP3666476A1 (en) * | 2018-12-14 | 2020-06-17 | Toyota Jidosha Kabushiki Kaisha | Trajectory generation system and trajectory generating method |
| CN111546327A (zh) * | 2019-01-28 | 2020-08-18 | 罗伯特·博世有限公司 | 用于确定机器人的动作或轨迹的方法、设备和计算机程序 |
| JP2021010970A (ja) * | 2019-07-05 | 2021-02-04 | 京セラドキュメントソリューションズ株式会社 | ロボットシステム及びロボット制御方法 |
| US20210154846A1 (en) * | 2019-11-27 | 2021-05-27 | Kabushiki Kaisha Yaskawa Denki | Simulated robot trajectory |
| JP2021122899A (ja) * | 2020-02-05 | 2021-08-30 | 株式会社デンソー | 軌道生成装置、多リンクシステム、及び軌道生成方法 |
| JP2021169149A (ja) * | 2020-04-16 | 2021-10-28 | ファナック株式会社 | 分解ベースのアセンブリ計画 |
| JP2022077228A (ja) * | 2020-11-11 | 2022-05-23 | 富士通株式会社 | 動作制御プログラム、動作制御方法、および動作制御装置 |
| EP4029660A1 (en) | 2021-01-19 | 2022-07-20 | Kabushiki Kaisha Yaskawa Denki | Planning system, robot system, planning method, and non-transitory computer readable storage medium |
| WO2022153373A1 (ja) * | 2021-01-12 | 2022-07-21 | 川崎重工業株式会社 | 動作生成装置、ロボットシステム、動作生成方法及び動作生成プログラム |
| JP7124947B1 (ja) | 2021-11-08 | 2022-08-24 | 株式会社安川電機 | プランニングシステム、プランニング方法、およびプランニングプログラム |
| JPWO2022201362A1 (ja) * | 2021-03-24 | 2022-09-29 | ||
| US11717965B2 (en) | 2020-11-10 | 2023-08-08 | Kabushiki Kaisha Yaskawa Denki | Determination of robot posture |
| US12042940B2 (en) | 2019-11-27 | 2024-07-23 | Kabushiki Kaisha Yaskawa Denki | Interference check for robot operation |
| WO2024154249A1 (ja) * | 2023-01-18 | 2024-07-25 | 株式会社Fuji | 軌道生成装置および軌道生成方法 |
| WO2024154250A1 (ja) * | 2023-01-18 | 2024-07-25 | 株式会社Fuji | 軌道生成装置および軌道生成方法 |
| WO2024209967A1 (ja) * | 2023-04-03 | 2024-10-10 | 川崎重工業株式会社 | 動作プログラム生成装置および動作プログラム生成方法 |
| WO2025069816A1 (ja) * | 2023-09-28 | 2025-04-03 | オムロン株式会社 | モデル生成方法及び推論プログラム |
| CN120373605A (zh) * | 2025-06-26 | 2025-07-25 | 衢州光明电力设计有限公司 | 一种基于分段考虑的输电线路智能选线动态规划方法 |
Families Citing this family (23)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP3651943B1 (en) | 2017-07-10 | 2024-04-24 | Hypertherm, Inc. | Computer-implemented methods and systems for generating material processing robotic tool paths |
| EP3587046A1 (de) * | 2018-06-28 | 2020-01-01 | Siemens Aktiengesellschaft | Verfahren und vorrichtung zum rechnergestützten ermitteln von regelparametern für eine günstige handlung eines technischen systems |
| DE102019122790B4 (de) * | 2018-08-24 | 2021-03-25 | Nvidia Corp. | Robotersteuerungssystem |
| US11833681B2 (en) * | 2018-08-24 | 2023-12-05 | Nvidia Corporation | Robotic control system |
| JP7351702B2 (ja) * | 2019-10-04 | 2023-09-27 | ファナック株式会社 | ワーク搬送システム |
| CN110991712B (zh) * | 2019-11-21 | 2023-04-25 | 西北工业大学 | 一种空间碎片清除任务的规划方法及装置 |
| US20210197377A1 (en) * | 2019-12-26 | 2021-07-01 | X Development Llc | Robot plan online adjustment |
| DE102019135810B3 (de) * | 2019-12-27 | 2020-10-29 | Franka Emika Gmbh | Erzeugung eines Steuerprogramms für einen Robotermanipulator |
| US20210347047A1 (en) * | 2020-05-05 | 2021-11-11 | X Development Llc | Generating robot trajectories using neural networks |
| TWI834876B (zh) * | 2020-05-14 | 2024-03-11 | 微星科技股份有限公司 | 場域消毒機器人及控制方法 |
| CN111923039B (zh) * | 2020-07-14 | 2022-07-05 | 西北工业大学 | 一种基于强化学习的冗余机械臂路径规划方法 |
| TWI739536B (zh) * | 2020-07-31 | 2021-09-11 | 創璟應用整合有限公司 | 即時校正夾持座標之機械手臂系統 |
| US12145277B2 (en) * | 2020-09-03 | 2024-11-19 | Fanuc Corporation | Framework of robotic online motion planning |
| CN112446113A (zh) * | 2020-11-12 | 2021-03-05 | 山东鲁能软件技术有限公司 | 电力系统环网图最优路径自动生成方法及系统 |
| IT202100003821A1 (it) * | 2021-02-19 | 2022-08-19 | Univ Pisa | Procedimento di interazione con oggetti |
| US12103185B2 (en) | 2021-03-10 | 2024-10-01 | Samsung Electronics Co., Ltd. | Parameterized waypoint generation on dynamically parented non-static objects for robotic autonomous tasks |
| US11945117B2 (en) | 2021-03-10 | 2024-04-02 | Samsung Electronics Co., Ltd. | Anticipating user and object poses through task-based extrapolation for robot-human collision avoidance |
| US11833691B2 (en) * | 2021-03-30 | 2023-12-05 | Samsung Electronics Co., Ltd. | Hybrid robotic motion planning system using machine learning and parametric trajectories |
| CN113552807B (zh) * | 2021-09-22 | 2022-01-28 | 中国科学院自动化研究所 | 数据集生成方法、装置、电子设备及存储介质 |
| US20240326247A1 (en) * | 2023-03-30 | 2024-10-03 | Omron Corporation | Method and apparatus for improved sampling-based graph generation for online path planning by a robot |
| CN116890340B (zh) * | 2023-07-27 | 2024-07-12 | 孙然 | 一种用于工业制造的智能搬运机器人 |
| CN118585296B (zh) * | 2024-04-01 | 2025-02-07 | 南京航空航天大学 | 基于人工智能的异构机器人系统任务行程时间预测方法 |
| CN119041906B (zh) * | 2024-10-31 | 2025-02-21 | 长沙斐视科技有限公司 | 一种用于矿区电铲的远程控制系统及方法 |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH05119823A (ja) * | 1991-10-24 | 1993-05-18 | Hitachi Ltd | ロボツトの軌道計画方法及び制御装置 |
| JP2000010617A (ja) * | 1998-06-24 | 2000-01-14 | Honda Motor Co Ltd | 物品の最適移送経路決定方法 |
| JP2002073130A (ja) * | 2000-06-13 | 2002-03-12 | Yaskawa Electric Corp | ロボットの大域動作経路計画方法とその制御装置 |
| JP2008105132A (ja) * | 2006-10-25 | 2008-05-08 | Toyota Motor Corp | アームの関節空間における経路を生成する方法と装置 |
| JP2011161624A (ja) * | 2010-01-12 | 2011-08-25 | Honda Motor Co Ltd | 軌道計画方法、軌道計画システム及びロボット |
| JP2013193194A (ja) * | 2012-03-22 | 2013-09-30 | Toyota Motor Corp | 軌道生成装置、移動体、軌道生成方法及びプログラム |
| JP2014104581A (ja) | 2012-11-29 | 2014-06-09 | Fanuc Robotics America Inc | ロボットシステムの較正方法 |
| WO2016103297A1 (ja) * | 2014-12-25 | 2016-06-30 | 川崎重工業株式会社 | アーム型のロボットの障害物自動回避方法及び制御装置 |
Family Cites Families (16)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP5141876B2 (ja) * | 2007-09-12 | 2013-02-13 | 株式会社国際電気通信基礎技術研究所 | 軌道探索装置 |
| WO2010057528A1 (en) * | 2008-11-19 | 2010-05-27 | Abb Technology Ab | A method and a device for optimizing a programmed movement path for an industrial robot |
| KR101554515B1 (ko) * | 2009-01-07 | 2015-09-21 | 삼성전자 주식회사 | 로봇의 경로계획장치 및 그 방법 |
| KR101667029B1 (ko) * | 2009-08-10 | 2016-10-17 | 삼성전자 주식회사 | 로봇의 경로 계획방법 및 장치 |
| KR101667031B1 (ko) * | 2009-11-02 | 2016-10-17 | 삼성전자 주식회사 | 로봇의 경로 계획 장치 및 그 방법 |
| JP5906837B2 (ja) * | 2012-03-12 | 2016-04-20 | 富士通株式会社 | 経路探索方法、経路探索装置、及びプログラム |
| US8700307B1 (en) * | 2013-03-04 | 2014-04-15 | Mitsubishi Electric Research Laboratories, Inc. | Method for determining trajectories manipulators to avoid obstacles |
| US9649765B2 (en) * | 2013-03-11 | 2017-05-16 | Siemens Aktiengesellschaft | Reducing energy consumption of industrial robots by using new methods for motion path programming |
| CN103278164B (zh) * | 2013-06-13 | 2015-11-18 | 北京大学深圳研究生院 | 一种复杂动态场景下机器人仿生路径规划方法及仿真平台 |
| JP5860081B2 (ja) * | 2014-02-27 | 2016-02-16 | ファナック株式会社 | ロボットの動作経路を生成するロボットシミュレーション装置 |
| US20150278404A1 (en) * | 2014-03-26 | 2015-10-01 | Siemens Industry Software Ltd. | Energy and cycle time efficiency based method for robot positioning |
| CN104020772B (zh) * | 2014-06-17 | 2016-08-24 | 哈尔滨工程大学 | 一种带有运动学的复杂形状目标遗传路径规划方法 |
| CN105511457B (zh) * | 2014-09-25 | 2019-03-01 | 科沃斯机器人股份有限公司 | 机器人静态路径规划方法 |
| US10023393B2 (en) * | 2015-09-29 | 2018-07-17 | Amazon Technologies, Inc. | Robotic tossing of items in inventory system |
| US10093021B2 (en) * | 2015-12-02 | 2018-10-09 | Qualcomm Incorporated | Simultaneous mapping and planning by a robot |
| US10035266B1 (en) * | 2016-01-18 | 2018-07-31 | X Development Llc | Generating robot trajectories using a real time trajectory generator and a path optimizer |
-
2018
- 2018-01-23 JP JP2018566072A patent/JP6705977B2/ja active Active
- 2018-01-23 CN CN201880008226.2A patent/CN110198813B/zh active Active
- 2018-01-23 EP EP18748623.8A patent/EP3578322A4/en not_active Ceased
- 2018-01-23 WO PCT/JP2018/001917 patent/WO2018143003A1/ja not_active Ceased
-
2019
- 2019-06-26 US US16/452,529 patent/US11446820B2/en active Active
Patent Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH05119823A (ja) * | 1991-10-24 | 1993-05-18 | Hitachi Ltd | ロボツトの軌道計画方法及び制御装置 |
| JP2000010617A (ja) * | 1998-06-24 | 2000-01-14 | Honda Motor Co Ltd | 物品の最適移送経路決定方法 |
| JP2002073130A (ja) * | 2000-06-13 | 2002-03-12 | Yaskawa Electric Corp | ロボットの大域動作経路計画方法とその制御装置 |
| JP2008105132A (ja) * | 2006-10-25 | 2008-05-08 | Toyota Motor Corp | アームの関節空間における経路を生成する方法と装置 |
| JP2011161624A (ja) * | 2010-01-12 | 2011-08-25 | Honda Motor Co Ltd | 軌道計画方法、軌道計画システム及びロボット |
| JP2013193194A (ja) * | 2012-03-22 | 2013-09-30 | Toyota Motor Corp | 軌道生成装置、移動体、軌道生成方法及びプログラム |
| JP2014104581A (ja) | 2012-11-29 | 2014-06-09 | Fanuc Robotics America Inc | ロボットシステムの較正方法 |
| WO2016103297A1 (ja) * | 2014-12-25 | 2016-06-30 | 川崎重工業株式会社 | アーム型のロボットの障害物自動回避方法及び制御装置 |
Non-Patent Citations (1)
| Title |
|---|
| See also references of EP3578322A4 |
Cited By (45)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2020075423A1 (ja) * | 2018-10-10 | 2020-04-16 | ソニー株式会社 | ロボット制御装置、ロボット制御方法及びロボット制御プログラム |
| CN109702768A (zh) * | 2018-10-10 | 2019-05-03 | 李强 | 学习型机器人动作数据采集方法 |
| CN111195906A (zh) * | 2018-11-20 | 2020-05-26 | 西门子工业软件有限公司 | 用于预测机器人的运动轨迹的方法和系统 |
| EP3656513A1 (en) * | 2018-11-20 | 2020-05-27 | Siemens Industry Software Ltd. | Method and system for predicting a motion trajectory of a robot moving between a given pair of robotic locations |
| CN111195906B (zh) * | 2018-11-20 | 2023-11-28 | 西门子工业软件有限公司 | 用于预测机器人的运动轨迹的方法和系统 |
| JP7247552B2 (ja) | 2018-11-29 | 2023-03-29 | 京セラドキュメントソリューションズ株式会社 | 学習装置、ロボット制御装置、及びロボット制御システム |
| JP2020082314A (ja) * | 2018-11-29 | 2020-06-04 | 京セラドキュメントソリューションズ株式会社 | 学習装置、ロボット制御装置、及びロボット制御システム |
| JP7028151B2 (ja) | 2018-12-14 | 2022-03-02 | トヨタ自動車株式会社 | 軌道生成装置 |
| JP2020093364A (ja) * | 2018-12-14 | 2020-06-18 | トヨタ自動車株式会社 | 軌道生成装置 |
| CN111319038B (zh) * | 2018-12-14 | 2023-02-28 | 丰田自动车株式会社 | 轨道生成系统和轨道生成方法 |
| CN111319038A (zh) * | 2018-12-14 | 2020-06-23 | 丰田自动车株式会社 | 轨道生成系统和轨道生成方法 |
| US11433538B2 (en) | 2018-12-14 | 2022-09-06 | Toyota Jidosha Kabushiki Kaisha | Trajectory generation system and trajectory generating method |
| EP3666476A1 (en) * | 2018-12-14 | 2020-06-17 | Toyota Jidosha Kabushiki Kaisha | Trajectory generation system and trajectory generating method |
| KR102330754B1 (ko) * | 2018-12-14 | 2021-11-25 | 도요타지도샤가부시키가이샤 | 궤도 생성 장치 및 궤도 생성 방법 |
| KR20200073985A (ko) * | 2018-12-14 | 2020-06-24 | 도요타지도샤가부시키가이샤 | 궤도 생성 장치 및 궤도 생성 방법 |
| CN111546327A (zh) * | 2019-01-28 | 2020-08-18 | 罗伯特·博世有限公司 | 用于确定机器人的动作或轨迹的方法、设备和计算机程序 |
| JP2021010970A (ja) * | 2019-07-05 | 2021-02-04 | 京セラドキュメントソリューションズ株式会社 | ロボットシステム及びロボット制御方法 |
| EP3827935A1 (en) | 2019-11-27 | 2021-06-02 | Kabushiki Kaisha Yaskawa Denki | Simulated robot trajectory |
| US20210154846A1 (en) * | 2019-11-27 | 2021-05-27 | Kabushiki Kaisha Yaskawa Denki | Simulated robot trajectory |
| US12042940B2 (en) | 2019-11-27 | 2024-07-23 | Kabushiki Kaisha Yaskawa Denki | Interference check for robot operation |
| JP2021122899A (ja) * | 2020-02-05 | 2021-08-30 | 株式会社デンソー | 軌道生成装置、多リンクシステム、及び軌道生成方法 |
| JP7375587B2 (ja) | 2020-02-05 | 2023-11-08 | 株式会社デンソー | 軌道生成装置、多リンクシステム、及び軌道生成方法 |
| JP2021169149A (ja) * | 2020-04-16 | 2021-10-28 | ファナック株式会社 | 分解ベースのアセンブリ計画 |
| JP7628052B2 (ja) | 2020-04-16 | 2025-02-07 | ファナック株式会社 | 分解ベースのアセンブリ計画 |
| US11717965B2 (en) | 2020-11-10 | 2023-08-08 | Kabushiki Kaisha Yaskawa Denki | Determination of robot posture |
| JP7463946B2 (ja) | 2020-11-11 | 2024-04-09 | 富士通株式会社 | 動作制御プログラム、動作制御方法、および動作制御装置 |
| JP2022077228A (ja) * | 2020-11-11 | 2022-05-23 | 富士通株式会社 | 動作制御プログラム、動作制御方法、および動作制御装置 |
| JPWO2022153373A1 (ja) * | 2021-01-12 | 2022-07-21 | ||
| WO2022153373A1 (ja) * | 2021-01-12 | 2022-07-21 | 川崎重工業株式会社 | 動作生成装置、ロボットシステム、動作生成方法及び動作生成プログラム |
| JP7441335B2 (ja) | 2021-01-12 | 2024-02-29 | 川崎重工業株式会社 | 動作生成装置、ロボットシステム、動作生成方法及び動作生成プログラム |
| US12090667B2 (en) | 2021-01-19 | 2024-09-17 | Kabushiki Kaisha Yaskawa Denki | Planning system, robot system, planning method, and non-transitory computer readable storage medium |
| JP2022110711A (ja) * | 2021-01-19 | 2022-07-29 | 株式会社安川電機 | プランニングシステム、ロボットシステム、プランニング方法、およびプランニングプログラム |
| JP7272374B2 (ja) | 2021-01-19 | 2023-05-12 | 株式会社安川電機 | プランニングシステム、ロボットシステム、プランニング方法、およびプランニングプログラム |
| EP4029660A1 (en) | 2021-01-19 | 2022-07-20 | Kabushiki Kaisha Yaskawa Denki | Planning system, robot system, planning method, and non-transitory computer readable storage medium |
| JP7438453B2 (ja) | 2021-03-24 | 2024-02-26 | 三菱電機株式会社 | ロボット制御装置、ロボット制御プログラムおよびロボット制御方法 |
| WO2022201362A1 (ja) * | 2021-03-24 | 2022-09-29 | 三菱電機株式会社 | ロボット制御装置、ロボット制御プログラムおよびロボット制御方法 |
| JPWO2022201362A1 (ja) * | 2021-03-24 | 2022-09-29 | ||
| JP2023069759A (ja) * | 2021-11-08 | 2023-05-18 | 株式会社安川電機 | プランニングシステム、プランニング方法、およびプランニングプログラム |
| JP7124947B1 (ja) | 2021-11-08 | 2022-08-24 | 株式会社安川電機 | プランニングシステム、プランニング方法、およびプランニングプログラム |
| US12304075B2 (en) | 2021-11-08 | 2025-05-20 | Kabushiki Kaisha Yaskawa Denki | Planning system, planning method, and non-transitory computer readable storage medium |
| WO2024154249A1 (ja) * | 2023-01-18 | 2024-07-25 | 株式会社Fuji | 軌道生成装置および軌道生成方法 |
| WO2024154250A1 (ja) * | 2023-01-18 | 2024-07-25 | 株式会社Fuji | 軌道生成装置および軌道生成方法 |
| WO2024209967A1 (ja) * | 2023-04-03 | 2024-10-10 | 川崎重工業株式会社 | 動作プログラム生成装置および動作プログラム生成方法 |
| WO2025069816A1 (ja) * | 2023-09-28 | 2025-04-03 | オムロン株式会社 | モデル生成方法及び推論プログラム |
| CN120373605A (zh) * | 2025-06-26 | 2025-07-25 | 衢州光明电力设计有限公司 | 一种基于分段考虑的输电线路智能选线动态规划方法 |
Also Published As
| Publication number | Publication date |
|---|---|
| CN110198813A (zh) | 2019-09-03 |
| JPWO2018143003A1 (ja) | 2019-06-27 |
| US11446820B2 (en) | 2022-09-20 |
| CN110198813B (zh) | 2023-02-28 |
| EP3578322A4 (en) | 2020-08-26 |
| JP6705977B2 (ja) | 2020-06-03 |
| EP3578322A1 (en) | 2019-12-11 |
| US20190314989A1 (en) | 2019-10-17 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP6705977B2 (ja) | ロボットパス生成装置及びロボットシステム | |
| US20220032461A1 (en) | Method to incorporate complex physical constraints in path-constrained trajectory planning for serial-link manipulator | |
| CN112584990B (zh) | 控制装置、控制方法以及存储介质 | |
| US10671081B1 (en) | Generating and utilizing non-uniform volume measures for voxels in robotics applications | |
| JP6807949B2 (ja) | 干渉回避装置 | |
| JP5754454B2 (ja) | ロボットピッキングシステム及び被加工物の製造方法 | |
| Wang et al. | Optimal trajectory planning of grinding robot based on improved whale optimization algorithm | |
| CN112672857B (zh) | 路径生成装置、路径生成方法及存储有路径生成程序的存储介质 | |
| US8606402B2 (en) | Manipulator and control method thereof | |
| US12202140B2 (en) | Simulating multiple robots in virtual environments | |
| JP2003241836A (ja) | 自走移動体の制御方法および装置 | |
| Balatti et al. | A collaborative robotic approach to autonomous pallet jack transportation and positioning | |
| JP7028196B2 (ja) | ロボット制御装置、ロボット制御方法、及びロボット制御プログラム | |
| WO2021033486A1 (ja) | モデル生成装置、モデル生成方法、制御装置及び制御方法 | |
| JP2019018272A (ja) | モーション生成方法、モーション生成装置、システム及びコンピュータプログラム | |
| CN114516060A (zh) | 用于控制机器人装置的设备和方法 | |
| JP2019084649A (ja) | 干渉判定方法、干渉判定システム及びコンピュータプログラム | |
| US12304075B2 (en) | Planning system, planning method, and non-transitory computer readable storage medium | |
| US10035264B1 (en) | Real time robot implementation of state machine | |
| CN111002315A (zh) | 一种轨迹规划方法、装置及机器人 | |
| Su et al. | Adaptive coordinated motion constraint control for cooperative multi-manipulator systems | |
| JP2021084159A (ja) | 制御装置、制御方法、及びロボットシステム | |
| Zhou et al. | Adaptive Robot Motion Planning for Smart Manufacturing Based on Digital Twin and Bayesian Optimization-Enhanced Reinforcement Learning | |
| Bingol et al. | Hybrid learning-based visual path following for an industrial robot | |
| US20250312914A1 (en) | Transformer diffusion for robotic task learning |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18748623 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 2018566072 Country of ref document: JP Kind code of ref document: A |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| ENP | Entry into the national phase |
Ref document number: 2018748623 Country of ref document: EP Effective date: 20190902 |
|
| WWW | Wipo information: withdrawn in national office |
Ref document number: 2018748623 Country of ref document: EP |