CN116197887B - Image data processing method, device, electronic equipment and storage medium for generating capture auxiliary images - Google Patents
- Publication number
- CN116197887B (application CN202111426988.4A)
- Authority
- CN
- China
- Prior art keywords
- auxiliary
- image
- user
- grabbing
- image data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1679—Programme controls characterised by the tasks executed
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Quality & Reliability (AREA)
- Processing Or Creating Images (AREA)
- Manipulator (AREA)
Abstract
Description
Technical Field
The present application relates to the field of automatic control and program control of robotic arms or grippers (B25J), and more particularly to an image data processing method, apparatus, electronic device and storage medium for generating grasp auxiliary images.
Background
Robots have the basic characteristics of perception, decision-making and execution. They can assist or even replace humans in dangerous, heavy and complex work, improve work efficiency and quality, serve daily life, and extend the range of human activity and capability. With the development of industrial automation and computer technology, robots have entered the stage of mass production and practical application. In industrial scenarios, industrial robots are already in widespread use and can take over repetitive or hazardous tasks. Traditional industrial robot design focuses on the design and manufacture of the robot hardware, and the robot itself is not very "intelligent". When robots are deployed on an industrial site, technicians must plan the site's hardware, production lines, material locations and the robot's task paths in advance. For example, to sort and transport items, on-site workers first sort the different kinds of items and place them neatly in material bins of uniform specification; before the robot starts working, the production line, bins and transport positions must be determined, and on that basis the robot is configured with a fixed motion path, a fixed grasp position, a fixed rotation angle, a fixed gripper, and so on.
As an improvement over this traditional robotics, intelligent program-controlled robots based on machine vision have been developed. The current "intelligence", however, is still fairly simple: a camera or other vision device captures task-related image data, 3D point cloud information is derived from that data, and the robot's operation — motion speed, trajectory and the like — is planned from the point cloud, which then drives task execution. Existing robot control schemes do not perform well on complex tasks. In supermarket and logistics scenarios, for instance, many stacked items must be handled: a robotic arm must, with the help of vision equipment, locate and identify objects one by one in a cluttered, unordered scene, pick them up with suction cups, grippers or other bionic end effectors, and, through arm motion and trajectory planning, place them at the proper positions according to given rules. In such industrial scenes, robotic grasping faces many difficulties. The scene may contain too many items under uneven lighting, so the point cloud quality of some items is poor, degrading the grasping result; the items are of many kinds, untidily placed and facing every direction, so the grasp point differs for every item and the gripper's grasp position is hard to determine; the items are stacked, so grasping one item easily drags others out with it. With so many factors affecting how hard an item is to grasp, traditional grasp-ordering methods do not work well enough. Moreover, when the grasping algorithm is complex, it creates further obstacles for on-site workers: when a problem occurs, they can hardly figure out why it occurred or what to adjust to fix it, and the robot vendor often has to send experts to assist.
Summary of the Invention
In view of the above problems, the present invention is proposed in order to overcome them, or at least partially solve them. Specifically, the present invention proposes a method of visually presenting to the user the parameters and image data associated with the grasp control method of the present invention, so that a user who does not understand how the robot works can intuitively see the parameters the robot uses during grasping, understand why the robot performs a task in a certain way, and then determine how to adjust the robot's parameters so that it operates as desired.
All solutions disclosed in the claims and description of this application have one or more of the above innovations and can accordingly solve one or more of the above technical problems. Specifically, this application provides a method, apparatus, electronic device and storage medium for generating grasp auxiliary images.
The method for generating a grasp auxiliary image according to an embodiment of the present application includes:
acquiring image data including one or more items to be grasped;
outputting the image data together with operable controls to form an interactive interface, the controls being operable by a user to select a grasp auxiliary image and to display the selected grasp auxiliary image to the user;
in response to the user's operation of the controls, acquiring grasp auxiliary data corresponding to the grasp auxiliary image selected by the user;
generating a grasp auxiliary layer based on the acquired grasp auxiliary data;
combining the grasp auxiliary layer with the image data including the one or more items to be grasped to generate the user-selected grasp auxiliary image.
In some embodiments, the image data and the operable controls are within the same interactive interface.
In some embodiments, the image data and the operable controls are within different interactive interfaces.
In some embodiments, the different interactive interfaces are switched in response to user operations.
In some embodiments, the grasp auxiliary data includes: a numerical value associated with the user-selected grasp auxiliary image, and a mask of the graspable region of the item to be grasped.
In some embodiments, combining the grasp auxiliary layer with the image data including one or more items to be grasped includes: adjusting the color, transparency and/or contrast of the grasp auxiliary layer, and then combining the adjusted grasp auxiliary layer with the image data including the one or more items to be grasped.
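For illustration only (not part of the claimed subject matter), the adjust-then-combine step above can be sketched as a simple alpha blend over the masked region; the tint color and opacity value are made-up numbers, assuming 8-bit RGB numpy arrays:

```python
import numpy as np

def overlay_layer(image, layer_mask, color=(255, 0, 0), alpha=0.5):
    """Tint the masked region of `image` with `color` at opacity `alpha`.

    image:      H x W x 3 uint8 RGB image of the items to be grasped
    layer_mask: H x W bool array, True where the auxiliary layer applies
    """
    out = image.astype(np.float32)
    tint = np.array(color, dtype=np.float32)
    # Blend only where the auxiliary layer is present; the rest is untouched.
    out[layer_mask] = (1.0 - alpha) * out[layer_mask] + alpha * tint
    return out.astype(np.uint8)

img = np.zeros((4, 4, 3), dtype=np.uint8)   # black scene image
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                       # graspable region
blended = overlay_layer(img, mask, color=(200, 0, 0), alpha=0.5)
```

Adjusting transparency corresponds to changing `alpha`, and adjusting color to changing `color`; a contrast adjustment would scale the layer values before blending.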
The apparatus for generating a grasp auxiliary image according to an embodiment of the present application includes:
an image data acquisition module, configured to acquire image data including one or more items to be grasped;
an interactive interface display module, configured to output the image data and operable controls to form an interactive interface, the controls being operable by the user to select a grasp auxiliary image and display the selected grasp auxiliary image to the user;
an auxiliary data acquisition module, configured to acquire, in response to the user's operation of the controls, grasp auxiliary data corresponding to the grasp auxiliary image selected by the user;
an auxiliary layer generation module, configured to generate a grasp auxiliary layer based on the acquired grasp auxiliary data;
an auxiliary image generation module, configured to combine the grasp auxiliary layer with the image data including one or more items to be grasped to generate the user-selected grasp auxiliary image.
In some embodiments, the image data and the operable controls are within the same interactive interface.
In some embodiments, the image data and the operable controls are within different interactive interfaces.
In some embodiments, the different interactive interfaces are switched in response to user operations.
In some embodiments, the grasp auxiliary data includes: a numerical value associated with the user-selected grasp auxiliary image, and a mask of the graspable region of the item to be grasped.
In some embodiments, the auxiliary image generation module is further configured to adjust the color, transparency and/or contrast of the grasp auxiliary layer, and then combine the adjusted grasp auxiliary layer with the image data including one or more items to be grasped.
The electronic device of an embodiment of the present application includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the method for generating a grasp auxiliary image of any of the above embodiments is implemented.
The computer-readable storage medium of an embodiment of the present application stores a computer program which, when executed by a processor, implements the method for generating a grasp auxiliary image of any of the above embodiments.
Additional aspects and advantages of the application will be set forth in part in the description that follows, and in part will become obvious from the description or be learned by practice of the application.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily understood from the following description of embodiments taken in conjunction with the drawings, in which:
Fig. 1 is a schematic diagram of mask preprocessing in some embodiments of the present application;
Fig. 2 is a schematic flowchart of a grasp parameter visualization method in some embodiments of the present application;
Figs. 3a and 3b are schematic diagrams of the visualization menu, and of the visualization image shown to the user after height and suction-cup-size visualization is selected, in some embodiments of the present application;
Fig. 4 is a schematic structural diagram of a grasp parameter visualization apparatus in some embodiments of the present application;
Fig. 5 is a schematic structural diagram of an electronic device in some embodiments of the present application.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be implemented in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that the disclosure will be understood more thoroughly and its scope conveyed fully to those skilled in the art.
In the description of specific embodiments, it should be understood that terms indicating orientation or positional relationships, such as "center", "longitudinal", "lateral", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner" and "outer", are based on the orientations or positional relationships shown in the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation; they are therefore not to be construed as limiting the invention.
Furthermore, the terms "first", "second", "third", etc. are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature qualified by "first", "second", etc. may explicitly or implicitly include one or more of that feature. In the description of the present invention, unless otherwise specified, "a plurality of" means two or more.
The present invention can be used in industrial robot control scenarios based on visual recognition. A typical such scenario comprises devices for capturing images, production-line hardware and control devices such as the line PLC, robot components that execute tasks, and an operating system or software that controls these devices. The image-capturing devices may include 2D or 3D smart or non-smart industrial cameras and, depending on function and application scenario, area-scan cameras, line-scan cameras, monochrome cameras, color cameras, CCD cameras, CMOS cameras, analog cameras, digital cameras, visible-light cameras, infrared cameras, ultraviolet cameras, and so on. The production line may be a packaging, sorting, logistics or processing line, or any other line that uses robots. The robot component executing the task may be a bionic robot, such as a humanoid or dog-shaped robot, or a traditional industrial robot such as a robotic arm; the industrial robot may be an operated, program-controlled, teach-and-playback, numerically controlled, sensory-controlled, adaptively controlled, learning-controlled or intelligent robot. By working principle, the robotic arm may be a spherical-coordinate, articulated, Cartesian, cylindrical-coordinate or polar-coordinate manipulator; by function, a grasping arm, palletizing arm, welding arm or general industrial arm may be used. The end of the arm may be fitted with an end effector which, as the task requires, may be a robot gripper, robot hand, robot tool quick-changer, robot collision sensor, robot rotary connector, robot pressure tool, compliance device, robot spray gun, robot deburring tool, robot arc-welding torch, robot spot-welding gun, and so on. The robot gripper may be any of various general-purpose fixtures, i.e., fixtures with standardized structures and a wide range of application, such as the three-jaw and four-jaw chucks used on lathes or the flat-nose vises and dividing heads used on milling machines. As another example, by clamping power source, grippers may be divided into manually clamped, pneumatically clamped, hydraulically clamped, gas-hydraulic linkage, electromagnetic and vacuum grippers, among others, or other bionic devices able to pick up items. The image-capturing devices, the production-line hardware and control devices such as the line PLC, the task-executing robot components, and the controlling operating system or software may communicate with one another over the TCP, HTTP or GRPC protocol (Google Remote Procedure Call Protocol) to exchange control instructions or commands. The operating system or software may be installed on any electronic device — typically an industrial computer, personal computer, laptop, tablet or mobile phone — and the electronic device may communicate with other devices or systems by wire or wirelessly. In addition, "grasping" in the present invention means, in the broad sense, any grasping action that takes hold of an item so as to change its position, and is not limited to grasping in the narrow sense of "gripping"; in other words, picking up items by suction, lifting, encircling and the like also falls within the scope of grasping in the present invention. The items to be grasped in the present invention may be cartons, paper boxes, flexible plastic packages (including but not limited to snack packaging, Tetra Pak milk pillow packaging, plastic milk packaging, etc.), cosmeceutical bottles, cosmeceutical products, and/or irregular items such as toys; these items may be placed on the ground, on pallets, on conveyor belts and/or in material bins.
In a real industrial scene, on-site staff are generally the ones who set the robot's parameters for a specific grasping task, yet they are unfamiliar with the principles of grasping, so when a problem appears they know neither where it lies nor how to change the settings to fix it. For example, when multiple stacked items were being grasped and an item was dragged out of the bin, the on-site worker judged that the cause was the gripper reaching past the upper items and directly grasping a lower one, but he could not determine why the robot assigned the lower item a higher priority value, nor how to set the weights to change the robot's grasping order. To solve this problem, the inventors developed a method that visually presents the graphics and parameters involved in the grasping process to on-site staff, as they need them, for operation; this is one of the key points of the present invention.
Fig. 2 shows a schematic flowchart of a method for visualizing the graphics and parameters of a grasping process according to one embodiment of the present invention. As shown in Fig. 2, the method includes:
Step S400: acquire image data including one or more items to be grasped;
Step S410: output the image data and operable controls to form an interactive interface, the controls being operable by the user to select a grasp auxiliary image and to display the selected grasp auxiliary image to the user;
Step S420: in response to the user's operation of the controls, acquire grasp auxiliary data corresponding to the grasp auxiliary image selected by the user;
Step S430: generate a grasp auxiliary layer based on the acquired grasp auxiliary data;
Step S440: combine the grasp auxiliary layer with the image data including one or more items to be grasped to generate the user-selected grasp auxiliary image.
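For illustration, steps S400–S440 can be pictured as a small end-to-end pipeline. This is a sketch under assumptions of our own: the function name, the dict-based store of per-attribute auxiliary data, and the red-channel encoding of the value map are not the patent's implementation.

```python
import numpy as np

def generate_grasp_aux_image(image, aux_data_store, selected_attr):
    """S400-S440: look up the auxiliary data the user selected,
    turn it into a layer, and composite it over the scene image."""
    # S420: auxiliary data = per-pixel value map + graspable-region mask
    values, mask = aux_data_store[selected_attr]
    # S430: build an RGB layer, here encoding the value in the red channel
    layer = np.zeros(image.shape, dtype=np.uint8)
    layer[..., 0] = (values * 255).astype(np.uint8)
    # S440: composite the layer over the image, inside the mask only
    out = image.copy()
    out[mask] = layer[mask] // 2 + image[mask] // 2
    return out

scene = np.full((2, 2, 3), 40, dtype=np.uint8)              # S400: captured image
store = {"height": (np.array([[1.0, 0.0], [0.0, 0.0]]),     # value map
                    np.array([[True, False], [False, False]]))}  # graspable mask
aux_image = generate_grasp_aux_image(scene, store, "height")  # S410/S420: user picks "height"
```

Only the masked pixel is altered; the rest of the scene image passes through unchanged, which matches the idea of overlaying auxiliary information on the original view.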
Regarding step S400, the present invention is applicable to industrial scenes containing one or more items to be grasped, where a gripper grasps all the items to be grasped in turn and places the grasped items at specified positions. This embodiment does not restrict the type of the image data or how it is acquired. As an example, the acquired image data may include a point cloud or an RGB color image. Point cloud information may be obtained with a 3D industrial camera, which is generally fitted with two lenses that capture the group of items to be grasped from different angles; after processing, a three-dimensional image of the objects can be presented. The group of items to be grasped is placed below the vision sensor and the two lenses shoot simultaneously; from the relative pose parameters of the two resulting images, a general binocular stereo vision algorithm computes the X, Y and Z coordinate values of each point and each point's orientation, which are then converted into the point cloud data of the group of items to be grasped. In a concrete implementation, the point cloud may also be generated with components such as laser detectors, visible-light detectors such as LEDs, infrared detectors and radar detectors; the present invention does not limit the specific implementation.
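The binocular triangulation referred to here can be illustrated with the standard rectified-stereo pinhole relations. The focal length, baseline and pixel coordinates below are made-up example numbers, not values from the patent:

```python
def stereo_point(u, v, disparity, f, baseline, cx, cy):
    """Recover X, Y, Z (camera frame) for one rectified stereo match.

    disparity: horizontal pixel shift of the point between the two lenses
    f:         focal length in pixels
    baseline:  distance between the two lenses in metres
    (cx, cy):  principal point in pixels
    """
    z = f * baseline / disparity   # depth from disparity
    x = (u - cx) * z / f           # back-project through the pinhole model
    y = (v - cy) * z / f
    return x, y, z

# Example: f = 500 px, baseline = 0.1 m, principal point (320, 240)
x, y, z = stereo_point(u=420, v=240, disparity=25, f=500, baseline=0.1, cx=320, cy=240)
```

Running this per matched pixel over the whole image pair yields exactly the per-point X, Y, Z values that the text says are assembled into the point cloud.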
The point cloud data acquired in this way is three-dimensional. To filter out the data of dimensions that have little influence on grasping, reduce the amount of data to be processed, and thereby speed up processing and improve efficiency, the acquired 3D point cloud of the group of items to be grasped may be orthographically projected onto a two-dimensional plane.
As an example, the depth map corresponding to this orthographic projection may also be generated. A two-dimensional color image corresponding to the three-dimensional item region, together with the depth map corresponding to that color image, may be acquired along the depth direction perpendicular to the items. The two-dimensional color image corresponds to the image of the planar region perpendicular to the preset depth direction; each pixel of the depth map corresponds one-to-one to a pixel of the two-dimensional color image, and the value of each pixel is that pixel's depth value.
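A minimal sketch of this projection-plus-depth-map idea, under assumptions of our own: Z is taken as the depth axis, the 2D plane is discretized into a fixed grid, and when several points fall into one cell the largest Z (the point nearest the downward-looking camera) is kept:

```python
import numpy as np

def point_cloud_to_depth_map(points, grid_shape, cell_size):
    """Orthographically project 3D points onto the XY plane and keep,
    per 2D grid cell, the largest Z value as that pixel's depth."""
    depth = np.zeros(grid_shape, dtype=np.float32)
    for x, y, z in points:
        i, j = int(y // cell_size), int(x // cell_size)
        if 0 <= i < grid_shape[0] and 0 <= j < grid_shape[1]:
            depth[i, j] = max(depth[i, j], z)  # topmost point wins
    return depth

cloud = [(0.5, 0.5, 2.0),   # two points land in cell (0, 0) ...
         (0.6, 0.4, 3.0),
         (1.5, 0.5, 1.0)]   # ... one point in cell (0, 1)
dmap = point_cloud_to_depth_map(cloud, grid_shape=(2, 2), cell_size=1.0)
```

The projection discards the within-cell X/Y detail, which is exactly the data-reduction step the text describes, while the depth map preserves the height information needed later for grasp ordering.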
Regarding step S410, the captured picture and the controls may be output to a display and shown to the user. The interaction between the user and the robot may use touch operation, voice operation, or traditional input devices such as a mouse and keyboard; the present invention places no restriction on this. The interactive interface is the channel through which a person and a computer system exchange information: through it the user inputs information into the computer system and operates it, and the computer presents information to the user through it for reading, analysis and judgment. Each interactive interface comprises an information display area and controls that the user can operate. The controls governing visualization may be displayed together with the image in one interactive interface, or be split from the image into two interfaces, with the image interface providing an entry to the control interface and the control interface providing an entry back to the image interface; when the user operates that entry, the view switches to the control interface or the image interface. As shown in Fig. 3a, the control interface offers visualization-related operations, including: turning visualization on, displaying the outlines of overlapped objects, and choosing the visualized attribute.
The visualized attribute may include any parameter output in any of the foregoing embodiments; the selectable attributes in Fig. 3a include: ALL, display by pose height, display by suction cup size, display by degree of overlap, display by transparency, and display by pose orientation.
Regarding step S420, the user may select whichever values interest him. For example, when the user finds that the robot is not grasping in the expected order, he may select the ALL control to display the grasp priority value of every item to be grasped and see where the actual grasping order differs from the expected one, and afterwards select specific visualization attributes individually to determine exactly which attributes affected the grasping order. When the user selects a visualization option, the system looks up and retrieves the corresponding data. In a preferred embodiment, in response to the user's selection, the system fetches both the parameter the user selected and the mask of the graspable region, for use together as auxiliary data. For example, when the user selects "display by suction cup size", the system retrieves both the graspable-region mask and the suction-cup-size value; similarly, when the user selects "display by pose height", it retrieves the graspable-region mask and the mask height feature value.
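This selection-to-data lookup can be pictured as a simple dispatch table from the chosen visualization attribute to its (mask, value) pair. The attribute keys mirror the Fig. 3a menu, but the table layout and placeholder strings are hypothetical, for illustration only:

```python
# Hypothetical auxiliary-data store, keyed by the Fig. 3a menu options.
AUX_DATA = {
    "pose_height":  {"mask": "graspable_mask", "value": "mask_height"},
    "suction_size": {"mask": "graspable_mask", "value": "suction_cup_size"},
}

def fetch_aux_data(selected):
    """S420: return the graspable-region mask plus the selected attribute
    value; 'ALL' returns every attribute so priorities can be compared."""
    if selected == "ALL":
        return list(AUX_DATA.values())
    return AUX_DATA[selected]

chosen = fetch_aux_data("suction_size")
```

Note that every entry carries the same mask: as the text says, the mask is always fetched alongside whichever numeric attribute the user picked.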
One feasible way to determine the graspable area and generate its mask is as follows. First, after acquiring image data containing one or more items to be grasped, process the image data to identify every pixel in the image; for a 256*256 image, for instance, 256*256 = 65,536 pixels should be identified. Then classify all pixels in the image based on the features of each pixel. A pixel's features here chiefly mean its RGB values; in practical application scenarios, to simplify feature classification, the RGB color image may also be converted to a grayscale image and the gray values used for classification. The categories into which pixels are to be divided can be determined in advance. For example, if the captured RGB image contains a pile of beverage cans, food boxes, and material bins, and the goal is to generate masks for the cans, boxes, and bins, then the predetermined categories can be beverage can, food box, and material bin. Each of these three categories can be given an identifier. The identifier can be a number, say 1 for beverage cans, 2 for food boxes, and 3 for material bins; or a color, say red for beverage cans, blue for food boxes, and green for material bins. After classification and processing on this basis, the resulting image marks beverage cans with 1 or red, food boxes with 2 or blue, and material bins with 3 or green. What this embodiment generates is the mask of an item's graspable area, so only the graspable area is given a category, for example blue; the blue region of an image processed this way is then the mask of the graspable area of the item to be grasped. Next, an image output channel is created for each category; the role of this channel is to extract all category-related features of the input image as its output. For example, after creating an image output channel for the graspable-area category, feeding the captured RGB color image into that channel yields, at its output, an image in which the features of the graspable area have been extracted. Finally, combining this feature image of the graspable area with the original RGB image produces synthesized image data on which the graspable-area mask is marked.
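The pixel-classification route above can be sketched in a few lines of numpy. This is a minimal illustration, not the patent's implementation: the `classify` callable stands in for whatever per-pixel classifier is used (gray-value thresholding, a segmentation network, etc.), and the class id `GRASPABLE` is an assumed label.

```python
import numpy as np

GRASPABLE = 1  # illustrative class id for the "graspable area" category

def graspable_mask(rgb, classify):
    """Classify every pixel and keep only the graspable class as a 0/1 mask."""
    labels = classify(rgb)                      # (H, W) array of class ids
    return (labels == GRASPABLE).astype(np.uint8)

def composite(rgb, mask, color=(0, 0, 255)):
    """Combine the mask with the original image: mark masked pixels (here blue)."""
    out = rgb.copy()
    out[mask.astype(bool)] = color
    return out
```

The combination step at the end mirrors the text: the extracted feature image (the mask) is overlaid on the original RGB data to yield synthesized image data with the graspable area marked.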
Masks generated this way are sometimes unsuitable. For example, the size and shape of some masks are inconvenient for subsequent processing; or, although a mask has been generated for a region, the gripper cannot actually perform a grasp at the mask's position. Unsuitable masks strongly affect the subsequent processing, so the generated masks need to be preprocessed before other steps use them. As shown in Figure 1, preprocessing of the masks may include:

1. Dilating the mask to fill defects such as gaps and irregularities in the mask image. For example, for each pixel of the mask, a certain number of surrounding points, say 8-25 points, can be set to the same color as that pixel. This step amounts to filling in the surroundings of every pixel, so if the item mask has gaps, the operation fills them all in; after this processing the item mask is complete, with no missing parts, and the mask as a whole also becomes slightly "fatter" through dilation. Moderate dilation benefits the further image processing operations that follow.

2. Checking whether the mask's area satisfies a predetermined condition, and discarding the mask if it does not. First, a very small mask region is most likely erroneous: because of the continuity of image data, a graspable area normally comprises a large number of pixels with similar features, and a mask region formed from a few discrete pixels may not be a real graspable area. Second, the robot's end effector, i.e. the gripper, needs a landing spot of a certain area when performing a grasping task; if the graspable area is too small, the gripper cannot land on it at all and the item cannot be grasped, so an overly small mask is meaningless. The predetermined condition can be set according to factors such as gripper size and noise size; its value can be an absolute size, a pixel count, or a ratio. For example, the condition can be set to 0.1%: when the ratio of the mask's area to the whole image's area is below 0.1%, the mask is considered unusable and is removed from the image.

3. Checking whether the number of point-cloud points inside the mask falls below a preset minimum. The point count reflects the quality of the camera's acquisition: too few points in a graspable area indicate that the capture of that area was not accurate enough. The point cloud may be used to control the gripper during grasping, and too few points can disturb that control process. A minimum number of points per mask region can therefore be set, for example 10: when a graspable area is covered by fewer than 10 points, either the mask is removed from the image data or points are randomly added to the graspable area until 10 are reached.

The mask height refers to the height of the mask of an item's graspable area, and can also be a Z coordinate value. The mask height reflects how high the graspable surface of the object sits. Since there are multiple items to be grasped and they are stacked together, grasping the upper items first has two benefits: it prevents an upper item from being dragged along because the lower item beneath it is pinned, and it avoids knocking down higher items and interfering with the grasping of lower ones; moreover, items on top are plainly easier to grasp than items underneath. The mask height can be obtained from a depth map or from the point cloud at the mask's location. In one embodiment, a point cloud containing the one or more items to be grasped is acquired first; the point cloud is a data set of points in a preset coordinate system, and to simplify the height computation the camera can shoot from directly above the items to be grasped. Then, based on the mask region, the point cloud contained in the mask region is obtained, and the pose keypoint of the graspable area represented by the mask and the depth value of that keypoint are computed. The three-dimensional pose information of an item object describes the attitude of the object to be grasped in the three-dimensional world; a pose keypoint is a pose point that reflects the three-dimensional positional features of the graspable area. It can be computed as follows:
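The three preprocessing checks can be sketched with plain numpy. This is an illustrative sketch under stated assumptions: the dilation radius, the 0.1% area ratio, and the 10-point minimum are the example values from the text, and the helper names are invented for illustration.

```python
import numpy as np

def dilate(mask, r=1):
    """Naive binary dilation: each output pixel is the OR of its
    (2r+1) x (2r+1) neighborhood, filling small gaps in the mask."""
    h, w = mask.shape
    padded = np.pad(mask, r)
    out = np.zeros_like(mask)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out |= padded[r + dy : r + dy + h, r + dx : r + dx + w]
    return out

def keep_mask(mask, n_points, min_ratio=0.001, min_points=10):
    """Discard masks below 0.1% of the image area or covered by
    fewer than the preset minimum number of point-cloud points."""
    area_ok = mask.sum() / mask.size >= min_ratio
    return bool(area_ok and n_points >= min_points)
```

In practice a library routine (e.g. a morphological dilation from an image-processing package) would replace the nested loop; the sketch only makes the neighborhood-filling idea explicit.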
First, obtain the three-dimensional position coordinates of each data point in the mask region, and determine the position information of the pose keypoint of the graspable area corresponding to the mask from the result of a preset operation applied to those coordinates. For example, suppose the point cloud of the mask region contains 100 data points: obtain the three-dimensional position coordinates of the 100 data points, compute the average of those coordinates, and take the data point corresponding to the average as the pose keypoint of the graspable area corresponding to the mask region. Besides averaging, the preset operation may of course also be taking the centroid, the maximum, the minimum, and so on; the present invention places no limitation on this. Then, find the direction of smallest variation and the direction of largest variation among the 100 data points. The direction of smallest variation is taken as the Z axis (i.e. the depth direction, consistent with the camera's shooting direction), the direction of largest variation is taken as the X axis, and the Y axis is determined via the right-handed coordinate system, thereby establishing the three-dimensional state information of the pose keypoint's position, which reflects the directional characteristics of the pose keypoint in three-dimensional space.
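The mean-plus-principal-directions construction above is essentially a PCA of the mask's point cloud. The following sketch assumes that reading; the function name and the use of the covariance eigendecomposition are illustrative choices, not mandated by the patent.

```python
import numpy as np

def pose_from_points(points):
    """Keypoint = mean of the mask's 3D points; axes from the data's spread.

    Per the text: the direction of smallest variation becomes Z (depth),
    the direction of largest variation becomes X, and Y completes a
    right-handed frame.
    """
    points = np.asarray(points, dtype=float)
    keypoint = points.mean(axis=0)
    centered = points - keypoint
    # eigh returns eigenvalues in ascending order, eigenvectors as columns
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    z_axis = vecs[:, 0]                 # smallest variation -> depth
    x_axis = vecs[:, -1]                # largest variation
    y_axis = np.cross(z_axis, x_axis)   # right-handed completion
    return keypoint, x_axis, y_axis, z_axis
```

For a roughly planar graspable surface, the smallest-variance direction found this way is the surface normal, which matches taking it as the depth (Z) direction.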
Finally, compute the pose keypoint of the item's graspable area corresponding to each mask region, together with the depth value of that keypoint. The depth value of a pose keypoint is the coordinate of the item's graspable area on a depth coordinate axis, where the depth axis is set according to the camera's shooting direction, the direction of gravity, or the direction of the line perpendicular to the plane containing the graspable area. Correspondingly, the depth value reflects the position of the graspable area on that axis. In a concrete implementation, the origin and direction of the depth axis can be set flexibly by those skilled in the art; the present invention does not limit how the origin of the depth axis is chosen. For example, when the depth axis is set according to the camera's shooting direction, its origin can be the position of the camera and its direction the direction from the camera toward the items. The depth value of each graspable area's mask then corresponds to the negative of the distance from the graspable area to the camera, i.e. the farther from the camera, the lower the mask's depth value; this depth value is used as the mask height feature value.
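Under the camera-axis convention just described, the mask height feature reduces to a negated camera distance. A minimal sketch (function name assumed for illustration):

```python
import numpy as np

def mask_height_feature(keypoint, camera_origin):
    """Depth value per the text: with the axis pointing from the camera
    toward the scene, the value is the negated camera distance, so masks
    farther from the camera get lower values and higher (nearer) surfaces
    rank first for grasping."""
    diff = np.asarray(keypoint, float) - np.asarray(camera_origin, float)
    return -float(np.linalg.norm(diff))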
The gripper size refers to the size of the gripper configured for an item to be grasped. Since the graspable area of an item lies on the object's surface, gripping the item with a gripper essentially means controlling the gripper to perform a grasping operation within the graspable area, so the gripper size can also be counted as a feature of the mask of the item's graspable area. The influence of gripper size on grasping mainly shows in whether the gripper may accidentally touch items other than the one it targets. For example, if a large suction cup is used, then compared with a small one it is more likely, when grasping among many stacked objects, to collide with other items during the grasp, causing the cup to wobble or the objects to shift position, which in turn may make the grasp fail. In actual industrial settings, the grippers used by each system may be determined in advance; that is, the gripper size may already be fixed before any actual grasping. In this embodiment the gripper size can therefore be obtained from the configured gripper together with a pre-established and stored mapping between grippers and their sizes.
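The pre-established gripper-to-size mapping can be as simple as a configuration table. The ids and sizes below are invented examples; in a real system they would come from the deployment configuration.

```python
# Hypothetical mapping from configured gripper id to its size (in mm).
GRIPPER_SIZES_MM = {
    "suction_cup_small": 20,
    "suction_cup_large": 50,
}

def gripper_size(gripper_id: str) -> int:
    """Look up the pre-established size for the configured gripper."""
    return GRIPPER_SIZES_MM[gripper_id]
```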
For step S430, the data retrieved in step S420 is combined to generate a visualization layer the user can view. Take the case where the user has selected "display by pose height" and "display by suction cup size", and the grasping auxiliary data also includes the masks of the graspable areas. When the user selects "display by pose height", the mask of each item to be grasped in the original image and each item's mask height feature value are retrieved, and a layer is generated that places each mask height feature value next to the corresponding mask. When the user selects "display by suction cup size", the mask of each item to be grasped in the original image and each item's suction cup size value are retrieved, and a layer is generated that places each suction cup size value next to the corresponding mask.
For step S440, the grasping auxiliary layer generated in step S430 is composited with the originally captured image data and presented to the user visually. The layer generated in step S430 can be processed to adjust attributes such as its color, transparency, and contrast; then, proceeding from left to right and top to bottom, all pixels of the auxiliary image layer are combined in turn with all pixels of the original image data, producing the composited image data. As shown in Figure 3b, the composited image shows each item to be grasped, the mask of the graspable area overlaid on each item, and, displayed next to each mask, the user-selected "pose height" or "suction cup size" value.
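The pixel-by-pixel combination with an adjustable transparency attribute is, in effect, an alpha blend. A minimal sketch, assuming the layer comes with a 0/1 content mask and a scalar opacity (both names illustrative):

```python
import numpy as np

def blend_layer(image, layer, alpha_mask, opacity=0.5):
    """Combine the auxiliary layer with the original image pixel by pixel.

    `alpha_mask` is 1 where the layer has content (mask overlay, value
    labels) and 0 elsewhere; `opacity` is the adjustable transparency.
    """
    image = image.astype(float)
    layer = layer.astype(float)
    a = alpha_mask[..., None] * opacity       # broadcast over RGB channels
    return (image * (1 - a) + layer * a).astype(np.uint8)
```

With `opacity=0.5`, the masked regions show the overlay at half strength while unmasked pixels keep the original image, which matches the composited view described for Figure 3b.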
In addition, it should be noted that although each embodiment of the present invention has its own particular combination of features, further combinations and cross-combinations of these features across embodiments are also feasible.
Figure 4 shows an image data processing device according to a further embodiment of the present invention. The device includes:
an image data acquisition module 800, configured to acquire image data containing one or more items to be grasped, i.e. to implement step S400;
an interactive interface display module 810, configured to output the image data and operable controls to form an interactive interface, the controls being operable by the user to select a grasping auxiliary image, and to present the selected grasping auxiliary image to the user, i.e. to implement step S410;
an auxiliary data acquisition module 820, configured to acquire, in response to the user's operation of the controls, the grasping auxiliary data corresponding to the grasping auxiliary image selected by the user, i.e. to implement step S420;
an auxiliary layer generation module 830, configured to generate a grasping auxiliary layer based on the acquired grasping auxiliary data, i.e. to implement step S430;
an auxiliary image generation module 840, configured to combine the grasping auxiliary layer with the image data containing one or more items to be grasped to generate the grasping auxiliary image selected by the user, i.e. to implement step S440.
It should be understood that in the device embodiment shown in Figure 4 above, only the main functions of the modules are described. The full functionality of each module corresponds to the respective steps of the method embodiment, and the working principle of each module can likewise be found in the description of the corresponding steps of the method embodiment. For example, the statement above that the auxiliary image generation module 840 implements the method of step S440 means that the content describing and explaining step S440 also describes and explains the functionality of the auxiliary image generation module 840. In addition, although the embodiments above define a correspondence between module functions and method steps, those skilled in the art will understand that the functions of the modules are not limited to that correspondence; that is, a given functional module can also implement other method steps or parts of method steps. For example, while the embodiment above describes the auxiliary image generation module 840 as implementing the method of step S440, module 840 can, as the actual situation requires, also implement all or part of the methods of steps S400, S410, S420, or S430.
The present application also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the method of any of the embodiments above. It should be pointed out that the computer program stored on the computer-readable storage medium of an embodiment of the present application can be executed by the processor of an electronic device, and, moreover, the computer-readable storage medium can be either a storage medium built into the electronic device or a pluggable storage medium inserted into the electronic device. The computer-readable storage medium of the embodiments of the present application therefore offers high flexibility and reliability.
Figure 5 shows a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device may be a control system/electronic system installed in a car, a mobile terminal (e.g. a smartphone), a personal computer (PC, e.g. a desktop or notebook computer), a tablet computer, a server, and so on; the specific embodiments of the present invention do not limit the specific implementation of the electronic device.
As shown in Figure 5, the electronic device may include: a processor 1202, a communications interface 1204, a memory 1206, and a communication bus 1208.
Here:
The processor 1202, the communications interface 1204, and the memory 1206 communicate with one another via the communication bus 1208.
The communications interface 1204 is used to communicate with network elements of other devices, such as clients or other servers.
The processor 1202 is configured to execute a program 1210, and specifically may carry out the relevant steps of the method embodiments above.
Specifically, the program 1210 may include program code comprising computer operating instructions.
The processor 1202 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The one or more processors included in the electronic device may be of the same type, e.g. one or more CPUs, or of different types, e.g. one or more CPUs together with one or more ASICs.
The memory 1206 is used to store the program 1210. The memory 1206 may include high-speed RAM and may also include non-volatile memory, for example at least one disk memory.
The program 1210 may be downloaded and installed from a network through the communications interface 1204 and/or installed from a removable medium. When executed by the processor 1202, the program may cause the processor 1202 to perform the operations of the method embodiments above.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "illustrative embodiment", "example", "specific example", "some examples", and the like means that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing particular logical functions or steps of the process; and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered list of executable instructions for implementing logical functions, may be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processing module, or another system that can fetch and execute instructions from the instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection with one or more wires (an electronic device), a portable computer disk cartridge (a magnetic device), random-access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it as necessary, and then stored in computer memory.
The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
It should be understood that the various parts of the embodiments of the present application can be implemented in hardware, software, firmware, or a combination thereof. In the embodiments above, multiple steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they can be implemented with any of the following techniques known in the art, or a combination thereof: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGA), field-programmable gate arrays (FPGA), and so on.
Those of ordinary skill in the art will understand that all or part of the steps of the methods of the embodiments above can be carried out by a program instructing the relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, includes one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the various embodiments of the present application can be integrated into one processing module, can each exist physically on their own, or two or more units can be integrated into one module. The integrated module can be implemented in the form of hardware or in the form of a software functional module. If implemented in the form of a software functional module and sold or used as an independent product, the integrated module can also be stored in a computer-readable storage medium.
The storage medium mentioned above can be a read-only memory, a magnetic disk, an optical disc, or the like.
Although embodiments of the present application have been shown and described above, it will be understood that the embodiments above are illustrative and are not to be construed as limiting the present application; those of ordinary skill in the art can make changes, modifications, substitutions, and variations to the embodiments above within the scope of the present application.
Claims (12)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111426988.4A CN116197887B (en) | 2021-11-28 | 2021-11-28 | Image data processing method, device, electronic equipment and storage medium for generating capture auxiliary images |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN116197887A CN116197887A (en) | 2023-06-02 |
| CN116197887B true CN116197887B (en) | 2024-01-30 |
Family
ID=86511589
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202111426988.4A Active CN116197887B (en) | 2021-11-28 | 2021-11-28 | Image data processing method, device, electronic equipment and storage medium for generating capture auxiliary images |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN116197887B (en) |
Citations (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106863299A (en) * | 2017-02-13 | 2017-06-20 | 华北电力大学(保定) | A multilayer configurable legged climbing robot motion control system |
| CN107404617A (en) * | 2017-07-21 | 2017-11-28 | 努比亚技术有限公司 | An image capture method, terminal, and computer-readable storage medium |
| JP2019042853A (en) * | 2017-08-31 | 2019-03-22 | Thk株式会社 | Image information processing apparatus, gripping system, and image information processing method |
| CN109648568A (en) * | 2019-01-30 | 2019-04-19 | 北京镁伽机器人科技有限公司 | Robot control method, system and storage medium |
| CN110238840A (en) * | 2019-04-24 | 2019-09-17 | 中山大学 | A Vision-Based Robotic Arm Autonomous Grasping Method |
| JP2020021212A (en) * | 2018-07-31 | 2020-02-06 | キヤノン株式会社 | Information processing device, information processing method, and program |
| CN111080670A (en) * | 2019-12-17 | 2020-04-28 | 广州视源电子科技股份有限公司 | Image extraction method, device, equipment and storage medium |
| CN111508066A (en) * | 2020-04-16 | 2020-08-07 | 北京迁移科技有限公司 | 3D vision-based unordered stacked workpiece grabbing system and interaction method |
| CN112114929A (en) * | 2020-09-29 | 2020-12-22 | 青岛海信移动通信技术股份有限公司 | Display apparatus and image display method thereof |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2014195183A (en) * | 2013-03-29 | 2014-10-09 | Brother Ind Ltd | Program and communication apparatus |
| AU2015224397A1 (en) * | 2015-09-08 | 2017-03-23 | Canon Kabushiki Kaisha | Methods for adjusting control parameters on an image capture device |
| TWI584859B (en) * | 2016-04-27 | 2017-06-01 | 寶凱電子企業股份有限公司 | Interactive type grabbing machine and control method thereof |
| US10885622B2 (en) * | 2018-06-29 | 2021-01-05 | Photogauge, Inc. | System and method for using images from a commodity camera for object scanning, reverse engineering, metrology, assembly, and analysis |
| WO2020264418A1 (en) * | 2019-06-28 | 2020-12-30 | Auris Health, Inc. | Console overlay and methods of using same |
- 2021-11-28: CN application CN202111426988.4A filed; granted as patent CN116197887B (en); status Active
Non-Patent Citations (1)
| Title |
|---|
| A method for producing 3D stereoscopic images from a three-dimensional model; Li Yongcheng; Li Mengyu; Computer Era (09); pp. 62-65, 68 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN116197887A (en) | 2023-06-02 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP7520187B2 (en) | Multi-camera image processing | |
| DE102019130048B4 | A robotic system with a piece loss management mechanism | |
| CN113379849B (en) | Robot autonomous recognition intelligent grabbing method and system based on depth camera | |
| CN108972549B (en) | Real-time obstacle avoidance planning and grabbing system for industrial robotic arm based on Kinect depth camera | |
| JP5429614B2 (en) | Box-shaped workpiece recognition apparatus and method | |
| WO2023092519A1 (en) | Grabbing control method and apparatus, and electronic device and storage medium | |
| CN110948492A (en) | A 3D grasping platform and grasping method based on deep learning | |
| JP7398662B2 (en) | Robot multi-sided gripper assembly and its operating method | |
| CN108908334A | An intelligent grabbing system and method based on deep learning | |
| JP2013184279A (en) | Information processing apparatus, and information processing method | |
| US20250249589A1 (en) | Systems and methods for teleoperated robot | |
| CN114299039B (en) | Robot and collision detection device and method thereof | |
| CN116175542B (en) | Method, device, electronic equipment and storage medium for determining gripper grabbing sequence | |
| CN116630733A (en) | Apparatus and method for training machine learning model to generate descriptor images | |
| WO2021039775A1 (en) | Image processing device, image capturing device, robot, and robot system | |
| CN110480636A | A mechanical arm control system based on 3D vision | |
| Vo et al. | Development of multi-robotic arm system for sorting system using computer vision | |
| CN115194774A (en) | Binocular vision-based control method for double-mechanical-arm gripping system | |
| CN114347015A (en) | Robot grabbing control method, system, device and medium | |
| CN116197885B (en) | Image data filtering method, device, equipment and medium based on press-fit detection | |
| CN116197887B (en) | Image data processing method, device, electronic equipment and storage medium for generating capture auxiliary images | |
| Lin et al. | Vision based object grasping of industrial manipulator | |
| CN116188559A (en) | Image data processing method, device, electronic equipment and storage medium | |
| JP2018146347A (en) | Image processing device, image processing method, and computer program | |
| KR20230175122A (en) | Method for controlling a robot for manipulating, in particular picking up, an object |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| CP03 | Change of name, title or address | ||
Address after: Room 210, Unit 3, North District of Chuangzhi Park, No. 164 Yining Avenue, Qibuzhou, Xiongan New Area, Baoding City, Hebei Province, 071700 (self-declared) (enterprise with one license and multiple addresses)
Patentee after: Mech-Mind Robotics Technologies Co.,Ltd.
Country or region after: China
Address before: Room 1100, 1st Floor, No. 6 Chuangye Road, Shangdi Information Industry Base, Haidian District, Beijing
Patentee before: MECH-MIND (BEIJING) ROBOTICS TECHNOLOGIES CO.,LTD.
Country or region before: China