
CN112286178B - Identification system, vehicle control system, identification method, and storage medium - Google Patents


Info

Publication number
CN112286178B
Application number
CN202010707780.9A
Other versions
CN112286178A
Other languages
Chinese (zh)
Authority
CN (China)
Prior art keywords
vehicle, road surface, individual, plane, unit
Legal status
Active (the legal status is an assumption and is not a legal conclusion)
Inventor
李亦杨
Current and original assignee
Honda Motor Co Ltd
Filing history
Application filed by Honda Motor Co Ltd; publication of CN112286178A; application granted; publication of CN112286178B

Classifications

    • G: Physics
    • G05: Controlling; regulating
    • G05D: Systems for controlling or regulating non-electric variables
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02, G05D1/021: Control of position or course in two dimensions, specially adapted to land vehicles
    • G05D1/0236: using optical position detecting means, with optical markers or beacons in combination with a laser
    • G05D1/0221: with means for defining a desired trajectory, involving a learning process
    • G05D1/0251: using a video camera in combination with image processing means, extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G05D1/0257: using a radar
    • G05D1/0276: using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an identification system, a vehicle control system, an identification method, and a storage medium capable of determining a road surface more accurately. The identification system is mounted on a vehicle and includes: a detection unit that detects the positions of objects present around the vehicle; and a determination unit that determines the road surface around the vehicle based on the detection results of the detection unit. For each individual area obtained by subdividing the detection results of the detection unit on a two-dimensional plane, the determination unit judges whether the area is a plane using a prescribed algorithm, and aggregates the judgment results for the individual areas to determine the road surface around the vehicle.

Description

Identification system, vehicle control system, identification method, and storage medium

Technical Field

The present invention relates to an identification system, a vehicle control system, an identification method, and a storage medium.

Background Art

Conventionally, there has been disclosed an invention of an autonomous vehicle that includes a road shape recognition unit that recognizes the shape of the road, a travel route creation unit that creates a travel route using the recognized road shape, and a vehicle travel control device that realizes autonomous travel along the travel route. The road shape recognition unit includes a coordinate information acquisition unit that acquires a plurality of pieces of coordinate information associating plane coordinates with height information, a coordinate extraction unit that extracts, from the plurality of pieces of coordinate information, a plurality of coordinates of interest whose height differences are equal to or greater than a predetermined value, and a shape determination unit that determines the road shape by statistically processing the plurality of extracted coordinates of interest (Patent Document 1: Japanese Unexamined Patent Application Publication No. 2010-250743).

Summary of the Invention

[Problem to Be Solved by the Invention]

In the conventional technology, portions where the height difference is equal to or greater than a predetermined value are identified as, for example, road shoulders or ditches, and other portions are identified as the road surface. With this method, however, a part of the road surface may fail to be recognized as road surface because of the curvature of the road surface itself, or small obstacles may be overlooked when the predetermined value is increased.

One object of the present invention is to provide an identification system, a vehicle control system, an identification method, and a storage medium capable of determining a road surface more accurately.

[Means for Solving the Problem]

The identification system, vehicle control system, identification method, and storage medium of the present invention adopt the following configurations.

(1): An identification system according to one aspect of the present invention is mounted on a vehicle and includes: a detection unit that detects the positions of objects present around the vehicle; and a determination unit that determines the road surface around the vehicle based on the detection results of the detection unit. For each individual area obtained by subdividing the detection results of the detection unit on a two-dimensional plane, the determination unit judges whether the area is a plane using a prescribed algorithm, and aggregates the judgment results for the individual areas to determine the road surface around the vehicle.

(2): In the aspect of (1) above, the detection unit is a lidar.

(3): In the aspect of (2) above, the detection unit irradiates laser light toward the periphery of the vehicle while changing its elevation or depression angle and its azimuth angle, and the determination unit judges, using the prescribed algorithm, whether each individual area is a plane, the individual areas being obtained by subdividing point cloud data in which the positions of objects, each expressed at least by an elevation or depression angle, an azimuth angle, and a distance, are projected onto a two-dimensional plane.

(4): In any one of (1) to (3) above, the determination unit varies the size of the individual areas based on the distance from the vehicle in the two-dimensional plane.

(5): In any one of (1) to (4) above, the determination unit acquires information indicating the distribution of objects around the vehicle, and changes the size of the individual areas based on the acquired information indicating the distribution of objects.

(6): In any one of (1) to (5) above, the determination unit acquires information on the type of road on which the vehicle is present, and when the acquired information indicates a specific type of road, increases the size of the individual areas compared with the case where the acquired information does not indicate a road of the specific type.

(7): A vehicle control system including the identification system according to any one of (1) to (6) above, and a travel control device that controls the travel of the vehicle based on information obtained by excluding, from the detection results of the detection unit in the identification system, the portions corresponding to the road surface determined by the determination unit.

(8): In an identification method according to another aspect of the present invention, a computer mounted on a vehicle executes the following processing: acquiring the detection results of a detection unit that detects the positions of objects present around the vehicle, and determining the road surface around the vehicle based on the detection results; in making the determination, judging, for each individual area obtained by subdividing the detection results on a two-dimensional plane, whether the area is a plane using a prescribed algorithm, and aggregating the judgment results for the individual areas to determine the road surface around the vehicle.

(9): A storage medium according to another aspect of the present invention stores a program that causes a computer mounted on a vehicle to execute the following processing: acquiring the detection results of a detection unit that detects the positions of objects present around the vehicle, and determining the road surface around the vehicle based on the detection results; in the determination, judging, for each individual area obtained by subdividing the detection results on a two-dimensional plane, whether the area is a plane using a prescribed algorithm, and aggregating the judgment results for the individual areas to determine the road surface around the vehicle.

[Effects of the Invention]

According to the aspects (1) to (9) above, the road surface can be determined more accurately.

Brief Description of the Drawings

FIG. 1 is a diagram showing a vehicle equipped with the identification system and the vehicle control system.

FIG. 2 is a configuration diagram of the object recognition device.

FIG. 3 is a diagram showing an example of point cloud data.

FIG. 4 is a diagram showing the set grid.

FIG. 5 is a diagram showing point cloud data obtained by removing the coordinates of the portions judged to be road surface by the method of a comparative example.

FIG. 6 is a diagram showing point cloud data obtained by removing the coordinates of the portions judged to be road surface by the method of the embodiment.

FIG. 7 is a diagram showing point cloud data obtained by removing the coordinates of the portions judged to be road surface by the method of the embodiment.

FIG. 8 is a flowchart showing an example of the flow of processing executed by the identification system.

[Description of Reference Symbols]

10 Lidar

50 Object recognition device

60 Lidar data processing unit

61 Point cloud data generation unit

62 Information acquisition unit

63 Road surface determination unit

63A Grid setting unit

63B Plane extraction processing unit

64 Non-road-surface object extraction unit

65 Road dividing line recognition unit

100 Travel control device.

Detailed Description of the Embodiments

Hereinafter, embodiments of the identification system, vehicle control system, identification method, and storage medium of the present invention will be described with reference to the accompanying drawings.

FIG. 1 is a diagram showing a vehicle M equipped with the identification system and the vehicle control system. The vehicle M is equipped with, for example, a lidar (Light Detection and Ranging: LIDAR) 10 (an example of the "detection unit"), a camera 20, a radar device 30, an object recognition device 50, and a travel control device 100. The combination of the lidar 10 and the object recognition device 50 is an example of the "identification system", and adding the travel control device 100 to this combination gives an example of the "vehicle control system". A detection device other than a lidar may also be used as the detection unit.

The lidar 10 irradiates light, detects the reflected light, and determines the distance to an object by measuring the time from irradiation to detection. The lidar 10 can change the irradiation direction of the light in both the elevation or depression angle (hereinafter, the vertical irradiation direction φ) and the azimuth angle (the horizontal irradiation direction θ). For example, the lidar 10 repeats an operation of fixing the irradiation direction φ and scanning while changing the irradiation direction θ, then changing the vertical irradiation direction φ, fixing φ at the changed angle, and again scanning while changing θ. Hereinafter, an irradiation direction φ is called a "layer", one scan performed with the layer fixed while the irradiation direction θ is changed is called a "cycle", and scanning all the layers is called "one scan". The layers are set as a finite number, for example from L1 to Ln (n is a natural number). The layer is changed discontinuously with respect to the angle, for example in the order L0→L4→L2→L5→L1…, so that the light irradiated in the previous cycle does not interfere with detection in the current cycle. Note that the layer change is not limited to this, and may be performed continuously with respect to the angle.

The lidar 10 outputs, for example, a data group (lidar data) in units of {φ, θ, d, p} to the object recognition device 50, where d is the distance and p is the intensity of the reflected light. The object recognition device 50 is installed at an arbitrary location in the vehicle M. In FIG. 1, the lidar 10 is installed on the roof of the vehicle M and can change the irradiation direction θ through 360 degrees, but this arrangement is merely an example; for instance, the vehicle M may instead be equipped with a lidar installed at its front that can change the irradiation direction θ through 180 degrees centered on the front of the vehicle M, and a lidar installed at its rear that can change the irradiation direction θ through 180 degrees centered on the rear of the vehicle M.
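
As a rough illustration of the {φ, θ, d, p} data unit described above, a single return can be converted into Cartesian coordinates in the sensor frame. This is a minimal sketch, not code from the patent; the axis convention (x forward, y left, z up) is an assumption.

```python
import math

def lidar_to_cartesian(phi_deg, theta_deg, d):
    """Convert one lidar return to sensor-frame Cartesian coordinates.

    phi_deg: elevation (+) or depression (-) angle in degrees
    theta_deg: azimuth angle in degrees
    d: measured distance in metres
    """
    phi = math.radians(phi_deg)
    theta = math.radians(theta_deg)
    horiz = d * math.cos(phi)      # range projected onto the horizontal plane
    x = horiz * math.cos(theta)    # forward (assumed axis convention)
    y = horiz * math.sin(theta)    # left (assumed axis convention)
    z = d * math.sin(phi)          # up; negative for depression angles
    return x, y, z

# A return 10 m away, straight ahead, at a 30-degree depression angle:
x, y, z = lidar_to_cartesian(-30.0, 0.0, 10.0)
```

The reflected-light intensity p is carried alongside the coordinates unchanged; it is used later by the road dividing line recognition.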

The camera 20 is installed at an arbitrary position from which it can photograph the periphery of the vehicle M (in particular, the area ahead of or behind it). For example, the camera 20 is installed at the upper part of the front windshield. The camera 20 is a digital camera equipped with an imaging element such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) sensor, and repeatedly photographs the periphery of the vehicle M at a predetermined cycle.

The radar device 30 radiates radio waves such as millimeter waves to the periphery of the vehicle M, detects the radio waves (reflected waves) reflected by objects, and thereby detects at least the position (distance and direction) of each object. The radar device 30 is installed at an arbitrary part of the vehicle M, for example inside the front grille.

FIG. 2 is a configuration diagram of the object recognition device 50. The object recognition device 50 includes, for example, a lidar data processing unit 60, a camera image processing unit 70, a radar data processing unit 80, and a sensor fusion unit 90. The lidar data processing unit 60 includes, for example, a point cloud data generation unit 61, an information acquisition unit 62, a road surface determination unit 63 (an example of the "determination unit"), a non-road-surface object extraction unit 64, and a road dividing line recognition unit 65. The road surface determination unit 63 includes, for example, a grid setting unit 63A and a plane extraction processing unit 63B. These components are realized by, for example, a hardware processor such as a CPU (Central Processing Unit) executing a program (software). Some or all of these components may be realized by hardware (including circuitry) such as an LSI (Large Scale Integration), ASIC (Application Specific Integrated Circuit), FPGA (Field-Programmable Gate Array), or GPU (Graphics Processing Unit), or by the cooperation of software and hardware. The program may be stored in advance in a storage device such as an HDD (Hard Disk Drive) or flash memory (a storage device including a non-transitory storage medium), or may be stored in a removable storage medium (non-transitory storage medium) such as a DVD or CD-ROM and installed by mounting the storage medium in a drive device.

The point cloud data generation unit 61 generates point cloud data based on the lidar data. The point cloud data in this embodiment is obtained by projecting the positions in three-dimensional space of the objects recognized from the lidar data onto positions on a two-dimensional plane viewed from above. FIG. 3 is a diagram showing an example of point cloud data. The two-dimensional plane on which the point cloud data is defined (in the figure, the plane spanned by the X axis and the Y axis) is, for example, a relative two-dimensional plane as observed from the lidar 10. Although not shown in the figure, height information (displacement in the direction orthogonal to the X and Y axes) is assigned to each coordinate of the point cloud data. The height information is calculated by the point cloud data generation unit 61 based on the continuity and dispersion between coordinates, the differences in irradiation angle between layers, and the like.
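
The projection described above can be sketched as follows. The record layout is illustrative only; the patent does not specify a data structure, only that each 2D coordinate carries a height attribute.

```python
def project_top_down(points_xyz):
    """Project sensor-frame 3D points onto a top-down X-Y plane,
    keeping the height z attached to each 2D coordinate, as described
    for the point cloud data above.  The dict layout is an assumption."""
    return [{"xy": (x, y), "height": z} for x, y, z in points_xyz]

cloud = project_top_down([(5.0, 1.0, -1.6), (12.0, -2.0, 0.3)])
```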

The information acquisition unit 62 acquires various kinds of information used when the grid setting unit 63A sets the grid.

For example, the information acquisition unit 62 acquires information indicating the distribution of objects around the vehicle M. The distribution of objects around the vehicle M is, for example, a value (congestion index) obtained by combining into an index some or all of the number of vehicles, pedestrians, and bicycles and the number of traffic signals, crosswalks, and intersections within the recognizable range of the identification system. The congestion index takes a higher value, for example, the more densely these elements are present. The information acquisition unit 62 may calculate the congestion index itself, or may acquire it from the camera image processing unit 70, the radar data processing unit 80, or the like.
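
One possible form of such an index is a weighted count of the traffic elements within the recognizable range. The weighting below is an assumption; the patent only says the index grows as the elements become denser.

```python
def congestion_index(counts, weights=None):
    """Hypothetical congestion index: a weighted count of traffic
    elements (vehicles, pedestrians, bicycles, signals, crosswalks,
    intersections) within the recognizable range.  The weights are
    illustrative assumptions, not values from the patent."""
    weights = weights or {}
    return sum(n * weights.get(kind, 1.0) for kind, n in counts.items())

# Pedestrians weighted more heavily than other elements (an assumption):
idx = congestion_index({"vehicles": 4, "pedestrians": 6, "crosswalks": 1},
                       weights={"pedestrians": 2.0})
```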

The information acquisition unit 62 may also acquire information on the type of road on which the vehicle M is present. The information acquisition unit 62 may acquire this information from a navigation device (not shown) mounted on the vehicle M, or may derive it from the result of the camera image processing unit 70 recognizing road signs in the camera images.

The grid setting unit 63A of the road surface determination unit 63 virtually sets a plurality of grid cells G, which are the individual areas obtained by dividing the two-dimensional plane on which the point cloud data is defined. FIG. 4 is a diagram showing the set grid G. The grid setting unit 63A sets the grid cells G, for example, as rectangles (either squares or oblong rectangles). The grid setting unit 63A may set grid cells G of the same size, or may vary the size of the grid cells G based on the distance from the vehicle M in the two-dimensional plane. For example, as shown in the figure, the grid setting unit 63A may increase the size of a grid cell G as its distance from the vehicle M increases. The grid setting unit 63A may also refrain from setting grid cells G in areas that, based on past recognition results of the identification system, do not need to be recognized (for example, off-road areas such as the far side of a guardrail or buildings). The grid setting unit 63A may set the grid cells G as arbitrary polygons such as triangles or hexagons (two or more kinds of polygon may be mixed), or as irregular shapes.
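
A minimal sketch of distance-dependent cell sizing and binning. The sizing rule (cell edges growing by a fixed amount per 10 m of distance) is invented for illustration; the patent only states that cell size may increase with distance from the vehicle M.

```python
import math

def cell_size_for(distance, base=1.0, step=10.0, growth=0.5):
    """Hypothetical sizing rule: cell edge length grows by `growth`
    metres for every `step` metres of distance from the vehicle.  All
    of these numbers are assumptions."""
    return base + growth * (distance // step)

def assign_to_grid(points_2d):
    """Bin top-down points into square cells whose edge length depends
    on each point's distance from the vehicle at the origin.  (A real
    implementation would lay the cells out once; keying the dict by
    cell size as well is a simplification.)"""
    grid = {}
    for x, y in points_2d:
        size = cell_size_for(math.hypot(x, y))
        key = (size, int(x // size), int(y // size))
        grid.setdefault(key, []).append((x, y))
    return grid

# Two nearby points share a 1 m cell; a distant point falls in a 2 m cell:
grid = assign_to_grid([(0.5, 0.5), (0.6, 0.7), (25.0, 0.0)])
```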

The grid setting unit 63A may also determine the size of the grid cells G based on the congestion index. For example, the grid setting unit 63A may reduce the size of the grid cells G as the congestion index becomes higher.

The grid setting unit 63A may also determine the size of the grid cells G based on the type of road. For example, when the road is of a specific type on which there are few traffic participants other than vehicles, such as an expressway or a motor-vehicle-only road, the grid setting unit 63A may make the grid cells G larger than when the road is not of such a specific type.

For each grid cell G, the plane extraction processing unit 63B performs plane extraction processing on the point cloud data contained in the cell, based on a robust regression estimation method such as RANSAC (Random Sample Consensus), judges whether the cell is road surface (with no object present on it), and associates the judgment result with each grid cell G. Note that the plane extraction processing unit 63B may perform another kind of plane extraction processing instead of RANSAC.

RANSAC proceeds, for example, in the following order. First, at least the number of samples required to determine a model (but not all of them) are randomly selected from the data set, a provisional model is derived from the selected samples by the least squares method or the like, the provisional model is fitted to the data, and if the number of outliers is not too large, the model is added to the model candidates. This process is repeated several times, and the model candidate that best fits the data set as a whole is taken as the correct model. In the present embodiment, the plane extraction processing unit 63B judges a grid cell G whose correct model is a plane to be road surface.
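
The per-cell plane judgment can be sketched as below. This is a generic RANSAC-style plane fit, not the patent's implementation: it fits candidate planes to minimal samples of three points rather than using least squares, and the iteration count, inlier tolerance, and inlier-ratio threshold are all assumed values.

```python
import random

def plane_from_points(p1, p2, p3):
    """Plane through three points, as (unit normal n, offset d) with n.x = d."""
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:  # the three points are collinear; no unique plane
        return None
    n = tuple(c / norm for c in n)
    return n, sum(n[i] * p1[i] for i in range(3))

def ransac_is_plane(points, iters=50, tol=0.05, inlier_ratio=0.8, seed=0):
    """Judge whether one grid cell's points lie on a single plane.

    Repeatedly fits a candidate plane to three random points and counts
    how many points fall within `tol` of it; the cell is treated as
    road surface if the best candidate explains at least `inlier_ratio`
    of the points.  All thresholds are illustrative assumptions.
    """
    if len(points) < 3:
        return False
    rng = random.Random(seed)
    best = 0
    for _ in range(iters):
        model = plane_from_points(*rng.sample(points, 3))
        if model is None:
            continue
        n, d = model
        inliers = sum(1 for p in points
                      if abs(sum(n[i] * p[i] for i in range(3)) - d) <= tol)
        best = max(best, inliers)
    return best >= inlier_ratio * len(points)

# A flat cell, and the same cell with a vertical obstacle rising from it:
flat = [(i * 0.1, j * 0.1, 0.0) for i in range(5) for j in range(5)]
wall = flat + [(0.2, 0.2, 0.2 * k) for k in range(1, 15)]
```

Judging per cell rather than over the whole cloud is what lets a curved or sloped road still look locally planar, which is the point of the grid subdivision above.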

For the grid cells G other than those judged by the plane extraction processing unit 63B to be road surface, the non-road-surface object extraction unit 64 analyzes the point cloud data, extracts the contours of the objects present on those cells, and identifies the positions of the objects corresponding to the contours. Alternatively, the non-road-surface object extraction unit 64 may extract the contours of objects based on the portions of the lidar data corresponding to the grid cells G other than those judged to be road surface by the plane extraction processing unit 63B, and identify the positions of the objects corresponding to the contours.

The road dividing line recognition unit 65 focuses on the intensity p of the reflected light in the lidar data, and identifies portions with a high rate of change in the intensity p, which arises from the color difference between the road surface and road dividing lines such as white or yellow lines, as the contours of road dividing lines. The road dividing line recognition unit 65 thereby recognizes the positions of road dividing lines such as white lines.
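
A minimal sketch of that intensity-based edge detection along one scan cycle. The rate-of-change threshold is an assumed value; the patent only requires a "high rate of change" in the intensity p.

```python
def line_edge_indices(intensities, rate_threshold=0.5):
    """Find indices where the reflected-light intensity p changes
    sharply between neighbouring returns along one scan cycle, i.e.
    candidate edges of a painted road dividing line.  The threshold
    is an illustrative assumption."""
    edges = []
    for i in range(1, len(intensities)):
        prev, cur = intensities[i - 1], intensities[i]
        if abs(cur - prev) / max(prev, cur, 1e-9) >= rate_threshold:
            edges.append(i)
    return edges

# Dark asphalt (~0.1) with a bright white line (~0.8) between returns 3 and 6:
scan = [0.1, 0.1, 0.1, 0.8, 0.8, 0.8, 0.1, 0.1]
```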

The processing results of the non-road-surface object extraction unit 64 and the road dividing line recognition unit 65 are output to the sensor fusion unit 90. The processing results of the camera image processing unit 70 and the radar data processing unit 80 are also input to the sensor fusion unit 90.

The camera image processing unit 70 performs various kinds of image processing on the camera image acquired from the camera 20, and recognizes the position, size, type, and the like of objects present around the vehicle M. The image processing performed by the camera image processing unit 70 may include processing that inputs the camera image to a trained model obtained by machine learning, and processing that recognizes objects from contour lines obtained by extracting edge points and connecting them.

The radar data processing unit 80 performs various object extraction processes on the radar data acquired from the radar device 30, and recognizes the position, size, type, and the like of objects present around the vehicle M. The radar data processing unit 80 estimates the type of an object by, for example, estimating the material of the object based on the intensity of the reflected wave from the object.

The sensor fusion unit 90 integrates the processing results input from the lidar data processing unit 60, the camera image processing unit 70, and the radar data processing unit 80, determines the positions of objects and road dividing lines, and outputs them to the travel control device 100. The processing of the sensor fusion unit 90 may include, for example, processing that obtains a logical sum, a logical product, a weighted sum, or the like of the respective processing results.
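A weighted-sum combination of the kind mentioned can be sketched as follows; the weights and decision threshold are illustrative assumptions, and the logical sum/product variants would correspond to OR/AND of per-sensor detection masks instead.

```python
import numpy as np

def fuse_weighted(lidar_conf, camera_conf, radar_conf,
                  weights=(0.4, 0.4, 0.2), thresh=0.5):
    """Combine per-cell detection confidences from three sensors by a
    weighted sum and threshold the result into a detection mask."""
    stack = np.stack([np.asarray(lidar_conf, dtype=float),
                      np.asarray(camera_conf, dtype=float),
                      np.asarray(radar_conf, dtype=float)])
    combined = np.tensordot(np.asarray(weights), stack, axes=1)
    return combined >= thresh
```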

By performing the processing described above, the recognition system can determine the road surface more accurately. FIG. 5 is a diagram showing point cloud data obtained by removing the coordinates of the portions determined to be road surface by the method of a comparative example. The method of the comparative example applies RANSAC to the lidar data as a whole (without dividing it into grids G) and removes the coordinates of the region determined to be road surface. As shown in the figure, with the method of the comparative example, many coordinates remain even in the regions A1 and A2 corresponding to the road surface. In particular, the region A1 is an uphill slope as seen from the vehicle M, and is difficult to recognize as road surface when RANSAC is applied to the data as a whole. In addition, a typical road is constructed so that its central portion is high and it becomes lower toward the edges, so with the method of the comparative example there is a possibility that, because of the height difference between the central portion and the edges of the road, the road is determined not to be road surface. Furthermore, since there are small depressions and the like on the road, parts of it may be recognized as not being road surface.

In contrast, FIGS. 6 and 7 are diagrams showing point cloud data obtained by removing the coordinates of the portions determined to be road surface by the method of the embodiment. FIG. 6 shows the result when one side of the square grid G is set to X1, and FIG. 7 shows the result when one side of the square grid G is set to X2 (X1 > X2). As these figures show, with the method of the embodiment, most of the coordinates in the regions A1 and A2 corresponding to the road surface are removed, which reduces the possibility of recognizing an object that does not actually exist as an obstacle. Note that reducing the side length of the grid G (that is, reducing the size of the grid G) can improve the accuracy of road-surface determination, but making the grid G smaller increases the processing load, so the two are in a trade-off relationship. In view of this, by increasing the size of the grid G with increasing distance from the vehicle M, distant areas, where misrecognition has little influence, can be processed with a low load, and processing with a good balance between recognition accuracy and processing load becomes possible. In addition, by reducing the size of the grid G as the congestion index becomes higher, processing that prioritizes recognition accuracy is performed in places with many traffic participants, such as urban areas, and processing that prioritizes reducing the processing load is performed elsewhere, so processing with a good balance between recognition accuracy and processing load becomes possible. Furthermore, when the road type is a specific type, increasing the size of the grid G compared with when it is not makes it possible to perform processing that prioritizes reducing the processing load in places with few traffic participants, and processing that prioritizes recognition accuracy elsewhere, so processing with a good balance between recognition accuracy and processing load becomes possible.
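The three sizing heuristics above (distance from the vehicle, congestion index, road type) can be combined in a toy sizing function; all constants here are assumed for illustration and do not come from the embodiment.

```python
def grid_side(distance_m, congestion_index, is_specific_road_type,
              base=0.5, per_metre=0.05, congested_scale=0.5):
    """Illustrative grid-G side length: grows with distance from vehicle M,
    shrinks where the congestion index is high, and grows on road types
    with few traffic participants (e.g. motorways)."""
    side = base + per_metre * distance_m
    if congestion_index > 0.5:          # crowded: favour recognition accuracy
        side *= congested_scale
    if is_specific_road_type:           # sparse: favour low processing load
        side *= 2.0
    return side
```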

The travel control device 100 is, for example, an automated driving control device that controls both the acceleration/deceleration and the steering of the vehicle M. Based on the positions of objects, white lines, and the like output by the object recognition device 50, the travel control device 100 automatically causes the vehicle M to travel within the set lane so as not to come into contact with objects, or performs automatic control such as lane changes, overtaking, branching, merging, and stopping as necessary. Alternatively, the travel control device 100 may be a driving support device or the like that automatically stops the vehicle when an object approaches. In this way, the travel control device 100 performs travel control of the vehicle M based on information output via the sensor fusion unit 90, namely the positions of objects recognized by the non-road-surface object extraction unit 64 for grids G other than the grids G determined to be road surface by the plane extraction processing unit 63B (information from which the portions corresponding to the determined road surface have been excluded).

FIG. 8 is a flowchart showing an example of the flow of processing executed by the recognition system. The lidar 10 detects objects and repeatedly outputs lidar data to the lidar data processing unit 60 (step S100).

The lidar data processing unit 60 waits until one scan's worth of lidar data has been acquired (step S102). When one scan's worth of lidar data has been acquired, the processing proceeds to step S106.

On the other hand, the information acquisition unit 62 operates, for example, asynchronously with the components of the lidar data processing unit 60 other than itself, acquires the information used for setting the grids G, and provides it to the road surface determination unit 63 (step S104).

The point cloud data generation unit 61 generates point cloud data from the lidar data (step S106). The grid setting unit 63A sets the grids G based on the information provided by the information acquisition unit 62 (step S108).
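The grid setting of step S108 can be imitated by bucketing the projected 2-D points into square cells; the cell indexing below is a generic sketch under assumed conventions, not the grid setting unit 63A's actual implementation.

```python
import math
from collections import defaultdict

def assign_to_grid(points_xy, side):
    """Bucket 2-D projected point indices into square grid cells of the
    given side length, keyed by integer (col, row) cell coordinates."""
    cells = defaultdict(list)
    for i, (x, y) in enumerate(points_xy):
        cells[(math.floor(x / side), math.floor(y / side))].append(i)
    return cells
```

Plane extraction would then run the per-cell RANSAC on the 3-D points whose indices share a cell.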

The plane extraction processing unit 63B determines, for each grid G, whether it is road surface (step S110). The non-road-surface object extraction unit 64 performs object recognition on the non-road-surface grids G that have not been determined to be road surface (step S112). Then, the lidar data processing unit 60 outputs the processing results of the non-road-surface object extraction unit 64 and the road dividing line recognition unit 65 to the sensor fusion unit 90 (step S114). The routine of the flowchart of FIG. 8 then ends.

The recognition system of the embodiment described above includes: a detection unit (lidar 10) that detects the positions of objects present around the vehicle (M); and a determination unit (road surface determination unit 63) that determines the road surface around the vehicle based on the detection results of the detection unit. The determination unit uses a predetermined algorithm (RANSAC) for each individual region (grid G) obtained by subdividing the detection results of the detection unit on a two-dimensional plane to determine whether the region is a plane, and aggregates the determination results for the individual regions to determine the road surface around the vehicle, so the road surface can be determined more accurately.

Note that the recognition system need not include some or all of the camera 20, the radar device 30, the camera image processing unit 70, the radar data processing unit 80, and the sensor fusion unit 90. For example, the recognition system may include the lidar 10 and the lidar data processing unit 60, and output the processing results of the non-road-surface object extraction unit 64 and the road dividing line recognition unit 65 to the travel control device 100.

Specific embodiments of the present invention have been described above, but the present invention is in no way limited to these embodiments, and various modifications and substitutions can be made without departing from the gist of the present invention.

Claims (6)

1. An identification system mounted on a vehicle, wherein,
the identification system is provided with:
a detection unit that detects a position of an object existing in the periphery of the vehicle; and
a determination unit that determines a road surface around the vehicle based on a detection result of the detection unit,
the determination unit determines whether or not each individual region is a plane using a predetermined algorithm for each individual region obtained by dividing the detection result of the detection unit in a two-dimensional plane, and determines a road surface around the vehicle by aggregating the determination results for each individual region,
the prescribed algorithm is the following algorithm: randomly selecting, from a data set that is the detection result of the detection unit for each individual area, a predetermined number of samples, deriving model candidates based on the selected samples, adding a derived candidate to the model candidates when the number of outliers produced by fitting it to the data is equal to or smaller than a predetermined number, setting the model candidate among these that best matches the whole data set as a correct solution model, and determining that the individual area is a road surface when the correct solution model is a plane,
the detection unit is a laser radar that irradiates laser light to the periphery of the vehicle while changing an elevation angle, a depression angle, and an azimuth angle,
the determination unit determines whether or not each individual area is a plane by using the predetermined algorithm for each individual area obtained by dividing point cloud data obtained by projecting a position of an object represented by at least an elevation angle or depression angle, an azimuth angle, and a distance onto a two-dimensional plane,
the determination section makes the sizes of the individual areas different based on the distances from the vehicle in the two-dimensional plane,
the determination unit does not set the individual area for the unnecessary recognition area obtained by referring to the past recognition result of the recognition system.
2. The identification system of claim 1, wherein,
the determination unit obtains information indicating a distribution of objects around the vehicle, and changes the size of the individual region based on the obtained information indicating the distribution of the objects.
3. The identification system according to claim 1 or 2, wherein,
the determination unit obtains information on a road type of the vehicle, and increases the size of the individual area when the obtained information on the road type indicates a road of a specific type, compared with when the obtained information on the road type does not indicate a road of a specific type.
4. A vehicle control system, wherein,
the vehicle control system includes:
the identification system of any one of claims 1 to 3; and
and a travel control device that performs travel control of the vehicle based on information excluding a portion corresponding to the road surface specified by the specifying unit from a detection result of the detecting unit in the identifying system.
5. An identification method, wherein,
the computer mounted on the vehicle performs the following processing:
a detection result of a detection unit for detecting the position of an object existing in the periphery of the vehicle is obtained,
determining a road surface of a periphery of the vehicle based on the detection result,
in the course of the determination,
a predetermined algorithm is used for each individual region obtained by dividing the detection result on a two-dimensional plane to determine whether or not the individual region is a plane,
the determination results for each of the individual regions are aggregated to determine the road surface of the periphery of the vehicle,
the prescribed algorithm is the following algorithm: randomly selecting, from a data set that is the detection result of the detection unit for each individual area, a predetermined number of samples, deriving model candidates based on the selected samples, adding a derived candidate to the model candidates when the number of outliers produced by fitting it to the data is equal to or smaller than a predetermined number, setting the model candidate among these that best matches the whole data set as a correct solution model, and determining that the individual area is a road surface when the correct solution model is a plane,
the detection unit is a laser radar that irradiates laser light to the periphery of the vehicle while changing an elevation angle, a depression angle, and an azimuth angle,
in the course of the determination,
for each individual area obtained by dividing point cloud data obtained by projecting a position of an object represented by at least an elevation angle or depression angle, an azimuth angle, and a distance onto a two-dimensional plane, determining whether or not the individual area is a plane using the predetermined algorithm,
the sizes of the individual areas are made different based on the distance from the vehicle in the two-dimensional plane,
the individual area is not set for the unnecessary recognition area obtained by referring to the past recognition result of the computer.
6. A storage medium storing a program, wherein,
the program causes a computer mounted on a vehicle to execute:
a detection result of a detection unit for detecting the position of an object existing in the periphery of the vehicle is obtained,
determining a road surface of a periphery of the vehicle based on the detection result,
in the course of the determination,
a predetermined algorithm is used for each individual region obtained by dividing the detection result on a two-dimensional plane to determine whether or not the individual region is a plane,
the determination results for each of the individual regions are aggregated to determine the road surface of the periphery of the vehicle,
the prescribed algorithm is the following algorithm: randomly selecting, from a data set that is the detection result of the detection unit for each individual area, a predetermined number of samples, deriving model candidates based on the selected samples, adding a derived candidate to the model candidates when the number of outliers produced by fitting it to the data is equal to or smaller than a predetermined number, setting the model candidate among these that best matches the whole data set as a correct solution model, and determining that the individual area is a road surface when the correct solution model is a plane,
the detection unit is a laser radar that irradiates laser light to the periphery of the vehicle while changing an elevation angle, a depression angle, and an azimuth angle,
in the course of the determination,
for each individual area obtained by dividing point cloud data obtained by projecting a position of an object represented by at least an elevation angle or depression angle, an azimuth angle, and a distance onto a two-dimensional plane, determining whether or not the individual area is a plane using the predetermined algorithm,
the sizes of the individual areas are made different based on the distance from the vehicle in the two-dimensional plane,
the individual area is not set for the unnecessary recognition area obtained by referring to the past recognition result of the computer.
CN202010707780.9A 2019-07-24 2020-07-21 Identification system, vehicle control system, identification method, and storage medium Active CN112286178B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019136089A JP7165630B2 (en) 2019-07-24 2019-07-24 Recognition system, vehicle control system, recognition method, and program
JP2019-136089 2019-07-24

Publications (2)

Publication Number Publication Date
CN112286178A CN112286178A (en) 2021-01-29
CN112286178B true CN112286178B (en) 2023-12-01

Family

ID=74420120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010707780.9A Active CN112286178B (en) 2019-07-24 2020-07-21 Identification system, vehicle control system, identification method, and storage medium

Country Status (2)

Country Link
JP (1) JP7165630B2 (en)
CN (1) CN112286178B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022152402A (en) * 2021-03-29 2022-10-12 本田技研工業株式会社 Recognition device, vehicle system, recognition method and program
CN116958918A (en) * 2022-11-15 2023-10-27 北京车和家信息技术有限公司 Road plane determining method, device, equipment, medium and vehicle
WO2024218879A1 (en) * 2023-04-18 2024-10-24 日本電信電話株式会社 Estimation device, estimation method, and estimation program
JP2025015988A (en) * 2023-07-21 2025-01-31 日立Astemo株式会社 Obstacle Detection Device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010071942A (en) * 2008-09-22 2010-04-02 Toyota Motor Corp Object detecting device
JP2013140515A (en) * 2012-01-05 2013-07-18 Toyota Central R&D Labs Inc Solid object detection device and program
JP2018112887A (en) * 2017-01-11 2018-07-19 株式会社東芝 Information processing apparatus, information processing method, and information processing program
CN108828621A (en) * 2018-04-20 2018-11-16 武汉理工大学 Obstacle detection and road surface partitioning algorithm based on three-dimensional laser radar
CN109359614A (en) * 2018-10-30 2019-02-19 百度在线网络技术(北京)有限公司 A kind of plane recognition methods, device, equipment and the medium of laser point cloud

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011191239A (en) 2010-03-16 2011-09-29 Mazda Motor Corp Mobile object position detecting device
JP6385745B2 (en) * 2014-07-22 2018-09-05 日立建機株式会社 Mining work vehicle
JP6668740B2 (en) 2015-12-22 2020-03-18 いすゞ自動車株式会社 Road surface estimation device
US10444759B2 (en) 2017-06-14 2019-10-15 Zoox, Inc. Voxel based ground plane estimation and object segmentation

Also Published As

Publication number Publication date
JP2021021967A (en) 2021-02-18
JP7165630B2 (en) 2022-11-04
CN112286178A (en) 2021-01-29

Similar Documents

Publication Publication Date Title
CN112286178B (en) Identification system, vehicle control system, identification method, and storage medium
US11320833B2 (en) Data processing method, apparatus and terminal
CN107272021B (en) Object detection using radar and visually defined image detection areas
JP5407898B2 (en) Object detection apparatus and program
JP6519262B2 (en) Three-dimensional object detection device, three-dimensional object detection method, three-dimensional object detection program, and mobile device control system
JP4650079B2 (en) Object detection apparatus and method
JP5822255B2 (en) Object identification device and program
CN111989709B (en) Processing device, object recognition system, object recognition method, automobile, and vehicle lamp
JP5820774B2 (en) Road boundary estimation apparatus and program
JP5145585B2 (en) Target detection device
JP6702340B2 (en) Image processing device, imaging device, mobile device control system, image processing method, and program
JP6340850B2 (en) Three-dimensional object detection device, three-dimensional object detection method, three-dimensional object detection program, and mobile device control system
JP3674400B2 (en) Ambient environment recognition device
JP2011511281A (en) Map matching method with objects detected by sensors
EP3324359B1 (en) Image processing device and image processing method
CN111046719A (en) Apparatus and method for converting images
JP6753134B2 (en) Image processing device, imaging device, mobile device control system, image processing method, and image processing program
EP3115933A1 (en) Image processing device, image capturing device, mobile body control system, image processing method, and computer-readable recording medium
EP3410345B1 (en) Information processing apparatus and non-transitory recording medium storing thereon a computer program
CN115038990A (en) Object recognition method and object recognition device
JP6038422B1 (en) Vehicle determination device, vehicle determination method, and vehicle determination program
JP2022076876A (en) Position estimation device, position estimation method, and position estimation program
JP6340849B2 (en) Image processing apparatus, image processing method, image processing program, and mobile device control system
KR102003387B1 (en) Method for detecting and locating traffic participants using bird's-eye view image, computer-readerble recording medium storing traffic participants detecting and locating program
EP3540643A1 (en) Image processing apparatus and image processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant