WO2018176195A1 - Classification method and classification device of indoor scene - Google Patents
Classification method and classification device of indoor scene
- Publication number
- WO2018176195A1 PCT/CN2017/078291 CN2017078291W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- classification
- observation area
- scene
- picture
- training
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/211—Selection of the most significant subset of features
- G06F18/2115—Selection of the most significant subset of features by evaluating different subsets according to an optimisation criterion, e.g. class separability, forward selection or backward elimination
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
Definitions
- The invention belongs to the technical field of computers, and in particular relates to a method and a device for classifying indoor scenes.
- Intelligent recognition and classification is a key issue in computer vision.
- Current research hotspots focus on object recognition (a picture containing one or more objects) and face recognition (an image containing a face).
- Indoor scene recognition, by contrast, is extremely challenging and is one of the most difficult classification tasks.
- The difficulty lies in the fact that indoor scenes not only contain a large number of different objects, but the placement of these objects in space also varies greatly.
- To classify indoor scenes accurately, not only the information of the objects in the scene but also the features of the entire scene structure need to be extracted.
- Current scene recognition and classification methods mainly include the spatial pyramid method, methods based on high-level semantic information, and methods based on convolutional neural networks.
- The feature representation of the spatial pyramid method relies only on low-level geometric information; without the extraction of high-level semantic information, its ability to identify scenes is limited. Scene recognition methods based on high-level semantic information are also limited, because the range of selected objects greatly affects the classification ability of the model.
- The main disadvantage of methods based on convolutional neural networks is that the training process consumes a large amount of resources, and their effectiveness lies mainly in the detection and classification of objects.
- Using a method based on convolutional neural networks, a recognition rate of 94% can be achieved for object recognition on the ImageNet dataset, whereas only a 69% recognition rate can be achieved for scene classification on the public MIT-67 dataset. The reason is that the recognition of an indoor scene depends not only on the objects in the scene but also on the overall relationships among the objects it contains, and the features directly extracted by convolutional neural network methods do not capture well the integration of global and local information.
- An object of the present invention is to provide a method and a device for classifying indoor scenes, which aim to solve the problem that existing scene recognition and classification methods are neither accurate nor fast.
- The present invention provides a method for classifying an indoor scene, the method comprising the steps of: receiving an input scene picture to be classified; acquiring a current local observation area from the scene picture according to a preset observation area positioning model; processing image information of the current local observation area to obtain a feature vector of the scene picture; acquiring a classification prediction result according to the feature vector and determining whether it satisfies a preset scene picture classification condition; acquiring the next local observation area and repeating the observation when it does not; and, when it does, obtaining the classification label of the scene picture to be classified according to the classification prediction result.
- The present invention further provides a classification device for an indoor scene, the device comprising:
- a picture receiving unit configured to receive an input scene picture to be classified;
- an area obtaining unit configured to acquire a current local observation area from the scene picture to be classified according to a preset observation area positioning model;
- a vector acquiring unit configured to process image information of the current local observation area to obtain a feature vector of the scene picture to be classified;
- a condition determining unit configured to acquire, according to the feature vector, a classification prediction result of the scene picture to be classified, and to determine whether the classification prediction result satisfies a preset scene picture classification condition;
- a repeating execution unit configured to, when the classification prediction result does not satisfy the scene picture classification condition, acquire a next local observation area from the scene picture to be classified according to the observation area positioning model, set the next local observation area as the current local observation area, and trigger the vector acquiring unit to process the image information of the current local observation area; and
- a scene classification unit configured to, when the classification prediction result satisfies the scene picture classification condition, acquire the classification label of the scene picture to be classified according to the classification prediction result.
- After receiving the input scene picture to be classified, the present invention acquires the current local observation area from the picture according to the preset observation area positioning model, and processes the image information of the current local observation area to obtain the feature vector of the scene picture to be classified.
- A classification prediction result of the scene picture is then obtained according to the feature vector, and it is determined whether the classification prediction result satisfies a preset scene picture classification condition. When it does not, the next local observation area is acquired from the picture according to the observation area positioning model and the observation is repeated; when it does, the classification label of the scene picture is obtained according to the classification prediction result.
- FIG. 1 is a flowchart of the implementation of a method for classifying an indoor scene according to Embodiment 1 of the present invention;
- FIG. 2 is a flowchart showing an implementation of establishing an observation area positioning model in a method for classifying an indoor scene according to Embodiment 2 of the present invention
- FIG. 3 is a schematic structural diagram of a device for classifying an indoor scene according to Embodiment 3 of the present invention.
- FIG. 4 is a schematic structural diagram of an apparatus for classifying an indoor scene according to Embodiment 4 of the present invention.
- Embodiment 1:
- FIG. 1 is a flowchart showing an implementation process of a method for classifying an indoor scene according to Embodiment 1 of the present invention. For convenience of description, only parts related to the embodiment of the present invention are shown, which are described in detail as follows:
- In step S101, the input scene picture to be classified is received.
- In step S102, the current local observation area is obtained from the scene picture to be classified according to a preset observation area positioning model.
- The scene picture to be classified is a picture corresponding to the indoor scene to be identified.
- According to the observation area positioning model, only one local observation area is selected from the scene picture at a time for identification and classification, as the sketch below illustrates.
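- A minimal illustrative sketch (Python; not part of the specification) of such a local observation: a fixed-size window cropped around a chosen center. The helper name `extract_glimpse`, the zero-padding at borders, and the 64-pixel window size are all assumptions:

```python
import numpy as np

def extract_glimpse(image: np.ndarray, center: tuple, size: int = 64) -> np.ndarray:
    """Crop a size x size local observation area around `center` = (y, x),
    zero-padding where the window extends past the image border."""
    half = size // 2
    padded = np.pad(image, ((half, half), (half, half), (0, 0)), mode="constant")
    y, x = int(center[0]) + half, int(center[1]) + half
    return padded[y - half:y + half, x - half:x + half]
```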
- In step S103, the image information of the current local observation area is processed to obtain a feature vector of the scene picture to be classified.
- After the image information of the current local observation area is acquired, it is processed in two stages: the image information is first encoded to obtain a local feature vector, and the obtained local feature vector is then fused with the previously obtained feature vector to obtain the feature vector of the scene picture to be classified. This improves the comprehensiveness of the feature vector and thus the accuracy of scene picture classification; a sketch of such an encode-and-fuse module follows.
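- The patent does not prescribe a particular encoder or fusion operation. As a sketch under those caveats, the module below assumes a small convolutional encoder for the encoding step and a GRU cell for fusing the local feature vector with the previously accumulated one:

```python
import torch
import torch.nn as nn

class GlimpseEncoder(nn.Module):
    """Encode one local observation area and fuse it with the feature
    vector accumulated from the previously observed areas."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.encode = nn.Sequential(                 # image info -> local feature vector
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.fuse = nn.GRUCell(feat_dim, feat_dim)   # fusion with the previous vector

    def forward(self, glimpse: torch.Tensor, prev_feat: torch.Tensor) -> torch.Tensor:
        local_feat = self.encode(glimpse)            # encoding step
        return self.fuse(local_feat, prev_feat)      # fused feature vector of the picture
```

A recurrent cell is a natural choice here because it makes the fused vector a running summary of every observation area seen so far, which is exactly the role the previously obtained feature vector plays in the text.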
- In step S104, the classification prediction result of the scene picture to be classified is acquired according to the feature vector.
- In step S105, it is determined whether the classification prediction result satisfies a preset scene picture classification condition.
- The classification prediction result includes classification results and their corresponding prediction probabilities.
- A plurality of classification results for the scene picture, each with a corresponding prediction probability, may be predicted from the feature vector.
- The sum of the prediction probabilities of the plurality of classification results is 100%, and it is judged whether any of the classification results has a prediction probability exceeding a preset threshold, that is, whether the classification prediction result satisfies the preset scene picture classification condition.
- For example, the threshold of the prediction probability may be set to 65%, in which case it is determined whether any of the classification results has a prediction probability greater than 65%; a sketch of such a check follows.
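- One way this check could look in code (a sketch: the softmax normalization and the helper name are assumptions, and the 0.65 default mirrors the 65% example above):

```python
import torch
import torch.nn.functional as F

def check_classification_condition(logits: torch.Tensor, threshold: float = 0.65):
    """Normalize class scores into probabilities summing to 1 (i.e. 100%) and
    test whether the most probable class exceeds the preset threshold."""
    probs = F.softmax(logits, dim=-1)
    top_prob, top_class = probs.max(dim=-1)
    return top_prob.item() > threshold, top_class.item(), top_prob.item()
```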
- In step S106, when the classification prediction result does not satisfy the scene picture classification condition, the next local observation area is obtained from the scene picture according to the observation area positioning model, the next local observation area is set as the current local observation area, and the process jumps back to the step of processing the image information of the current local observation area to obtain the feature vector of the scene picture to be classified.
- That the classification prediction result does not satisfy the preset condition means that the scene picture cannot yet be classified reliably.
- The next local observation area is therefore acquired according to the observation area positioning model and set as the current local observation area, and the image information processing and classification prediction are repeated until the classification prediction result satisfies the scene picture classification condition.
- In step S107, when the classification prediction result satisfies the scene picture classification condition, the classification label of the scene picture to be classified is obtained according to the classification prediction result.
- When it is determined that the classification prediction result satisfies the preset condition, the scene picture can be classified: the classification result whose prediction probability exceeds the threshold is taken from the classification prediction result and set as the classification label of the scene picture to be classified, which improves the accuracy of scene picture classification.
- In this embodiment, the input scene picture to be classified is received and the current local observation area is obtained from it according to the preset observation area positioning model, which reduces the complexity of recognizing and classifying the scene picture and improves the speed and accuracy of classification; the sketch below ties the steps together.
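- Pulling steps S101-S107 together, the following sketch of the overall inference loop reuses the helpers above; `locator`, `classifier`, the 256-dimensional state, and the glimpse cap are assumptions rather than elements of the claims:

```python
import torch

def classify_scene(image, locator, encoder, classifier, max_glimpses: int = 8):
    """Observe one local area at a time until the classification condition is
    satisfied, then return the classification label and its probability."""
    feat = torch.zeros(1, 256)                        # nothing observed yet
    center = locator.initial_location(image)          # first observation area (S102)
    label = prob = None
    for _ in range(max_glimpses):                     # safety cap (an assumption)
        patch = extract_glimpse(image, center)        # current observation area
        patch = torch.from_numpy(patch).float().permute(2, 0, 1).unsqueeze(0)
        feat = encoder(patch, feat)                   # encode and fuse (step S103)
        done, label, prob = check_classification_condition(classifier(feat))
        if done:                                      # condition satisfied (step S107)
            break
        center = locator.next_location(feat)          # next observation area (step S106)
    return label, prob
```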
- Embodiment 2:
- In step S201, the input scene picture to be trained is received, and the current training local observation area is obtained from the scene picture to be trained according to a preset Gaussian distribution.
- In step S202, a classification operation is performed on the scene picture to be trained according to the current training local observation area, and the reward value of the classification operation is calculated.
- The scene picture to be trained is classified according to the feature vector, and a classification label of the scene picture to be trained is obtained.
- When the image information of the current training local observation area is processed, it is first encoded to obtain a local feature vector, and the obtained local feature vector is then fused with the previously obtained feature vector to obtain the feature vector of the scene picture to be trained, improving the comprehensiveness of the feature vector and the accuracy of classification.
- The dimensions of the feature vector can be adjusted during training to optimize the training results.
- The standard classification label of the scene picture to be trained is obtained, the obtained classification label is compared with the standard label to determine whether it is correct, and the reward value of the classification is calculated according to the result of this determination.
- The feedback values in the formula used to calculate the reward may be adjusted during training to speed up model convergence and thus improve the trained model; one assumed form of the reward is sketched below.
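- A sketch of one assumed form of the reward: compare the predicted label with the standard label and return a tunable feedback value. The 1.0/0.0 defaults are illustrative, not specified by the patent:

```python
def classification_reward(predicted_label: int, standard_label: int,
                          correct_feedback: float = 1.0,
                          wrong_feedback: float = 0.0) -> float:
    """Reward value of one classification operation, based on whether the
    predicted classification label matches the standard (ground-truth) label."""
    return correct_feedback if predicted_label == standard_label else wrong_feedback
```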
- In step S204, when the preset training end condition is not reached, the next training local observation area is obtained from the scene picture to be trained according to the Gaussian distribution, the next training local observation area is set as the current training local observation area, and the process jumps back to the step of classifying the scene picture according to the current training local observation area and calculating the reward value of the classification operation.
- The next training local observation area may be sampled from a Gaussian distribution of a given variance.
- The training local observation areas obtained by sampling are identified one after another, the scene picture to be trained is classified according to the accumulated information to obtain a classification label, and each classification yields a corresponding reward value; a sampling sketch follows.
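- A sketch of the sampling step, assuming observation-area centers normalized to [-1, 1] and a Gaussian centered on a location proposed by the model (both assumptions; the text only specifies a Gaussian of given variance):

```python
import numpy as np

def sample_next_location(mean_location: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    """Sample the center of the next training local observation area from a
    Gaussian of given variance, then clip it to the normalized image extent."""
    sample = np.random.normal(loc=mean_location, scale=sigma)
    return np.clip(sample, -1.0, 1.0)
```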
- In step S205, when the preset training end condition is reached, the algebraic sum of the reward values of each picture to be trained, over all pictures to be trained, is acquired to obtain the total reward value of each picture, and an observation area positioning model that maximizes the total reward value is established according to the total reward values.
- Each picture to be trained thus has a corresponding total reward value. The observation area positioning model established to maximize this total reward value is later used, when classifying a scene picture, to determine the optimal next local observation area, improving the speed and accuracy of scene recognition and classification; a policy-gradient sketch of the maximization follows.
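- Establishing a positioning model that maximizes a total reward over sampled observation paths is the classic policy-gradient (REINFORCE) setting. The sketch below shows one assumed realization in which the log-probability of each sampled observation location is weighted by the total (algebraic-sum) reward of its picture; the absence of a variance-reducing baseline and the per-picture formulation are simplifications, not claim elements:

```python
import torch

def reinforce_loss(location_log_probs: torch.Tensor,
                   rewards: torch.Tensor) -> torch.Tensor:
    """location_log_probs: (T,) log-probabilities of the T sampled observation
    areas of one training picture; rewards: (T,) per-step reward values.
    Minimizing this loss performs gradient ascent on the expected total reward,
    pushing the positioning model toward observation areas that earn reward."""
    total_reward = rewards.sum()                      # total reward of the picture
    return -(location_log_probs * total_reward).sum()
```

In practice this loss would be averaged over a batch of training pictures, matching the per-picture total reward values described above.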
- Embodiment 3:
- FIG. 3 is a diagram showing the structure of an indoor scene classification apparatus according to Embodiment 3 of the present invention. For the convenience of description, only parts related to the embodiment of the present invention are shown.
- The classification device of the indoor scene includes a picture receiving unit 31, an area obtaining unit 32, a vector obtaining unit 33, a condition determining unit 34, a repeating execution unit 35, and a scene classification unit 36, wherein:
- the picture receiving unit 31 is configured to receive the input picture of the scene to be classified.
- the area obtaining unit 32 is configured to obtain a current local observation area from the picture to be classified according to the preset observation area positioning model.
- After the image information of the current local observation area is acquired, the image information is first encoded to obtain a local feature vector, and the obtained local feature vector is then fused with the previously obtained feature vector to obtain the feature vector of the scene picture to be classified, improving the comprehensiveness of the feature vector and the accuracy of scene picture classification.
- the vector obtaining unit 33 includes:
- The encoding operation unit 331 is configured to encode the image information of the current local observation area to obtain a local feature vector.
- The fusion operation unit 332 is configured to perform a fusion operation on the local feature vector and the pre-stored feature vector to obtain the feature vector of the scene picture.
- the scene classification unit 36 is configured to obtain a classification label of the scene image to be classified according to the classification prediction result when the classification prediction result satisfies the scene picture classification condition.
- the training area obtaining unit 401 is configured to receive the input scene image to be trained, and obtain the current training local observation area from the to-be-trained scene picture according to the preset Gaussian distribution.
- the feedback value in the calculation formula of the reward value may be appropriately changed during the training process to optimize the speed of the model convergence, thereby optimizing the training model.
- The area training unit 402 is configured to perform a classification operation on the scene picture to be trained according to the current training local observation area and to calculate the reward value of the classification operation.
- The loop training unit 403 is configured to, when the preset training end condition is not reached, obtain the next training local observation area from the scene picture to be trained according to the Gaussian distribution, set the next training local observation area as the current training local observation area, and trigger the area training unit 402 to perform a classification operation on the scene picture to be trained according to the current training local observation area and calculate the reward value of the classification operation.
- the area obtaining unit 406 is configured to obtain a current local observation area from the picture to be classified according to the preset observation area positioning model.
- the vector obtaining unit 407 is configured to process image information of the current local observation area to obtain a feature vector of the picture to be classified.
- the condition determining unit 408 is configured to obtain a classification prediction result of the picture to be classified according to the feature vector, and determine whether the classification prediction result satisfies a preset scene picture classification condition.
- A plurality of classification results for the scene picture, each with a corresponding prediction probability, may be predicted from the feature vector; the prediction probabilities of the plurality of classification results sum to 100%.
- The condition determining unit judges whether any of the classification results has a prediction probability exceeding the preset threshold, that is, whether the classification prediction result satisfies the preset scene picture classification condition.
- The repeating execution unit 409 is configured to, when the classification prediction result does not satisfy the scene picture classification condition, obtain the next local observation area from the scene picture to be classified according to the observation area positioning model, set the next local observation area as the current local observation area, and trigger the vector obtaining unit 407 to process the image information of the current local observation area.
- the scene classification unit 410 is configured to obtain, when the classification prediction result satisfies the scene picture classification condition, the classification label of the scene picture to be classified according to the classification prediction result.
- Each unit of the classification device of the indoor scene may be implemented by corresponding hardware or software; the units may be independent hardware or software units, or may be integrated into a single hardware or software unit, and the invention is not limited in this respect.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Image Analysis (AREA)
Claims (10)
- A method for classifying an indoor scene, characterized in that the method comprises the following steps: receiving an input scene picture to be classified; acquiring a current local observation area from the scene picture to be classified according to a preset observation area positioning model; processing image information of the current local observation area to obtain a feature vector of the scene picture to be classified; acquiring a classification prediction result of the scene picture to be classified according to the feature vector, and determining whether the classification prediction result satisfies a preset scene picture classification condition; when the classification prediction result does not satisfy the scene picture classification condition, acquiring a next local observation area from the scene picture to be classified according to the observation area positioning model, setting the next local observation area as the current local observation area, and jumping back to the step of processing the image information of the current local observation area to obtain the feature vector of the scene picture to be classified; and when the classification prediction result satisfies the scene picture classification condition, acquiring a classification label of the scene picture to be classified according to the classification prediction result.
- The method according to claim 1, characterized in that, before the step of receiving the input scene picture to be classified, the method further comprises: receiving an input scene picture to be trained, and acquiring a current training local observation area from the scene picture to be trained according to a preset Gaussian distribution; performing a classification operation on the scene picture to be trained according to the current training local observation area, and calculating a reward value of the classification operation; when a preset training end condition is not reached, acquiring a next training local observation area from the scene picture to be trained according to the Gaussian distribution, setting the next training local observation area as the current training local observation area, and jumping back to the step of performing a classification operation on the scene picture to be trained according to the current training local observation area and calculating the reward value of the classification operation; and when the preset training end condition is reached, acquiring the algebraic sum of the reward values of each scene picture to be trained among all scene pictures to be trained, to obtain the total reward value of each scene picture to be trained, and establishing, according to the total reward values, an observation area positioning model that maximizes the total reward value.
- The method according to claim 2, characterized in that the step of performing a classification operation on the scene picture to be trained according to the current training local observation area and calculating the reward value of the classification operation comprises: processing image information of the current training local observation area to obtain a current feature vector of the scene picture to be trained, and classifying the scene picture to be trained according to the current feature vector to obtain a classification label of the scene picture to be trained; and acquiring a standard classification label of the scene picture to be trained, comparing the obtained classification label with the standard classification label, determining whether the obtained classification label is correct, and calculating the reward value of the classification according to the determination result.
- The method according to claim 1, characterized in that the step of processing the image information of the current local observation area to obtain the feature vector of the scene picture to be classified comprises: encoding the image information of the current local observation area to obtain a local feature vector; and performing a fusion operation on the local feature vector and a pre-stored feature vector to obtain the feature vector of the scene picture.
- A classification device for an indoor scene, characterized in that the device comprises: a picture receiving unit configured to receive an input scene picture to be classified; an area obtaining unit configured to acquire a current local observation area from the scene picture to be classified according to a preset observation area positioning model; a vector acquiring unit configured to process image information of the current local observation area to obtain a feature vector of the scene picture to be classified; a condition determining unit configured to acquire a classification prediction result of the scene picture to be classified according to the feature vector and to determine whether the classification prediction result satisfies a preset scene picture classification condition; a repeating execution unit configured to, when the classification prediction result does not satisfy the scene picture classification condition, acquire a next local observation area from the scene picture to be classified according to the observation area positioning model, set the next local observation area as the current local observation area, and trigger the vector acquiring unit to process the image information of the current local observation area; and a scene classification unit configured to, when the classification prediction result satisfies the scene picture classification condition, acquire a classification label of the scene picture to be classified according to the classification prediction result.
- The device according to claim 6, characterized in that the device further comprises: a training area acquiring unit configured to receive an input scene picture to be trained and to acquire a current training local observation area from the scene picture to be trained according to a preset Gaussian distribution; an area training unit configured to perform a classification operation on the scene picture to be trained according to the current training local observation area and to calculate a reward value of the classification operation; a loop training unit configured to, when a preset training end condition is not reached, acquire a next training local observation area from the scene picture to be trained according to the Gaussian distribution, set the next training local observation area as the current training local observation area, and trigger the area training unit to perform a classification operation on the scene picture to be trained according to the current training local observation area and calculate the reward value of the classification operation; and a positioning model establishing unit configured to, when the preset training end condition is reached, acquire the algebraic sum of the reward values of each scene picture to be trained among all scene pictures to be trained, to obtain the total reward value of each scene picture to be trained, and to establish, according to the total reward values, an observation area positioning model that maximizes the total reward value.
- The device according to claim 7, characterized in that the area training unit comprises: a training classification unit configured to process image information of the current training local observation area to obtain a current feature vector of the scene picture to be trained, and to classify the scene picture to be trained according to the current feature vector to obtain a classification label of the scene picture to be trained; and a reward value calculating unit configured to acquire a standard classification label of the scene picture to be trained, compare the obtained classification label with the standard classification label, determine whether the obtained classification label is correct, and calculate the reward value of the classification according to the determination result.
- The device according to claim 6, characterized in that the vector acquiring unit comprises: an encoding operation unit configured to encode the image information of the current local observation area to obtain a local feature vector; and a fusion operation unit configured to perform a fusion operation on the local feature vector and a pre-stored feature vector to obtain the feature vector of the scene picture.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2017/078291 WO2018176195A1 (zh) | 2017-03-27 | 2017-03-27 | Classification method and classification device of indoor scene |
US16/495,401 US11042777B2 (en) | 2017-03-27 | 2017-03-27 | Classification method and classification device of indoor scene |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2017/078291 WO2018176195A1 (zh) | 2017-03-27 | 2017-03-27 | Classification method and classification device of indoor scene |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018176195A1 (zh) | 2018-10-04 |
Family
ID=63673984
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/078291 WO2018176195A1 (zh) | 2017-03-27 | 2017-03-27 | 一种室内场景的分类方法及装置 |
Country Status (2)
Country | Link |
---|---|
US (1) | US11042777B2 (zh) |
WO (1) | WO2018176195A1 (zh) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108898082A (zh) * | 2018-06-19 | 2018-11-27 | Oppo广东移动通信有限公司 | Picture processing method, picture processing device and terminal device |
CN111753386A (zh) * | 2019-03-11 | 2020-10-09 | 北京嘀嘀无限科技发展有限公司 | Data processing method and device |
CN112261719A (zh) * | 2020-07-24 | 2021-01-22 | 大连理智科技有限公司 | Region positioning method combining SLAM technology with deep learning |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108875821A (zh) * | 2018-06-08 | 2018-11-23 | Oppo广东移动通信有限公司 | Training method and device for classification model, mobile terminal, and readable storage medium |
EP3660750B1 (en) * | 2018-11-30 | 2022-01-05 | Secondmind Limited | Method and system for classification of data |
CN111625674A (zh) * | 2020-06-01 | 2020-09-04 | 联想(北京)有限公司 | Picture processing method and device |
CN114612706B (zh) * | 2020-12-04 | 2024-10-15 | 腾讯科技(深圳)有限公司 | Image processing method and apparatus, electronic device and readable storage medium |
CN113743459B (zh) * | 2021-07-29 | 2024-04-02 | 深圳云天励飞技术股份有限公司 | Target detection method and apparatus, electronic device and storage medium |
CN113643136B (zh) * | 2021-09-01 | 2024-06-18 | 京东科技信息技术有限公司 | Information processing method, system and apparatus |
CN113850023B (zh) * | 2021-09-30 | 2022-11-29 | 深圳市瑞云科技有限公司 | Model establishment method and apparatus for estimating hardware configuration based on scene file parameters |
CN113762426A (zh) * | 2021-11-09 | 2021-12-07 | 紫东信息科技(苏州)有限公司 | Method and system for detecting lost frames in a gastroscope field of view |
CN114781548B (zh) * | 2022-05-18 | 2024-07-12 | 平安科技(深圳)有限公司 | Image scene classification method, apparatus, device and storage medium |
CN115035146A (zh) * | 2022-05-18 | 2022-09-09 | 咪咕文化科技有限公司 | Picture shooting method, device, storage medium and apparatus |
CN116681997B (zh) * | 2023-06-13 | 2024-05-17 | 北京数美时代科技有限公司 | Classification method, system, medium and device for undesirable scene images |
CN118447443A (zh) * | 2024-04-30 | 2024-08-06 | 浙江能珈信息技术有限公司 | Big data intelligent analysis and processing system and method based on deep learning |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101389004A (zh) * | 2007-09-13 | 2009-03-18 | 中国科学院自动化研究所 | Moving object classification method based on online learning |
CN102722520A (zh) * | 2012-03-30 | 2012-10-10 | 浙江大学 | Picture importance classification method based on support vector machine |
CN104268546A (zh) * | 2014-05-28 | 2015-01-07 | 苏州大学 | Dynamic scene classification method based on topic model |
US20160086015A1 (en) * | 2007-01-09 | 2016-03-24 | Si Corporation | Method and system for automated face detection and recognition |
CN105809146A (zh) * | 2016-03-28 | 2016-07-27 | 北京奇艺世纪科技有限公司 | Image scene recognition method and device |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012012943A1 (en) * | 2010-07-28 | 2012-02-02 | Shenzhen Institute Of Advanced Technology Chinese Academy Of Sciences | Method for reconstruction of urban scenes |
KR102096398B1 (ko) * | 2013-07-03 | 2020-04-03 | 삼성전자주식회사 | Method for recognizing position of autonomous mobile robot |
US10452924B2 (en) * | 2018-01-10 | 2019-10-22 | Trax Technology Solutions Pte Ltd. | Withholding alerts due to temporary shelf occlusion |
-
2017
- 2017-03-27 WO PCT/CN2017/078291 patent/WO2018176195A1/zh active Application Filing
- 2017-03-27 US US16/495,401 patent/US11042777B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160086015A1 (en) * | 2007-01-09 | 2016-03-24 | Si Corporation | Method and system for automated face detection and recognition |
CN101389004A (zh) * | 2007-09-13 | 2009-03-18 | 中国科学院自动化研究所 | Moving object classification method based on online learning |
CN102722520A (zh) * | 2012-03-30 | 2012-10-10 | 浙江大学 | Picture importance classification method based on support vector machine |
CN104268546A (zh) * | 2014-05-28 | 2015-01-07 | 苏州大学 | Dynamic scene classification method based on topic model |
CN105809146A (zh) * | 2016-03-28 | 2016-07-27 | 北京奇艺世纪科技有限公司 | Image scene recognition method and device |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108898082A (zh) * | 2018-06-19 | 2018-11-27 | Oppo广东移动通信有限公司 | Picture processing method, picture processing device and terminal device |
CN108898082B (zh) * | 2018-06-19 | 2020-07-03 | Oppo广东移动通信有限公司 | Picture processing method, picture processing device and terminal device |
CN111753386A (zh) * | 2019-03-11 | 2020-10-09 | 北京嘀嘀无限科技发展有限公司 | Data processing method and device |
CN111753386B (zh) * | 2019-03-11 | 2024-03-26 | 北京嘀嘀无限科技发展有限公司 | Data processing method and device |
CN112261719A (zh) * | 2020-07-24 | 2021-01-22 | 大连理智科技有限公司 | Region positioning method combining SLAM technology with deep learning |
CN112261719B (zh) * | 2020-07-24 | 2022-02-11 | 大连理智科技有限公司 | Region positioning method combining SLAM technology with deep learning |
Also Published As
Publication number | Publication date |
---|---|
US20200019816A1 (en) | 2020-01-16 |
US11042777B2 (en) | 2021-06-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018176195A1 (zh) | Classification method and classification device of indoor scene | |
CN107169503B (zh) | Classification method and classification device of indoor scene | |
KR102077260B1 (ko) | Method and apparatus for recognizing a face using reliability based on a probabilistic model |
EP3792818A1 (en) | Video processing method and device, and storage medium | |
US9053358B2 (en) | Learning device for generating a classifier for detection of a target | |
WO2020253127A1 (zh) | Facial feature extraction model training method, facial feature extraction method, apparatus, device, and storage medium |
CN103093212B (zh) | Method and device for capturing face images based on face detection and tracking |
CN104036236B (zh) | Face gender recognition method based on multi-parameter exponential weighting |
CN108090406B (zh) | Face recognition method and system |
US20120093362A1 (en) | Device and method for detecting specific object in sequence of images and video camera device | |
CN109190446A (zh) | Pedestrian re-identification method based on triplet focal loss function |
CN108230291B (zh) | Object recognition system training method, object recognition method, apparatus and electronic device |
US11126827B2 (en) | Method and system for image identification | |
CN111401374A (zh) | Multi-task-based model training method, and character recognition method and device |
CN112016353B (zh) | Method and device for identity recognition based on face images from video |
WO2021031954A1 (zh) | Object quantity determination method and apparatus, storage medium and electronic device |
WO2022041830A1 (zh) | Pedestrian re-identification method and device |
CN111626371A (zh) | Image classification method, apparatus, device and readable storage medium |
CN115715385A (zh) | System and method for predicting formations in sports |
CN113327212B (zh) | Face driving and model training method, apparatus, electronic device and storage medium |
CN110717407A (zh) | Face recognition method and apparatus based on lip-reading password, and storage medium |
US20190114470A1 (en) | Method and System for Face Recognition Based on Online Learning | |
CN114332071A (zh) | Video anomaly detection method based on foreground information enhancement |
CN111241930A (zh) | Method and system for face recognition |
CN118172546B (zh) | Model generation method, detection method, apparatus, electronic device, medium and product |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17903809; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 17903809; Country of ref document: EP; Kind code of ref document: A1
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03/02/2020)
| 122 | Ep: pct application non-entry in european phase | Ref document number: 17903809; Country of ref document: EP; Kind code of ref document: A1