
CN111435444A - Method and device for identifying grains - Google Patents

Method and device for identifying grains

Info

Publication number
CN111435444A
Authority
CN
China
Prior art keywords
sub
picture
grain
pictures
grains
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910033093.0A
Other languages
Chinese (zh)
Inventor
黄智刚
陈翀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gree Electric Appliances Inc of Zhuhai
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai filed Critical Gree Electric Appliances Inc of Zhuhai
Priority to CN201910033093.0A priority Critical patent/CN111435444A/en
Publication of CN111435444A publication Critical patent/CN111435444A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a device for identifying grains. The method comprises: acquiring a picture containing grains; segmenting the picture to obtain a plurality of sub-pictures; and processing the plurality of sub-pictures using a convolutional neural network model to identify the grains in each sub-picture. Based on artificial-intelligence techniques, the grain-identification method removes background noise by means of image segmentation, achieves the purpose of grain identification, and thereby solves the technical problems of slow speed, high cost and low accuracy of grain identification in images in the prior art.

Description

Method and device for identifying grains
Technical Field
The invention relates to the field of image recognition, in particular to a method and a device for recognizing grains.
Background
China is the world's largest rice producer; however, rice quality has become the bottleneck restricting the production, sale and export of Chinese rice. The broken-rice rate is an important index affecting the quality and selling price of rice, and also an important index reflecting planting and processing levels.
Detecting broken rice grains under the national standard requires accurately measuring the length and width of the rice grains, and the instrument specified in the existing national standard "Inspection method for broken rice of grains and oil plants" is expensive and slow, making it unsuitable for inspecting and controlling the broken-rice content during rice production. Automatic recognition of grains in images by computer is an important field of pattern-recognition application; neural networks, proposed by analogy with the neural structure of the human brain, have made great progress in the field of image recognition.
Aiming at the problems of slow speed, high cost and low accuracy of grain identification in images in the prior art, no effective solution has yet been proposed.
Disclosure of Invention
The embodiment of the invention provides a method and a device for identifying grains, which at least solve the technical problems of low speed, high cost and low accuracy of grain identification in an image in the prior art.
According to an aspect of an embodiment of the present invention, there is provided a method of identifying grains, including: acquiring a picture containing grains; dividing the picture to obtain a plurality of sub-pictures; the plurality of sub-pictures are processed using a convolutional neural network model, and grains in each sub-picture are identified.
Optionally, processing the plurality of sub-pictures using the convolutional neural network model and identifying the grains in each sub-picture includes: acquiring the plurality of sub-pictures; extracting image features of the object contained in each sub-picture; and judging whether the image features of the object contained in each sub-picture conform to the grain features, and if so, determining that the object is a grain.
Optionally, the convolutional neural network model comprises at least an input layer, a convolutional layer, a pooling layer, a fully connected layer and a softmax layer, wherein the plurality of sub-pictures are acquired through the input layer.
Optionally, extracting the image features of the object contained in each sub-picture includes: scanning the corresponding sub-picture with each convolution kernel in the convolutional layer to obtain a feature layer of the object contained in each sub-picture; performing redundancy-removal processing on the feature layer of the object contained in each sub-picture through the pooling layer; and transforming the redundancy-reduced feature layers through at least one fully connected layer to obtain the image features of the object.
Optionally, whether the image features of the object contained in each sub-picture conform to the grain features is judged through the softmax layer, and the judgment result is output.
Optionally, after identifying the grains in each sub-picture, the method further comprises: performing contour extraction on the sub-pictures in which grains are identified, and setting the areas in which no grains are identified as a predetermined background to obtain a plurality of processed sub-pictures; and splicing the plurality of processed sub-pictures to obtain a restored grain picture.
Optionally, before performing contour extraction on the sub-pictures in which grains are identified and setting the areas in which no grains are identified as a predetermined background to obtain a plurality of processed sub-pictures, the method further includes: judging whether grains exist in the plurality of sub-pictures; if grains exist, identifying the sub-pictures in which grains exist and determining the positions of the grains in those sub-pictures; if not, setting the sub-pictures in which no grains exist as the predetermined background.
Optionally, after the plurality of processed sub-pictures are spliced to obtain the restored grain picture, the method further includes: outputting the restored grain picture and extracting the grain contours in the restored grain picture; and determining quality parameters of the grains based on the grain contours in the restored grain picture.
There is also provided, in accordance with an aspect of an embodiment of the present invention, an apparatus for identifying grains, including: an acquisition module for acquiring a picture containing grains; a segmentation module for segmenting the picture to obtain a plurality of sub-pictures; and an identification module for processing the plurality of sub-pictures using the convolutional neural network model and identifying the grains in each sub-picture.
Optionally, the identification module comprises: an acquisition sub-module for acquiring the plurality of sub-pictures; an extraction module for extracting the image features of the object contained in each sub-picture; and a judging module for judging whether the image features of the object contained in each sub-picture conform to the grain features and, if so, determining that the object is a grain.
Optionally, the convolutional neural network model comprises at least an input layer, a convolutional layer, a pooling layer, a fully connected layer and a softmax layer, wherein the plurality of sub-pictures are acquired through the input layer.
Optionally, the extraction module comprises: a scanning module for scanning the corresponding sub-picture with each convolution kernel in the convolutional layer to obtain a feature layer of the object contained in each sub-picture; a pooling module for performing redundancy-removal processing on the feature layer of the object contained in each sub-picture through the pooling layer; and a conversion module for transforming the redundancy-reduced feature layers through at least one fully connected layer to obtain the image features of the object.
Optionally, the judging module includes a judging submodule, configured to judge, through the softmax layer, whether the image feature of the object included in each sub-picture matches the grain feature, and output a judgment result.
Optionally, the apparatus further comprises: an extraction sub-module for performing, after the grains in each sub-picture are identified, contour extraction on the sub-pictures in which grains are identified, and setting the areas in which no grains are identified as a predetermined background to obtain a plurality of processed sub-pictures; and a splicing module for splicing the plurality of processed sub-pictures to obtain a restored grain picture.
Optionally, the apparatus further comprises a grain judging module for judging, before the contour extraction and background setting are performed to obtain the plurality of processed sub-pictures, whether grains exist in the plurality of sub-pictures; if grains exist, identifying the sub-pictures in which grains exist and determining the positions of the grains in those sub-pictures; if not, setting the sub-pictures in which no grains exist as the predetermined background.
Optionally, the apparatus further comprises: an output module for outputting, after the plurality of processed sub-pictures are spliced into the restored grain picture, the restored grain picture and extracting the grain contours in it; and a determining module for determining the quality parameters of the grains based on the grain contours in the restored grain picture.
According to an aspect of the embodiments of the present invention, there is also provided a storage medium including a stored program, wherein when the program is executed, the apparatus on which the storage medium is located is controlled to perform any one of the above methods for identifying grains.
According to an aspect of the embodiments of the present invention, there is also provided a mobile device including a processor for executing a program, wherein any one of the above methods for identifying grains is performed when the program runs.
In the embodiment of the invention, a picture containing grains is first acquired, the picture is then segmented to obtain a plurality of sub-pictures, and the plurality of sub-pictures are processed using a convolutional neural network model to identify the grains in each sub-picture. This technical scheme, based on artificial-intelligence techniques, removes background noise by means of image segmentation, achieves the purpose of grain identification, and thereby solves the technical problems of slow speed, high cost and low accuracy of grain identification in images in the prior art.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of an alternative method of identifying grain according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an alternative convolutional neural network architecture in accordance with an embodiment of the present application;
FIG. 3 is a flow chart of an alternative convolutional neural network for identifying grain images in accordance with embodiments of the present application;
FIG. 4 is a flow diagram of an alternative convolutional neural network for identifying grain images in accordance with an embodiment of the present application; and
fig. 5 is a schematic view of an alternative apparatus for identifying grains according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, system, article, or device that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, apparatus, article, or device.
Example 1
The steps illustrated in the flowcharts of the figures may be performed in a computer system, for example as a set of computer-executable instructions; and although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in a different order than presented herein, in accordance with embodiments of the present invention.
Fig. 1 is a flowchart of a method for identifying grains according to an embodiment of the present invention. As shown in fig. 1, the method includes the following steps:
step S102, obtaining pictures containing grains.
In an alternative, the grains may be particulate matter such as rice, glutinous rice, wheat or soybeans. The picture may be acquired by an image acquisition device such as a camera, a video camera or an image sensor, and may show the grains laid flat without overlapping.
Step S104, segmenting the picture to obtain a plurality of sub-pictures.
In one alternative, the original picture may be segmented into 200 × 200-pixel sub-pictures.
In the above step, the original picture is divided into a plurality of sub-pictures. The greater the number of sub-pictures, the higher the accuracy of grain identification.
It should be noted that a picture containing grains consists of grain regions and non-grain regions; segmenting the original picture separates grain regions from non-grain regions as far as possible, which facilitates subsequent identification.
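The segmentation step can be sketched in a few lines of Python. This is an illustrative sketch, not the patent's implementation; the picture is represented as a plain nested list of pixel rows, and the tiles are keyed by their (row, column) position so they can later be spliced back in the segmentation order. The 200 × 200 tile size from the example above would be passed as `tile=200`.

```python
def split_into_tiles(image, tile=200):
    """Split a picture (a list of pixel rows) into tile x tile sub-pictures.

    Returns a dict mapping the (row_index, col_index) of each tile to its
    pixel block, preserving the segmentation order for later splicing.
    """
    h, w = len(image), len(image[0])
    tiles = {}
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            block = [row[tx:tx + tile] for row in image[ty:ty + tile]]
            tiles[(ty // tile, tx // tile)] = block
    return tiles

# A toy 4x4 "image" split into four 2x2 tiles:
img = [[r * 4 + c for c in range(4)] for r in range(4)]
tiles = split_into_tiles(img, tile=2)
```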
Step S106, processing the plurality of sub-pictures using the convolutional neural network model, and identifying the grains in each sub-picture.
A convolutional neural network is a supervised learning model and a special case of the deep neural network; compared with a fully connected deep neural network, it has advantages such as fewer weights and faster training.
A convolutional neural network mainly consists of three parts: an input layer, hidden layers and an output layer. There is one input layer and one output layer, while there may be many hidden layers; a deep neural network is a neural network with multiple hidden layers. Here the input to the network is the segmented sub-pictures; the network computes on the grains in each sub-picture and outputs the identified grain regions.
Based on the scheme provided in this embodiment of the application, a picture containing grains is acquired, the picture is segmented to obtain a plurality of sub-pictures, and the plurality of sub-pictures are processed using the convolutional neural network model to identify the grains in each sub-picture. This scheme, based on artificial-intelligence techniques, achieves accurate grain identification by means of image segmentation, thereby solving the technical problems of slow speed, high cost and low accuracy of grain identification in images in the prior art.
Optionally, processing the plurality of sub-pictures using the convolutional neural network model and identifying the grains in each sub-picture includes: acquiring the plurality of sub-pictures; extracting image features of the object contained in each sub-picture; and judging whether the image features of the object contained in each sub-picture conform to the grain features, and if so, determining that the object is a grain.
In one alternative, the image features may be the color, texture, shape, etc. of the grain.
For example, if a rice region in a picture needs to be identified, it is necessary to perform feature extraction on a plurality of sub-pictures input to the convolutional neural network model, determine the color, texture, shape, and other features of an object included in each sub-picture, and when a determination result that the color is white, the texture is sparse, and the shape is elliptical is obtained, determine that the region is a rice region so as to be distinguished from a non-rice region. For another example, if it is necessary to identify a soybean region in a picture, when a determination result that the color is yellow, the texture is sparse, and the shape is spherical is obtained, the soybean region is determined to be distinguished from a non-soybean region.
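As a toy illustration of the rice and soybean examples above, the final judgment can be viewed as matching the extracted features against reference grain features. The feature names and values below are taken from the examples in the text, but the dictionary-matching form itself is purely illustrative: in the actual scheme these judgments are learned by the convolutional neural network, not hand-coded.

```python
# Reference features per grain type, following the examples in the text.
GRAIN_FEATURES = {
    "rice":    {"color": "white",  "texture": "sparse", "shape": "elliptical"},
    "soybean": {"color": "yellow", "texture": "sparse", "shape": "spherical"},
}

def classify(features):
    """Return the grain type whose reference features all match, else None."""
    for grain, ref in GRAIN_FEATURES.items():
        if all(features.get(k) == v for k, v in ref.items()):
            return grain
    return None  # a non-grain region
```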
Optionally, the convolutional neural network model comprises at least an input layer, a convolutional layer, a pooling layer, a fully connected layer and a softmax layer, wherein the plurality of sub-pictures are acquired through the input layer.
The convolutional neural network acquires the plurality of sub-pictures through the input layer; the hidden layers may include convolutional layers, pooling layers and fully connected layers; the output layer is a softmax layer using a softmax classification function.
Optionally, extracting the image features of the object contained in each sub-picture includes: scanning the corresponding sub-picture with each convolution kernel in the convolutional layer to obtain a feature layer of the object contained in each sub-picture; performing redundancy-removal processing on the feature layer of the object contained in each sub-picture through the pooling layer; and transforming the redundancy-reduced feature layers through at least one fully connected layer to obtain the image features of the object.
Convolutional and pooling layers can be combined in various ways, and multiple fully connected layers may be used; the specific number of layers and the network depth can be chosen according to actual needs. More layers generally give a more accurate recognition result, at the cost of a more complex network.
Fig. 2 is a schematic diagram of a convolutional neural network structure according to an embodiment of the present application. As shown in fig. 2, the input layer 1 is connected to the convolutional layer 21. The convolutional layer 21 mainly consists of convolution kernels; each kernel is equivalent to a slightly smaller fully connected layer, and the small squares in the sub-picture match the kernel in size. A kernel scans the sub-picture from left to right and top to bottom starting from the top-left corner; each time it covers a unit area (the kernel area), a matrix computation between the pixels of the sub-picture and the kernel produces a mapped region, i.e. a feature map. Multiple feature maps form a convolutional feature layer, such as the left cube in module 2. The convolution-kernel processing thus extracts features from the sub-picture.
Similarly, the left convolutional feature layer in module 2 undergoes further feature extraction through the pooling layer 22 to reduce feature redundancy. Pooling resembles the convolution-kernel processing in that a fixed-size unit area is scanned over the feature layer; the difference is that no matrix computation is performed, and the values inside the window are instead aggregated: max pooling takes the maximum pixel value in the window, while average pooling takes the mean of the pixel values. Pooling yields a pooled feature layer, such as the right cube in module 2. A convolutional layer and a pooling layer together form module 2; after n-1 such modules are chained, three fully connected layers (3 in fig. 2) follow, converting the three-dimensional feature maps into a one-dimensional fully connected layer.
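The scanning-and-aggregating operations just described can be sketched in pure Python. This is a minimal illustration, not the patent's implementation: it assumes a single-channel image, a "valid" convolution with stride 1, and non-overlapping 2 × 2 max pooling; the actual kernel sizes and strides are not specified in the text.

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation) with stride 1: slide the
    kernel over the image and sum the elementwise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(len(image) - kh + 1):
        row = []
        for x in range(len(image[0]) - kw + 1):
            row.append(sum(image[y + i][x + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: keep the maximum of each size x size window."""
    out = []
    for y in range(0, len(fmap) - size + 1, size):
        row = []
        for x in range(0, len(fmap[0]) - size + 1, size):
            row.append(max(fmap[y + i][x + j]
                           for i in range(size) for j in range(size)))
        out.append(row)
    return out
```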
Optionally, whether the image features of the object contained in each sub-picture conform to the grain features is judged through the softmax layer, and the judgment result is output.
The output layer classifies the output of the fully connected layers using a softmax classification function, judges whether the small-square regions in the sub-picture conform to the grain features, and outputs the grain / non-grain result.
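The softmax classification function turns the fully connected layer's raw scores into class probabilities. A minimal sketch (the two-class grain / non-grain setup and the example scores are illustrative; subtracting the maximum score is a standard trick for numerical stability):

```python
import math

def softmax(scores):
    """Convert a list of raw scores into probabilities that sum to 1."""
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Example scores for the classes (grain, non-grain); the higher probability wins.
probs = softmax([2.0, 0.5])
is_grain = probs[0] > probs[1]
```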
Optionally, after identifying the grains in each sub-picture, the method further comprises: performing contour extraction on the sub-pictures in which grains are identified, and setting the areas in which no grains are identified as a predetermined background to obtain a plurality of processed sub-pictures; and splicing the plurality of processed sub-pictures to obtain a restored grain picture.
In an alternative, the predetermined background may be a single color such as black, white, etc.
After the convolutional neural network has identified the grain and non-grain areas among the small-square regions of each sub-picture, all non-grain areas are set as the predetermined background and the contours of the grains in the grain areas are extracted. Each sub-picture is then spliced back according to the segmentation order, finally yielding the contours of all the rice grains.
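Splicing the processed tiles back in the segmentation order, while painting grain-free tiles as the background, can be sketched as follows. This is an illustrative sketch under stated assumptions: tiles are keyed by (row, column) as in the earlier segmentation sketch, a value of None marks a tile in which no grain was found, and the background value 0 stands in for black.

```python
def splice(tiles, tile=2, background=0):
    """Reassemble a dict of tiles keyed by (row, col) into one picture.

    Tiles whose value is None (no grain identified) are painted with a
    solid background, as described for the restored grain picture.
    """
    rows = 1 + max(r for r, _ in tiles)
    cols = 1 + max(c for _, c in tiles)
    blank = [[background] * tile for _ in range(tile)]
    image = []
    for r in range(rows):
        for y in range(tile):          # emit the tile row line by line
            image.append([])
            for c in range(cols):
                block = tiles.get((r, c)) or blank
                image[-1].extend(block[y])
    return image
```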
Optionally, before performing contour extraction on the sub-picture in which grains are identified and setting an area in which no grains are identified as a predetermined background to obtain a plurality of processed sub-pictures, the method further includes: judging whether grains exist in the plurality of sub-pictures or not; if the grain exists, identifying the sub-picture in which the grain exists, and determining the position of the grain in the sub-picture; if not, the sub-picture of the grain not existing is set as a predetermined background.
Before the sub-pictures in which grains were identified are processed, whether grains exist in each sub-picture can be judged in advance. If grains exist, contour extraction is performed on that sub-picture and the positions of the grains in it are determined; if not, the sub-picture is directly set as the predetermined background, which removes background noise and improves identification efficiency. Fig. 3 is a flowchart of identifying a grain image with a convolutional neural network according to an embodiment of the present application: the original picture is divided into a plurality of small pictures (200 × 200 pixels in size), the convolutional neural network then computes on the grains in each sub-picture, all pixels of sub-pictures without grains are set to black, and contour extraction is performed on the sub-pictures with grains. Each sub-picture is then spliced back in place according to the segmentation order, finally yielding the contours of all the grains, and the identified grain image is output.
Optionally, after the plurality of processed sub-pictures are spliced to obtain the restored grain picture, the method further includes: outputting the restored grain picture and extracting the grain contours in the restored grain picture; and determining quality parameters of the grains based on the grain contours in the restored grain picture.
In an alternative, the quality parameters of the grains may include, for example, the broken-rice rate and the aspect ratio.
In the above steps, the restored grain picture is obtained by splicing; the restored grain picture is then output, the grain contours are extracted, and the broken-rice rate and the aspect ratio are calculated to assess the quality of the grains. Fig. 4 is a flowchart of another method of identifying a grain image with a convolutional neural network according to an embodiment of the present application; unlike fig. 3, after contour extraction is performed on the whole grains in the spliced image, quality parameters such as the broken-rice rate and the aspect ratio can also be calculated.
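The quality parameters mentioned above can be illustrated with a toy computation: given each grain's contour as a list of (x, y) points, the aspect ratio follows from the axis-aligned bounding box, and the broken-rice rate is computed here as the fraction of grains shorter than a length threshold. The helper names and the threshold are illustrative assumptions, not from the patent; the national standard defines broken kernels more precisely.

```python
def aspect_ratio(contour):
    """Length/width ratio of the axis-aligned bounding box of one contour."""
    xs = [p[0] for p in contour]
    ys = [p[1] for p in contour]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    long_side, short_side = max(w, h), min(w, h)
    return long_side / short_side if short_side else float("inf")

def broken_rice_rate(contours, min_length):
    """Fraction of grains whose longest bounding-box side is below min_length."""
    def length(c):
        xs = [p[0] for p in c]
        ys = [p[1] for p in c]
        return max(max(xs) - min(xs), max(ys) - min(ys))
    broken = sum(1 for c in contours if length(c) < min_length)
    return broken / len(contours)
```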
Optionally, the method for identifying grains is applied to a cooking appliance, and the cooking appliance may include a heating component, a timing module, a decision module, a display module, a communication module and an alarm module.
Optionally, the decision module determines cooking data according to the set cooking data and the type of the grain, wherein the cooking data at least includes: heating data of the heating resistor, exhaust time of the exhaust valve and heating temperatures of different cooking stages; and controlling the cooking appliance to cook the grains based on the cooking data.
In one alternative, the decision module has actuators inside it, such as heating resistors, timing modules, etc. The set cooking data may be cooking data preset by the user, such as taste and preference (soft, moderate, hard, porridge cooking, soup cooking, etc.).
Optionally, before the decision module determines the cooking data according to the set cooking data and the type of the grains, the method further comprises: the decision-making module receives the types of the grains in the grain images transmitted by the communication module and receives cooking data received by the external interactive interface.
In an alternative, the communication module may be a wired communication module or a wireless communication module, such as a wifi module. The external interactive interface may be a display panel disposed on an outer surface of the cooking appliance, or may be a remote controller.
In the actual cooking process, the grain type identified by the convolutional neural network is transmitted to the decision module through the communication module, and the decision module can select the optimal cooking data according to the cooking data and the grain type preset by a user.
Optionally, the communication module is further configured to receive an update instruction transmitted by the remote server, where the update instruction is used to upgrade a function of the cooking appliance.
A cooking appliance with fixed functions cannot satisfy users whose requirements keep growing. After new functions are developed, the server can transmit a new version of the operating program to the rice cooker through the communication module, achieving remote updating and a more satisfactory service experience.
According to the above scheme, a picture containing grains is acquired, the picture is segmented to obtain a plurality of sub-pictures, and the plurality of sub-pictures are processed using a convolutional neural network model to identify the grains in each sub-picture. Compared with the prior art, this scheme, based on artificial-intelligence techniques, removes background noise by means of image segmentation, extracts the grain contours, achieves the purpose of grain identification, and solves the technical problems of slow speed, high cost and low accuracy of grain identification in images in the prior art.
Example 2
According to an embodiment of the present invention, there is provided an apparatus for identifying grains, and fig. 5 is a schematic diagram of the apparatus for identifying grains according to an embodiment of the present application. As shown in fig. 5, the apparatus includes:
An obtaining module 502, configured to obtain a picture containing grains.
A segmentation module 504, configured to segment the picture to obtain a plurality of sub-pictures.
An identifying module 506, configured to process the plurality of sub-pictures using a convolutional neural network model and identify the grains in each sub-picture.
Optionally, the identification module comprises: an obtaining sub-module, configured to obtain the plurality of sub-pictures; an extraction module, configured to extract image features of the object contained in each sub-picture; and a judging module, configured to judge whether the image features of the object contained in each sub-picture conform to grain features and, if so, determine that the object is a grain.
Optionally, the convolutional neural network model at least comprises: an input layer, a convolution layer, a pooling layer, a fully connected layer and a softmax layer, wherein the plurality of sub-pictures are acquired through the input layer.
Optionally, the extraction module comprises: a scanning module, configured to scan the corresponding sub-picture with each convolution kernel in the convolution layer to obtain feature maps of the object contained in each sub-picture; a pooling module, configured to perform redundancy-removal processing on the feature maps of the object contained in each sub-picture through the pooling layer; and a conversion module, configured to convert the redundancy-removed feature maps through at least one fully connected layer to obtain the image features of the object.
Optionally, the judging module comprises a judging sub-module, configured to judge, through the softmax layer, whether the image features of the object contained in each sub-picture conform to grain features, and to output a judgment result.
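The pipeline the modules above describe (input layer, convolution, pooling, fully connected layer, softmax) can be illustrated with a toy NumPy forward pass. This is a hedged sketch with randomly initialized weights, not the patent's trained network; the layer sizes, the ReLU activation, and the two-class output (grain / not grain) are illustrative assumptions.

```python
import numpy as np

def conv2d(x, kernels):
    """Valid cross-correlation of a 2-D input with a bank of kernels."""
    kh, kw = kernels.shape[1:]
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((kernels.shape[0], oh, ow))
    for k, ker in enumerate(kernels):
        for i in range(oh):
            for j in range(ow):
                out[k, i, j] = np.sum(x[i:i + kh, j:j + kw] * ker)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling over each feature map (redundancy removal)."""
    c, h, w = x.shape
    h2, w2 = h // size, w // size
    x = x[:, :h2 * size, :w2 * size].reshape(c, h2, size, w2, size)
    return x.max(axis=(2, 4))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
sub_picture = rng.random((16, 16))              # one sub-picture (input layer)
kernels = rng.standard_normal((4, 3, 3))        # convolution layer: 4 kernels
fc_weights = rng.standard_normal((2, 4 * 7 * 7))  # fully connected layer: 2 classes

features = np.maximum(conv2d(sub_picture, kernels), 0)  # convolution + ReLU
pooled = max_pool(features)                              # pooling layer
probs = softmax(fc_weights @ pooled.ravel())             # FC + softmax
print(probs)  # two class probabilities (grain / not grain) summing to 1
```

The softmax output is what the judging sub-module would compare against a threshold to decide whether the object in the sub-picture conforms to grain features.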
Optionally, the apparatus further comprises: an extraction sub-module, configured to, after the grains in each sub-picture are identified, extract contours from the sub-pictures in which grains are identified and set the areas in which no grain is identified as a predetermined background, so as to obtain a plurality of processed sub-pictures; and a splicing module, configured to splice the plurality of processed sub-pictures to obtain a reduced grain picture.
Optionally, the apparatus further comprises a grain judging module, configured to, before the contours are extracted from the sub-pictures in which grains are identified and the areas in which no grain is identified are set as the predetermined background to obtain the plurality of processed sub-pictures, judge whether grains exist in the plurality of sub-pictures; if grains exist, identify the sub-pictures in which the grains exist and determine the positions of the grains in the sub-pictures; and if not, set the sub-pictures in which no grain exists as the predetermined background.
Optionally, the apparatus further comprises: an output module, configured to, after the plurality of processed sub-pictures are spliced to obtain the reduced grain picture, output the reduced grain picture and extract the grain contours in the reduced grain picture; and a determining module, configured to determine quality parameters of the grains based on the grain contours in the reduced grain picture.
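The masking-and-splicing behaviour of the modules above can be sketched as follows. This is an illustrative NumPy sketch under assumed conventions (row-major tile order, a zero-valued predetermined background, and a simple pixel-count ratio standing in for the contour-based quality parameter); it is not the patented implementation.

```python
import numpy as np

def mask_and_stitch(sub_pictures, has_grain, grid_shape, background=0):
    """Set sub-pictures without grain to a predetermined background,
    then splice the processed sub-pictures back into one picture."""
    processed = [tile if flag else np.full_like(tile, background)
                 for tile, flag in zip(sub_pictures, has_grain)]
    rows, cols = grid_shape
    tile_rows = [np.hstack(processed[r * cols:(r + 1) * cols])
                 for r in range(rows)]
    return np.vstack(tile_rows)

def grain_area_ratio(picture, background=0):
    """A toy quality parameter: fraction of non-background pixels,
    a stand-in for the contour-based measurements in the text."""
    return float(np.count_nonzero(picture != background) / picture.size)

tiles = [np.ones((4, 4)) * i for i in range(4)]  # 2x2 grid of sub-pictures
flags = [False, True, True, False]               # grain detected in tiles 1 and 2
restored = mask_and_stitch(tiles, flags, (2, 2))
print(restored.shape, grain_area_ratio(restored))  # → (8, 8) 0.5
```

In a full system, the `has_grain` flags would come from the softmax output of the network, and the quality parameter would be computed from actual extracted contours rather than a pixel count.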
It should be noted that, for optional or preferred implementations of this embodiment, reference may be made to the relevant description in Embodiment 1; however, the present invention is not limited to the disclosure in Embodiment 1, and details are not repeated here.
Example 3
According to an embodiment of the present invention, there is provided a storage medium comprising a stored program, wherein, when the program is executed, an apparatus in which the storage medium is located is controlled to perform the method of identifying grains in Embodiment 1.
Example 4
According to an embodiment of the present invention, there is provided a mobile device including: a processor configured to run a program, wherein the method of identifying grains in Embodiment 1 is performed when the program runs.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units may be merely a division of logical functions, and other divisions are possible in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling, direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or take other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that it is obvious to those skilled in the art that various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be considered as the protection scope of the present invention.

Claims (18)

1. A method of identifying grain comprising:
acquiring a picture containing grains;
segmenting the picture to obtain a plurality of sub-pictures;
processing the plurality of sub-pictures using a convolutional neural network model to identify grains in each sub-picture.
2. The method of claim 1, wherein processing the plurality of sub-pictures using a convolutional neural network model to identify grains in each sub-picture comprises:
acquiring the plurality of sub-pictures;
extracting image characteristics of an object contained in each sub-picture;
and judging whether the image characteristics of the object contained in each sub-picture accord with the grain characteristics, and if so, determining that the object is the grain.
3. The method of claim 1, wherein the convolutional neural network model at least comprises: an input layer, a convolution layer, a pooling layer, a fully connected layer and a softmax layer, wherein the plurality of sub-pictures are acquired through the input layer.
4. The method of claim 2, wherein extracting image features of the object contained in each sub-picture comprises:
scanning the corresponding sub-picture through each convolution kernel in the convolution layer to obtain feature map layers of the object contained in each sub-picture;
performing redundancy-removal processing on the feature map layers of the object contained in each sub-picture through the pooling layer;
and converting the feature map layers subjected to the redundancy-removal processing through at least one fully connected layer to obtain the image features of the object.
5. The method according to claim 3, wherein the softmax layer judges whether the image features of the object contained in each sub-picture conform to the grain features, and outputs a judgment result.
6. The method according to any one of claims 1 to 5, wherein after identifying the grain in each sub-picture, the method further comprises:
carrying out outline extraction on the sub-pictures in which the grains are identified, and setting the areas in which the grains are not identified as a preset background to obtain a plurality of processed sub-pictures;
and splicing the plurality of processed sub-pictures to obtain a reduced grain picture.
7. The method of claim 6, wherein before the sub-picture with the grain identified is subjected to contour extraction and the area without the grain identified is set as a predetermined background, the method further comprises:
judging whether the grains exist in a plurality of sub-pictures or not;
if the grain exists, identifying the sub-picture in which the grain exists, and determining the position of the grain in the sub-picture;
if not, setting the sub-picture without the grain as the preset background.
8. The method of claim 6, wherein after splicing the plurality of processed sub-pictures to obtain the reduced grain picture, the method further comprises:
outputting the reduced grain picture, and extracting a grain profile in the reduced grain picture;
determining a quality parameter of the grain based on a grain profile in the reduced grain picture.
9. An apparatus for identifying grain, comprising:
the acquisition module is used for acquiring pictures containing grains;
the segmentation module is used for segmenting the picture to obtain a plurality of sub-pictures;
and the identification module is used for processing the sub-pictures by using a convolutional neural network model to identify the grains in each sub-picture.
10. The apparatus of claim 9, wherein the identification module comprises:
the obtaining sub-module is used for obtaining the plurality of sub-pictures;
the extraction module is used for extracting the image characteristics of the object contained in each sub-picture;
and the judging module is used for judging whether the image characteristics of the object contained in each sub-picture accord with the grain characteristics or not, and if so, determining that the object is the grain.
11. The apparatus of claim 10, wherein the convolutional neural network model at least comprises: an input layer, a convolution layer, a pooling layer, a fully connected layer and a softmax layer, wherein the plurality of sub-pictures are acquired through the input layer.
12. The apparatus of claim 10, wherein the extraction module comprises:
the scanning module, configured to scan the corresponding sub-picture through each convolution kernel in the convolution layer to obtain feature map layers of the object contained in each sub-picture;
the pooling module, configured to perform redundancy-removal processing on the feature map layers of the object contained in each sub-picture through the pooling layer;
and the conversion module, configured to convert the feature map layers subjected to the redundancy-removal processing through at least one fully connected layer to obtain the image features of the object.
13. The apparatus according to claim 11, wherein the judging module comprises a judging sub-module, configured to judge whether the image feature of the object included in each sub-picture conforms to the grain feature through the softmax layer, and output a judgment result.
14. The apparatus of any one of claims 9 to 13, further comprising:
the extraction sub-module is used for extracting the outline of the sub-picture in which the grains are identified after the grains in each sub-picture are identified, and setting the area in which the grains are not identified as a preset background to obtain a plurality of processed sub-pictures;
and the splicing module is used for splicing the plurality of processed sub-pictures to obtain a reduced grain picture.
15. The apparatus of claim 14, further comprising:
the grain judging module, configured to, before the contours are extracted from the sub-pictures in which grains are identified and the areas in which no grain is identified are set as the predetermined background to obtain the plurality of processed sub-pictures, judge whether grains exist in the plurality of sub-pictures; if grains exist, identify the sub-pictures in which the grains exist and determine the positions of the grains in the sub-pictures; and if not, set the sub-pictures in which no grain exists as the predetermined background.
16. The apparatus of claim 14, further comprising:
the output module is used for outputting the reduced grain picture after the plurality of processed sub-pictures are spliced to obtain the reduced grain picture and extracting the grain outline in the reduced grain picture;
and the determining module is used for determining the quality parameters of the grains based on the grain outlines in the reduced grain pictures.
17. A storage medium comprising a stored program, wherein the program, when executed, controls an apparatus on which the storage medium is located to perform the method of identifying grain according to any one of claims 1 to 8.
18. A processor for running a program, wherein the method of identifying grain according to any one of claims 1 to 8 is performed when the program is running.
CN201910033093.0A 2019-01-14 2019-01-14 Method and device for identifying grains Pending CN111435444A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910033093.0A CN111435444A (en) 2019-01-14 2019-01-14 Method and device for identifying grains

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910033093.0A CN111435444A (en) 2019-01-14 2019-01-14 Method and device for identifying grains

Publications (1)

Publication Number Publication Date
CN111435444A true CN111435444A (en) 2020-07-21

Family

ID=71580595

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910033093.0A Pending CN111435444A (en) 2019-01-14 2019-01-14 Method and device for identifying grains

Country Status (1)

Country Link
CN (1) CN111435444A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101275824A * 2008-05-16 2008-10-01 China Agricultural University Method for rice grain shape detection
CN101788497A (en) * 2009-12-30 2010-07-28 深圳先进技术研究院 Embedded bean classifying system based on image recognition technology
WO2018185786A1 (en) * 2017-04-06 2018-10-11 Nebulaa Innovations Private Limited Method and system for assessing quality of commodities


Similar Documents

Publication Publication Date Title
JP7413400B2 (en) Skin quality measurement method, skin quality classification method, skin quality measurement device, electronic equipment and storage medium
CN108108767A (en) A kind of cereal recognition methods, device and computer storage media
WO2021110066A1 (en) Food maturity level identification method and device, and computer storage medium
CN108596197A (en) A kind of seal matching process and device
CN112215861A (en) Football detection method and device, computer readable storage medium and robot
CN113808277B (en) Image processing method and related device
CN110009722A (en) Three-dimensional rebuilding method and device
CN108710916B (en) Picture classification method and device
CN101551853A (en) Human ear detection method under complex static color background
CN114723601B (en) Model structured modeling and rapid rendering method under virtual scene
CN116259052A (en) Method, device and cooking equipment for identifying food maturity
CN116645569A A method and system for colorizing infrared images based on a generative adversarial network
CN111383054A (en) Advertisement checking method and device
CN111435541A (en) Method, device and cooking utensil for obtaining chalkiness of rice grains
CN108090517A (en) A kind of cereal recognition methods, device and computer storage media
CN111435427B (en) Method and device for identifying rice and cooking utensil
CN117593540A (en) Pressure injury staged identification method based on intelligent image identification technology
CN113139557A (en) Feature extraction method based on two-dimensional multivariate empirical mode decomposition
CN116993982A (en) Infrared image segmentation method and system
CN108665450A (en) A kind of corn ear mechanical damage area recognizing method
CN115731172A (en) Crack detection method, device and medium based on image enhancement and texture extraction
CN111435444A (en) Method and device for identifying grains
CN113592849A (en) External insulation equipment fault diagnosis method based on convolutional neural network and ultraviolet image
US20170274285A1 (en) Method and apparatus for automating the creation of a puzzle pix playable on a computational device from a photograph or drawing
CN111435426A (en) Method and device for determining cooking mode based on rice grain recognition result and cooking appliance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200721