Disclosure of Invention
The invention provides a land utilization rate calculation method, a land utilization rate calculation device, an electronic device, and a storage medium, wherein the method can solve one or more of the technical problems described above.
A first aspect of an embodiment of the present invention provides a method for calculating a land utilization rate, the method including:
after determining a target land and usage information of the target land, acquiring a monitoring image of the target land according to the usage information;
acquiring above-ground attachments of the target land from the monitoring image, and, when the above-ground attachments are identical to objects contained in the usage information, acquiring feature data of the above-ground attachments from the monitoring image, wherein the feature data comprises: the number of objects and the object density;
and calculating the land utilization rate from the feature data according to usage rules contained in the usage information.
In a possible implementation manner of the first aspect, the determining that the above-ground attachments are identical to the objects contained in the usage information includes:
calling a preset recognition model to recognize the above-ground attachments to obtain recognition information, wherein the recognition information comprises: an object chromaticity value and an object contour;
searching the usage information for a corresponding usage object according to the object contour, and acquiring the chromaticity value corresponding to the usage object as the usage chromaticity value;
if the object chromaticity value is the same as the usage chromaticity value, determining that the above-ground attachments are identical to the objects contained in the usage information;
and if the object chromaticity value differs from the usage chromaticity value, or no corresponding usage object can be found in the usage information according to the object contour, determining that the above-ground attachments are not identical to the objects contained in the usage information.
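The matching logic of this implementation can be sketched as follows; the dictionary-based usage-information format, the contour labels, and the exact-equality chromaticity test are illustrative assumptions, since the embodiment does not fix a data representation:

```python
# Hypothetical sketch of the contour/chromaticity matching step.
# The usage-information structure and exact-equality test are assumptions.

def matches_usage(obj_contour, obj_chroma, usage_info):
    """Return True only if a usage object with the recognized contour exists
    in the usage information and its chromaticity equals the observed one."""
    declared = usage_info.get(obj_contour)   # look up usage object by contour
    if declared is None:                     # no corresponding usage object
        return False
    return declared == obj_chroma            # chromaticity values must match

# usage information declared for the land: contour label -> chromaticity value
usage_info = {"peanut": 128, "corn": 96}

assert matches_usage("peanut", 128, usage_info) is True   # same chromaticity
assert matches_usage("peanut", 90, usage_info) is False   # chromaticity differs
assert matches_usage("wheat", 128, usage_info) is False   # contour not declared
```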
In a possible implementation manner of the first aspect, the acquiring the above-ground attachments of the target land from the monitoring image includes:
if there is a single monitoring image, preprocessing the monitoring image to obtain a processed image;
and calling a preset CNN neural network to recognize the processed image to obtain the above-ground attachments, wherein the preset CNN neural network comprises a first convolution layer, a first pooling layer, a second convolution layer, a second pooling layer, a first fully connected layer, a second fully connected layer, and an output layer which are connected in sequence.
In a possible implementation manner of the first aspect, the acquiring the above-ground attachments of the target land from the monitoring image includes:
if there are multiple monitoring images, performing sharpness adjustment processing on each monitoring image to obtain multiple adjusted images;
extracting view features from each adjusted image to obtain a plurality of two-dimensional view features, wherein the above-ground attachment is a three-dimensional object;
and after combining the two-dimensional view features to obtain combined features, performing classification decision processing on the combined features with a preset similarity matching model, and determining the above-ground attachments according to the resulting classification categories.
In a possible implementation manner of the first aspect, the acquiring a monitoring image of the target land according to the usage information includes:
if the land type contained in the usage information is agricultural land, determining whether the camera of the real-time shooting device is blocked in the horizontal direction toward the target land;
if the camera of the real-time shooting device is not blocked in the horizontal direction toward the target land, calling the camera to shoot toward the center of the target land to obtain the monitoring image;
and if the camera of the real-time shooting device is blocked in the horizontal direction toward the target land, acquiring the horizontal separation distance between the camera and the center of the target land, controlling the camera to rise according to the separation distance, and then shooting toward the center of the target land to obtain the monitoring image.
In a possible implementation manner of the first aspect, after the step of calculating the land utilization rate from the feature data according to the usage rules contained in the usage information, the method further includes:
if the land utilization rate is lower than a preset utilization rate, acquiring the current time node, and searching for the historical density according to the current time node;
and if the difference between the object density and the historical density is larger than a preset value, generating land planning suggestion information and sending it to a management terminal for reference by management staff.
A second aspect of an embodiment of the present invention provides a land utilization rate calculation device, the device including:
an image acquisition module configured to acquire, after determining a target land and usage information of the target land, a monitoring image of the target land according to the usage information;
a feature acquisition module configured to acquire above-ground attachments of the target land from the monitoring image and, when the above-ground attachments are identical to objects contained in the usage information, acquire feature data of the above-ground attachments from the monitoring image, wherein the feature data comprises: the number of objects and the object density;
and a utilization rate calculation module configured to calculate the land utilization rate from the feature data according to usage rules contained in the usage information.
In a possible implementation manner of the second aspect, the determining that the above-ground attachments are identical to the objects contained in the usage information includes:
calling a preset recognition model to recognize the above-ground attachments to obtain recognition information, wherein the recognition information comprises: an object chromaticity value and an object contour;
searching the usage information for a corresponding usage object according to the object contour, and acquiring the chromaticity value corresponding to the usage object as the usage chromaticity value;
if the object chromaticity value is the same as the usage chromaticity value, determining that the above-ground attachments are identical to the objects contained in the usage information;
and if the object chromaticity value differs from the usage chromaticity value, or no corresponding usage object can be found in the usage information according to the object contour, determining that the above-ground attachments are not identical to the objects contained in the usage information.
In a possible implementation manner of the second aspect, the acquiring the above-ground attachments of the target land from the monitoring image includes:
if there is a single monitoring image, preprocessing the monitoring image to obtain a processed image;
and calling a preset CNN neural network to recognize the processed image to obtain the above-ground attachments, wherein the preset CNN neural network comprises a first convolution layer, a first pooling layer, a second convolution layer, a second pooling layer, a first fully connected layer, a second fully connected layer, and an output layer which are connected in sequence.
In a possible implementation manner of the second aspect, the acquiring the above-ground attachments of the target land from the monitoring image includes:
if there are multiple monitoring images, performing sharpness adjustment processing on each monitoring image to obtain multiple adjusted images;
extracting view features from each adjusted image to obtain a plurality of two-dimensional view features, wherein the above-ground attachment is a three-dimensional object;
and after combining the two-dimensional view features to obtain combined features, performing classification decision processing on the combined features with a preset similarity matching model, and determining the above-ground attachments according to the resulting classification categories.
In a possible implementation manner of the second aspect, the acquiring a monitoring image of the target land according to the usage information includes:
if the land type contained in the usage information is agricultural land, determining whether the camera of the real-time shooting device is blocked in the horizontal direction toward the target land;
if the camera of the real-time shooting device is not blocked in the horizontal direction toward the target land, calling the camera to shoot toward the center of the target land to obtain the monitoring image;
and if the camera of the real-time shooting device is blocked in the horizontal direction toward the target land, acquiring the horizontal separation distance between the camera and the center of the target land, controlling the camera to rise according to the separation distance, and then shooting toward the center of the target land to obtain the monitoring image.
In a possible implementation manner of the second aspect, the apparatus further includes:
a density acquisition module configured to acquire the current time node and search for the historical density according to the current time node if the land utilization rate is lower than a preset utilization rate;
and a suggestion sending module configured to generate land planning suggestion information and send it to the management terminal for reference by management staff if the difference between the object density and the historical density is larger than a preset value.
A third aspect of an embodiment of the present invention provides a land utilization rate calculation system, the system comprising a cloud system to which the above land utilization rate calculation method is applicable, and one or more real-time shooting devices, wherein the cloud system is in communication connection with each real-time shooting device.
Compared with the prior art, the land utilization rate calculation method, device, electronic device, and storage medium provided by the embodiments of the present invention have the following beneficial effects. After the target land and its usage information are determined, a monitoring image of the target land can be acquired according to the usage information; the above-ground attachments of the target land are acquired from the monitoring image, and when the above-ground attachments are identical to the objects contained in the usage information, feature data of the above-ground attachments are acquired from the monitoring image; and the land utilization rate is calculated from the feature data according to the usage rules contained in the usage information. Because the invention performs recognition and calculation on images captured in real time, it not only simplifies the calculation flow and shortens the processing time, thereby improving calculation efficiency, but also effectively improves calculation accuracy by combining the number of objects and the object density in a comprehensive calculation.
Detailed Description
The embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
In order to solve the above problems, a land utilization calculating method according to an embodiment of the present application will be described and illustrated in detail by the following specific examples.
Referring to fig. 1, a flow chart of a land utilization calculation method according to an embodiment of the present invention is shown.
In an embodiment, the land utilization rate calculation method is suitable for a cloud platform, cloud system, or background management system, which can perform rapid calculation based on the usage information and monitoring images of the target land, simplifying the calculation flow and improving calculation accuracy.
In an embodiment, the land utilization rate calculation method may calculate the utilization rate of agricultural land. For example, if corn is planted on a farmland, the land utilization rate can be calculated from data such as the planting density and planting time. As another example, if wheat is planted on the agricultural land, the land utilization rate can be calculated from data such as the planting density, the planting time, and the height of the wheat.
The method for calculating the land utilization rate may include:
S11, after the target land and the usage information of the target land are determined, a monitoring image of the target land is acquired according to the usage information.
Specifically, the target land may be an agricultural land for which the user needs to perform a land utilization rate calculation, for example, an agricultural land in xx village, xx town, xx city. After the target land is determined, the usage information of the target land may be acquired. The usage information may be the information declared to the relevant departments for the land, and may include: the type of plant to be planted, the land area, the planting time, the land rights, and the like. For example, if the target land is a suburban agricultural land for planting peanuts, the usage information may include: the land is located xx meters along xx road, xx village, xx town, xx city; the land area is 1.5 mu; the planting start time is xx month xx day; the predicted planting area is 1 mu; and the land right holders are Zhang San and others.
After the usage information of the target land is determined, a real-time shooting device (for example, a camera or a remote sensing tower) arranged beside the target land can be called, according to the land position in the usage information, to shoot the target land and obtain a monitoring image of it.
In an embodiment, the real-time shooting device may be in communication connection with a cloud system, and may be a tower-type ground remote sensing platform. Its height generally ranges from about 30 to 300 meters, and detection equipment of various bands and types, such as remote sensing sensors and cameras, can be mounted on and removed from it to capture images and obtain various kinds of remote sensing data and imagery.
In one implementation, the real-time shooting device may be disposed at a side of the target land. For example, if the target land is an agricultural land, the real-time shooting device may be disposed beside a field ridge of the agricultural land, and the surrounding fields can be monitored by it.
In specific operation, the cloud system can extract the land position from the usage information, determine a nearby real-time shooting device according to that position, and then control the camera of the real-time shooting device to shoot toward the target land, thereby obtaining a monitoring image.
In practice, the actual position of each real-time shooting device differs, and its shooting direction may be blocked, so that the captured image may lack sharpness or fail to cover the whole target land, which biases subsequent calculation and reduces calculation accuracy. For example, a real-time shooting device set on a field path between two farmlands may shoot the left and right farmlands at the same time. The plants in the left farmland are short, so the device can capture a full view of that farmland; the plants in the right farmland are taller, and some of them block the camera, so the device cannot capture a full view of that farmland.
To avoid the above situation, as an example, step S11 may include the following sub-steps:
S111, if the land type contained in the usage information is agricultural land, determining whether the camera of the real-time shooting device is blocked in the horizontal direction toward the target land.
S112, if the camera of the real-time shooting device is not blocked in the horizontal direction toward the target land, calling the camera to shoot toward the center of the target land to obtain a monitoring image.
S113, if the camera of the real-time shooting device is blocked in the horizontal direction toward the target land, acquiring the horizontal separation distance between the camera and the center of the target land, controlling the camera to rise according to the separation distance, and then shooting toward the center of the target land to obtain a monitoring image.
Specifically, it may first be determined whether the land type contained in the usage information is agricultural land; if it is not, warning information or an error prompt may be fed back to the user.
If the land type contained in the usage information is agricultural land, it may be determined whether the camera of the real-time shooting device is blocked in the horizontal direction toward the target land, specifically, whether there is any obstruction within a preset distance in front of the camera.
If there is no obstruction within the preset distance in front of the camera, the camera of the real-time shooting device can be called directly to shoot toward the center of the target land to obtain a monitoring image.
Otherwise, if there is an obstruction within the preset distance in front of the camera, it is determined that the camera of the real-time shooting device is blocked in the horizontal direction toward the target land.
To reduce the influence of the obstruction, the horizontal separation distance between the camera and the center of the target land can be acquired, the camera can be controlled to rise according to that distance, and the raised camera can then be controlled to shoot toward the center of the target land to obtain a monitoring image.
For example, assuming the horizontal distance between the camera and the center of the target land is 5 meters and the current height of the camera is 3 meters, the lifting device of the real-time shooting device can be called to raise the camera by 5 meters so that its height becomes 8 meters, and the camera then shoots toward the center of the target land to obtain a monitoring image.
Alternatively, if the adjusted height would exceed the maximum height of the camera, the camera may be adjusted to the maximum height.
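The height-adjustment rule from this example, together with the maximum-height cap, can be sketched as follows (raising the camera by the horizontal gap is taken from the 5-meter example above; the function name and the clamp form are illustrative):

```python
def adjusted_height(current_h, horizontal_gap, max_h):
    """Raise the camera by the horizontal gap to the plot centre,
    clamped to the camera's maximum height (both rules from the text)."""
    return min(current_h + horizontal_gap, max_h)

# Example from the description: 3 m camera, 5 m gap -> raised to 8 m.
assert adjusted_height(3, 5, 30) == 8
# If the target would exceed the maximum height, use the maximum instead.
assert adjusted_height(3, 5, 6) == 6
```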
S12, the above-ground attachments of the target land are acquired from the monitoring image, and when the above-ground attachments are identical to the objects contained in the usage information, feature data of the above-ground attachments are acquired from the monitoring image, wherein the feature data comprises: the number of objects and the object density.
In an embodiment, the above-ground attachments of the target land may be acquired from the monitoring image; an above-ground attachment may be an object planted or placed on the land. It may then be determined whether the above-ground attachments are identical to the objects contained in the usage information; if so, the currently detected land is determined to meet the requirements, and the feature data may be extracted from the monitoring image. The feature data includes: the number of objects and the object density.
In an embodiment, if it is determined that the above-ground attachments differ from the objects contained in the usage information, the land may be in illegal use; the land utilization rate of the land may then be determined to be 0, and alarm information may be sent to the background terminal.
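The usage rule itself is not spelled out in this section, so the following is only a hypothetical sketch of how the number of objects and the object density might yield a utilization rate, assuming the usage information declares an expected planting density:

```python
def land_utilization(object_count, land_area, expected_density):
    """Illustrative utilization rate: the observed planting density relative
    to the density declared in the usage information, capped at 100%.
    The formula is an assumption, not the embodiment's actual usage rule."""
    observed_density = object_count / land_area   # objects per unit area
    return min(observed_density / expected_density, 1.0)

# e.g. 600 plants on 1.0 mu where the declared density is 800 plants per mu
assert land_utilization(600, 1.0, 800) == 0.75
```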
In an alternative embodiment, one or more monitoring images may be captured. For real-time recognition of a single monitoring image, as an example, the acquiring of the above-ground attachments of the target land from the monitoring image may include the following sub-steps:
S21, if there is a single monitoring image, preprocessing the monitoring image to obtain a processed image.
S22, calling a preset CNN neural network to recognize the processed image to obtain the above-ground attachments, wherein the preset CNN neural network comprises a first convolution layer, a first pooling layer, a second convolution layer, a second pooling layer, a first fully connected layer, a second fully connected layer, and an output layer which are connected in sequence.
In one implementation, if there is a single monitoring image, it may be preprocessed to obtain the processed image. The preprocessing may include: null-value removal, boundary removal, normalization, and noise removal, which may be performed in a conventional manner and adjusted as needed. After the null values and the image boundary are removed, the minimum remaining boundary is taken as the scope for subsequent analysis.
To eliminate phase interference and remove noise, temporal median filtering is applied. The rationale is that anomalous reflectivity values in each band fall in abnormal maximum or, in most cases, abnormal minimum regions, so taking the median over time suppresses them effectively.
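A minimal sketch of such a temporal (time-domain) median filter over a stack of frames, in plain Python:

```python
from statistics import median

def temporal_median(frames):
    """Median across a stack of equally sized frames, pixel by pixel:
    abnormal maxima/minima in any single frame are suppressed."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[median(f[r][c] for f in frames) for c in range(cols)]
            for r in range(rows)]

frames = [
    [[10, 10], [10, 10]],
    [[12, 255], [11, 0]],    # one frame with outlier pixels
    [[11, 11], [12, 12]],
]
# The 255 and 0 outliers are filtered out by the per-pixel median.
assert temporal_median(frames) == [[11, 11], [11, 10]]
```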
After preprocessing is completed, the preset CNN neural network can be called to recognize the processed image and obtain the above-ground attachments, wherein the preset CNN neural network comprises a first convolution layer, a first pooling layer, a second convolution layer, a second pooling layer, a first fully connected layer, a second fully connected layer, and an output layer which are connected in sequence.
During recognition, the convolution and pooling layers extract features, and the extracted features are input to the fully connected layers for recognition.
Specifically, in the first convolution layer of the CNN, a filter covering a local region of the image scans the whole image; the pixels covered by the local region undergo a multiply-accumulate operation with the filter, and the result is connected to a node of the next layer. If the image to be scanned is a grayscale image (i.e., it has only one color channel), each filter is likewise a two-dimensional matrix.
It should be noted that if the image to be scanned by the convolution layer is a color image (i.e., it has the three RGB channels), the filters have a matching number of channels. One filter convolved over an image (whether single-channel or multi-channel) yields one feature plane, and multiple filters yield multiple planes. The process of a filter scanning (convolving) the image can therefore be regarded as feature extraction: each convolution extracts one kind of feature from the image (for example, multiple filters may extract the pixel features of the red, green, and blue channels respectively).
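The multiply-accumulate scan described above can be sketched for a single-channel image as follows (like most CNN libraries, this computes cross-correlation, i.e. the filter is not flipped):

```python
def conv2d(image, kernel):
    """Slide a filter over a single-channel image ('valid' region only);
    each output element is the multiply-accumulate of the covered patch."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[r + i][c + j] * kernel[i][j]
                 for i in range(kh) for j in range(kw))
             for c in range(out_w)]
            for r in range(out_h)]

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
kernel = [[1, 0],
          [0, 1]]   # sums each pixel with its lower-right neighbour
assert conv2d(image, kernel) == [[6, 8], [12, 14]]
```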
The primary effect of the pooling layer is to compress the image. When a pooling layer is sandwiched between successive convolution layers, it compresses (one may also say filters) the data produced by the preceding convolution layer, which helps reduce overfitting.
For example, with a 3×3 max pooling window, the largest number in each window becomes the corresponding element of the output matrix: if the largest number in the first 3×3 window of the input matrix is 5, the first element of the output matrix is 5, and so on. Note that the windows do not overlap, so no pixel is filtered repeatedly.
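A minimal sketch of this non-overlapping 3×3 max pooling:

```python
def max_pool(matrix, size=3):
    """Non-overlapping size x size max pooling: each window contributes
    its largest value, and no pixel is scanned twice."""
    return [[max(matrix[r + i][c + j]
                 for i in range(size) for j in range(size))
             for c in range(0, len(matrix[0]), size)]
            for r in range(0, len(matrix), size)]

# 6x6 input -> 2x2 output; the first 3x3 window's maximum is 5.
m = [[1, 2, 5, 0, 1, 0],
     [3, 1, 0, 2, 0, 4],
     [0, 4, 2, 1, 7, 1],
     [2, 0, 1, 9, 0, 3],
     [1, 6, 0, 2, 2, 0],
     [0, 3, 2, 1, 8, 1]]
assert max_pool(m) == [[5, 7], [6, 9]]
```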
The fully connected layers use the outputs of the preceding convolution and pooling layers as feature values to train the network, and a back-propagation algorithm corrects the parameters and thresholds of the fully connected layers as well as the weights of the preceding convolution filters.
The convolutional neural network (CNN) feeds the values produced by this series of operations (convolution and pooling) on the image's pixel values into model training as features. Its advantages are that it preserves the multidimensional information of the image and improves the accuracy of model classification and recognition.
In an alternative embodiment, one or more monitoring images may be captured. For real-time recognition of multiple monitoring images, as an example, the acquiring of the above-ground attachments of the target land from the monitoring images may include the following sub-steps:
S31, if there are multiple monitoring images, performing sharpness adjustment processing on each monitoring image to obtain multiple adjusted images.
S32, extracting view features from each adjusted image to obtain a plurality of two-dimensional view features, wherein the above-ground attachment is a three-dimensional object.
S33, after combining the two-dimensional view features to obtain combined features, performing classification decision processing on the combined features with a preset similarity matching model, and determining the above-ground attachments according to the resulting classification categories.
In an embodiment, the real-time shooting device may control the camera to shoot the target land multiple times to obtain multiple monitoring images. When there are multiple monitoring images, in order to recognize them comprehensively, sharpness adjustment processing may be performed on each to obtain multiple adjusted images. Specifically, a super-resolution reconstruction algorithm may be used for the sharpness adjustment.
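Super-resolution reconstruction is beyond a short sketch; as a much simpler stand-in for sharpness adjustment (not the algorithm named above), the following applies basic unsharp masking, i.e. the original plus a multiple of its difference from a 3×3 box blur:

```python
def sharpen(image, amount=1.0):
    """Unsharp masking on interior pixels: original plus 'amount' times
    the difference from a 3x3 box blur. A simple illustrative stand-in
    for the sharpness adjustment step, not super-resolution itself."""
    out = [row[:] for row in image]
    for r in range(1, len(image) - 1):
        for c in range(1, len(image[0]) - 1):
            blur = sum(image[r + i][c + j]
                       for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
            out[r][c] = image[r][c] + amount * (image[r][c] - blur)
    return out

flat = [[5] * 3 for _ in range(3)]
assert sharpen(flat) == flat          # uniform regions are unchanged
spike = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
assert sharpen(spike)[1][1] == 17.0   # local contrast is amplified
```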
After the adjustment is completed, view features can be extracted from each adjusted image.
Specifically, a preset convolutional network model may be used to extract features from each adjusted image to obtain a plurality of two-dimensional view features, wherein the above-ground attachment is a three-dimensional object and the multiple monitoring images capture it from multiple angles.
In this embodiment, the above-ground attachment is the object whose category is to be recognized in the scene currently captured by the real-time shooting device.
In this embodiment, a three-dimensional above-ground attachment is understood to mean any object that actually exists, that is, one described by actual parameters along the three axes of a three-dimensional coordinate system. In the agricultural land scenario above, it may be a planted agricultural product such as peanut, corn, rice, or tomato.
In this embodiment, the preset convolutional network model may include a feature extraction network, which may use DenseNet, ResNet, or another convolutional network structure.
As an alternative implementation, the multiple monitoring images can be captured by calling real-time shooting devices on multiple sides of the farmland.
By implementing this embodiment, the real-time shooting devices can be controlled to acquire multiple monitoring images around the target land, which avoids acquisition errors across the images, improves the accuracy of image acquisition, and further improves the accuracy of object recognition.
As an alternative embodiment, after the step of acquiring the multiple monitoring images of the target land, the method may further include:
checking the multiple monitoring images and judging whether each of them matches the preset above-ground attachments; if a monitoring image matches, the subsequent steps are executed for it, and if not, it is removed.
It should be noted that multiple preset above-ground attachments may be provided, including planted plants, crops, and related devices or tools.
In this embodiment, this further round of verification of the multiple monitoring images improves their accuracy and thereby the accuracy of object recognition.
In this embodiment, the different angles of the multiple monitoring images refer to different spatial angles.
In this embodiment, the plurality of two-dimensional view features is taken over the set of monitoring images as a whole: a single monitoring image may contain no two-dimensional view feature, one such feature, or several, which is not limited in this embodiment.
In this embodiment, the preset convolutional network model is also required, during training, to produce a class score corresponding to each of the two-dimensional view features; the class scores can be checked by a machine or an operator, which improves the accuracy of the two-dimensional view features extracted by the convolutional network model and thus its accuracy in use.
In this embodiment, the class score is a specific value corresponding to each two-dimensional view feature and is used to match the feature to an object category.
In this embodiment, the class score may be a specific score value (an actual numerical value), a percentage, or another type of data, which is not limited in this embodiment.
In this embodiment, a specific score value may be used directly for category matching, while a percentage indicates how likely the feature is to belong to a given object category. The convolutional network model is an artificial intelligence model.
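Class-score matching as described can be sketched as picking the highest-scoring category; the score dictionary and category names here are illustrative assumptions:

```python
def match_category(scores):
    """Pick the object category with the highest class score; raw values
    and percentages rank categories the same way, so both forms work."""
    return max(scores, key=scores.get)

# percentage-style class scores for one two-dimensional view feature
scores = {"peanut": 0.71, "corn": 0.22, "rice": 0.07}
assert match_category(scores) == "peanut"
```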
Then, the plurality of two-dimensional view features are combined according to a preset aggregation network model to obtain a combined feature and a classification result corresponding to the combined feature. In this embodiment, the aggregation network model is an artificial intelligence model.
In this embodiment, the combined feature refers to a registered three-dimensional feature set.
In this embodiment, the classification result may be represented in the form of a numerical value, may be represented in the form of a percentage, or may be represented in the form of a class name of an object, which is not limited in this embodiment.
In this embodiment, the above numerical value is a value used to match a category; the percentage is the proportion of coincidence with a certain class of object; and the category name is the name of a class such as table, chair, or cup.
And finally, carrying out classification decision processing on the plurality of combined features according to a preset similarity matching model to obtain a positive and negative case score result.
In this embodiment, the similarity matching model is an artificial intelligence model.
As an optional implementation manner, when the classification decision processing is performed on the plurality of combined features according to the preset similarity matching model, part of the two-dimensional view features in the plurality of combined features can be obtained;
and the classification decision processing is performed on that part of the two-dimensional view features according to the preset similarity matching model to obtain a positive and negative case score result.
By implementing this embodiment, the classification decision processing can be performed on only part of the two-dimensional view features in the plurality of combined features, thereby avoiding traversing all of the data.
In this embodiment, the partial two-dimensional view feature may refer to a part of the two-dimensional view features in the plurality of combined features.
In this embodiment, the two-dimensional view feature is the current minimum feature unit and is not further subdivided, so the phrase "part of the two-dimensional view features" introduces no ambiguity.
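A minimal sketch of selecting part of the two-dimensional view features, assuming a reproducible random subset is acceptable; the function name, fraction, and seed are illustrative assumptions.

```python
import random

def sample_features(features, fraction=0.5, seed=0):
    """Return a reproducible subset of the two-dimensional view features,
    so the classification decision need not traverse all of the data."""
    rng = random.Random(seed)
    k = max(1, int(len(features) * fraction))  # keep at least one feature
    return rng.sample(features, k)
```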
In this embodiment, the positive and negative case score result may be a positive case score or a negative case score; more specifically, it may be divided into true positive, false positive, true negative, and false negative scores.
In this embodiment, the positive and negative case score result is used to indicate whether the combined feature corresponds to the plurality of two-dimensional view features, that is, whether the combined feature is a feature of the same object; specifically, whether the combined feature is a feature of the above-ground attachment, or whether it satisfies an identifying feature of the above-ground attachment.
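The four sub-cases named above are the standard confusion-matrix outcomes; a minimal sketch (the function name is illustrative):

```python
def case_score(predicted_positive: bool, actually_positive: bool) -> str:
    """Map one verification decision to the true/false positive/negative cases."""
    if predicted_positive and actually_positive:
        return "true positive"   # combined feature really matches the object
    if predicted_positive and not actually_positive:
        return "false positive"  # claimed a match that is not there
    if not predicted_positive and not actually_positive:
        return "true negative"   # correctly rejected a non-match
    return "false negative"      # missed a real match
```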
When the positive and negative case score result is a positive case score, the category of the above-ground attachment is determined based on the classification result.
In this embodiment, a positive case score indicates that the combined feature is indeed a feature of the above-ground attachment; that is, once this verification is completed, the combined feature can be regarded as free of deviation, so the category of the object can be determined according to the classification result.
By the above method, the above-ground attachments (three-dimensional objects) can be acquired based on a plurality of images taken at a plurality of angles, and feature extraction is performed on the plurality of images with the convolutional network model to obtain a plurality of two-dimensional view features. Because the two-dimensional view features are extracted by an artificial intelligence model, the feature extraction precision can be effectively improved compared with conventional extraction methods.
After the two-dimensional view features are obtained, they are combined in the aggregation network model to obtain combined features and the classification results corresponding to them. This step combines the features through an artificial intelligence model into effective combined features, so that the subsequent steps can make a classification decision based on them; comparing more comprehensive objects improves the accuracy of identification and avoids the poor universality caused by relying on a single feature.
Finally, the two-dimensional view features and the combined features are jointly subjected to a classification decision according to the preset similarity matching model. While judging whether the two kinds of features are the same or similar, the object category corresponding to the classification result is determined, so recognition accuracy is improved through verification, and stability is improved through the use of positive and negative case scores. Therefore, this implementation can confirm object categories by combining a plurality of monitoring images with artificial intelligence models, achieving the technical effect of improving the accuracy and universality of above-ground attachment identification.
In an alternative embodiment, the determining that the above-ground attachment is the same as the object contained in the usage information may include the following sub-steps:
S121, calling a preset identification model to identify the ground attachment to obtain identification information, wherein the identification information comprises: object chromaticity values and object contours.
S122, searching a corresponding application object from the application information according to the object outline, and acquiring a chromaticity value corresponding to the application object to obtain an application chromaticity value.
And S123, if the object chromaticity value is the same as the application chromaticity value, determining that the ground attachment is the same as the object contained in the application information.
And S124, if the object chromaticity value is different from the application chromaticity value or the corresponding application object cannot be found from the application information according to the object outline, determining that the ground attachment is different from the object contained in the application information.
After the above-ground attachments are obtained, in order to further determine whether they meet the requirements, a preset recognition model can be called to recognize them, thereby obtaining recognition information, which includes: object chromaticity values and object contours.
In one embodiment, the preset identification model may use DenseNet, ResNet, or other convolutional network structures.
Then, an object with the same contour can be searched for in the usage information according to the object contour to obtain the corresponding usage object, and at the same time the chromaticity value corresponding to that usage object can be acquired to obtain the usage chromaticity value.
If the object chromaticity value is the same as the usage chromaticity value, the above-ground attachment is determined to be the same as the object contained in the usage information. Conversely, if the object chromaticity value differs from the usage chromaticity value, or no corresponding usage object can be found in the usage information according to the object contour, the above-ground attachment is determined to be different from the object contained in the usage information.
In a practical application scenario, a captured image may exhibit chromaticity deviation, for example due to brightness or shooting angle.
In a preferred embodiment, the chromaticity value interval corresponding to the usage object may be acquired to obtain a usage chromaticity interval.
If the object chromaticity value falls within the usage chromaticity interval, the above-ground attachment is determined to be the same as the object contained in the usage information. Conversely, if the object chromaticity value is not within the usage chromaticity interval, the above-ground attachment is determined to be different from the object contained in the usage information.
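A minimal sketch of the interval check described above; the function name and the example interval in the usage note are illustrative assumptions.

```python
def matches_usage(object_chroma: float, usage_interval: tuple) -> bool:
    """True when the object chromaticity value falls inside the usage
    chromaticity interval, tolerating brightness/angle deviation."""
    low, high = usage_interval
    return low <= object_chroma <= high
```

For instance, with an assumed usage interval of (0.40, 0.50), a measured chromaticity value of 0.42 matches while 0.60 does not.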
And S13, calculating the land utilization rate according to the application rules contained in the application information by adopting the characteristic data.
In an embodiment, a corresponding calculation or conversion formula may be looked up in the database according to the usage rule contained in the usage information; the feature data is then converted into a corresponding text vector, which is substituted into the formula to calculate the land utilization rate.
In an embodiment, the land utilization rate may be calculated as a weighted combination of the feature data, for example:

C = A·P_i + B·P_k

where C is the finally calculated land utilization rate; A and B are calculation constants or calculation weight values that can be adjusted according to actual needs; P_i is the number of objects; and P_k is the object density.
In one embodiment, the calculated land utilization rate may also represent the unit yield of the planted crop: the land utilization rate is higher when the number of planted crops is large and the density is high, and conversely lower when the number is small and the density is low.
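A numeric sketch of this calculation, assuming the formula is a linear weighted combination of the two feature values; the default weight values are illustrative, not taken from the disclosure.

```python
def land_utilization(num_objects: float, object_density: float,
                     a: float = 0.5, b: float = 0.5) -> float:
    """Compute C = A*P_i + B*P_k, where P_i is the number of objects and
    P_k is the object density; A and B are adjustable weights."""
    return a * num_objects + b * object_density
```

With weights a=1.0 and b=2.0, for example, 10 objects at density 4 yield a utilization value of 18.0.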
In one embodiment, the calculated land utilization rate may be lower than expected because of image deviation, or because of a user's planting violation. As an example, the method may further include the following steps:
And S14, if the land utilization rate is lower than the preset utilization rate, acquiring a current time node, and searching the historical density according to the current time node.
And S15, if the difference value between the object density and the historical density is larger than a preset value, generating and sending land planning suggestion information to a management terminal for reference of management staff.
Specifically, if the land utilization rate is lower than the preset utilization rate, the current time node is acquired, and the historical density is searched for according to the current time node. The historical density may be the density of the crop previously planted.
Then, the difference between the object density and the historical density can be calculated. If this difference is greater than the preset value, it indicates that the land utilization rate is likely lower than the preset utilization rate because of the user's planting violation; at this time, land planning suggestion information can be generated and sent to the management terminal for reference by management staff. The management terminal may be a terminal in communication with the cloud system, specifically one used by a manager.
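Steps S14 and S15 can be sketched as a single check; the function and parameter names are illustrative assumptions.

```python
def needs_planning_advice(utilization: float, preset_utilization: float,
                          object_density: float, historical_density: float,
                          preset_diff: float) -> bool:
    """True when the land utilization rate is below the preset rate AND the
    object density deviates from the historical density by more than the
    preset value, suggesting a planting violation rather than image noise."""
    if utilization >= preset_utilization:
        return False  # S14 precondition not met: no further check needed
    return abs(object_density - historical_density) > preset_diff
```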
In summary, the embodiment of the present invention provides a method for calculating a land utilization rate, which has the following beneficial effects. After the target land and the usage information of the target land are determined, the monitoring image of the target land can be obtained according to the usage information; the above-ground attachments of the target land are acquired from the monitoring image, and when the above-ground attachments are identical to the objects contained in the usage information, the feature data of the above-ground attachments in the monitoring image are acquired; and the land utilization rate is calculated from the feature data according to the usage rules contained in the usage information. Because recognition and calculation are performed on images detected in real time, the calculation flow is simplified and the processing time is shortened, improving calculation efficiency; moreover, the comprehensive calculation combining the number and the density of objects effectively improves calculation precision.
The embodiment of the invention also provides a land utilization rate calculating device, and referring to fig. 2, a schematic structural diagram of the land utilization rate calculating device is shown.
Wherein, as an example, the land utilization calculating means may include:
an image acquisition module 201, configured to acquire a monitoring image of a target land according to usage information after determining the target land and the usage information of the target land;
the feature acquisition module 202 is configured to acquire, from the monitoring image, the feature data of the above-ground attachments on the monitoring image when the above-ground attachments on the target land are acquired and determined to be the same as the objects included in the usage information, where the feature data includes: number of objects and object density;
and the utilization rate calculating module 203 is configured to calculate a land utilization rate according to a usage rule included in the usage information by using the feature data.
Optionally, the determining that the above-ground attachment is the same as the object contained in the usage information includes:
Calling a preset identification model to identify the ground attachments to obtain identification information, wherein the identification information comprises: object chromaticity values and object contours;
searching a corresponding application object from the application information according to the object outline, and acquiring a chromaticity value corresponding to the application object to obtain an application chromaticity value;
If the object chromaticity value is the same as the application chromaticity value, determining that the on-ground attachment is the same as the object contained in the application information;
and if the object chromaticity value is different from the application chromaticity value or the corresponding application object cannot be found from the application information according to the object outline, determining that the above-ground attachment is different from the object contained in the application information.
Optionally, the acquiring the attachment on the target land from the monitoring image includes:
if there is a single monitoring image, preprocessing the monitoring image to obtain a processed image;
And calling a preset CNN neural network to identify the processed image to obtain an on-ground attachment, wherein the preset CNN neural network comprises a first convolution layer, a first pooling layer, a second convolution layer, a second pooling layer, a first full-connection layer, a second full-connection layer and an output layer which are sequentially connected.
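The output size at each stage of such a conv–pool–conv–pool stack follows the standard convolution arithmetic; the input size, kernel sizes, and strides below are illustrative assumptions, not values from the disclosure.

```python
def conv_out(size: int, kernel: int, stride: int = 1, pad: int = 0) -> int:
    """Standard output-size formula for a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

# Trace an assumed 32x32 input through the four feature-extraction layers:
size = 32
size = conv_out(size, kernel=5)            # first convolution layer  -> 28
size = conv_out(size, kernel=2, stride=2)  # first pooling layer      -> 14
size = conv_out(size, kernel=5)            # second convolution layer -> 10
size = conv_out(size, kernel=2, stride=2)  # second pooling layer     -> 5
# the 5x5 feature maps are then flattened into the two fully connected
# layers, whose result feeds the output layer
```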
Optionally, the acquiring the attachment on the target land from the monitoring image includes:
if there are multiple monitoring images, performing sharpness adjustment processing on each monitoring image respectively to obtain a plurality of adjusted images;
Extracting view features from each monitoring image to obtain a plurality of two-dimensional view features, wherein the ground attachment is a three-dimensional object;
And after the two-dimensional view features are classified and combined to obtain combined features, the combined features are subjected to classification decision processing by using a preset similarity matching model, and the attachments on the ground are determined according to classification categories.
Optionally, the acquiring the monitoring image of the target land according to the usage information includes:
if the land type contained in the usage information is agricultural land, determining whether the camera of the real-time shooting device is blocked in the horizontal direction toward the target land;
if the camera of the real-time shooting device is not blocked in the horizontal direction toward the target land, invoking the camera to shoot toward the center of the target land to obtain the monitoring image;
if the camera of the real-time shooting device is blocked in the horizontal direction toward the target land, acquiring the horizontal separation distance between the camera and the center of the target land, controlling the camera to ascend according to the separation distance, and then shooting toward the center of the target land to obtain the monitoring image.
Optionally, the apparatus further comprises:
the density acquisition module is used for acquiring a current time node and searching for historical density according to the current time node if the land utilization rate is lower than a preset utilization rate;
And the suggestion sending module is used for generating and sending land planning suggestion information to the management terminal for reference of management staff if the difference value between the object density and the historical density is larger than a preset value.
The embodiment of the invention also provides a land utilization rate calculation system, and referring to fig. 3, a schematic structural diagram of the land utilization rate calculation system is shown.
Wherein, as an example, the land utilization rate calculation system may include a plurality of real-time shooting devices and a cloud system, the cloud system being adapted to perform the land utilization rate calculation method according to the above embodiment;
and the cloud system is in communication connection with each real-time shooting device respectively.
It will be clearly understood by those skilled in the art that, for convenience and brevity, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Further, an embodiment of the present application further provides an electronic device, including: the system comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the program to realize the land utilization calculation method according to the embodiment.
Further, an embodiment of the present application also provides a computer-readable storage medium storing a computer-executable program for causing a computer to execute the land utilization calculation method according to the above embodiment.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), devices and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that modifications and variations could be made by those skilled in the art without departing from the technical principles of the present invention, and such modifications and variations should also be regarded as being within the scope of the invention.