
CN117456620A - A multi-modal cow individual identification method and system incorporating action information - Google Patents

A multi-modal cow individual identification method and system incorporating action information

Info

Publication number
CN117456620A
CN117456620A (application CN202311397300.3A)
Authority
CN
China
Prior art keywords
cow
image
features
feature
video data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311397300.3A
Other languages
Chinese (zh)
Inventor
司永胜
宁泽普
王克俭
李秋凤
高艳霞
陈硕
陈厅
贾智博
刘孟浩
王恒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei Agricultural University
Original Assignee
Hebei Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei Agricultural University
Priority to CN202311397300.3A
Publication of CN117456620A
Legal status: Pending

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/70 Multimodal biometrics, e.g. combining information from different biometric modalities
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by matching or filtering
    • G06V10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451 Biologically inspired filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/72 Data preparation, e.g. statistical preprocessing of image or video features
    • G06V10/764 Classification, e.g. of video objects
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/776 Validation; Performance evaluation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion of extracted features
    • G06V10/82 Arrangements using neural networks
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/70 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in livestock or poultry

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Psychiatry (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a multi-modal dairy cow individual identification method and system incorporating action information, comprising the following steps: collecting video data of dairy cows during activity, and extracting complete cow body images of the dairy cows; converting the complete cow body images extracted from the video data into silhouette images and RGB images; preprocessing the silhouette images and RGB images to generate a data set, and dividing the data set into a training set and a testing set; constructing a deep learning network model based on the training set and the testing set, and extracting action features and pattern features through the deep learning network model; fusing the action features and the pattern features, and training a preset support vector machine based on the fused features; and performing individual cow identification on cow images acquired in real time based on the trained support vector machine model. The invention solves the problem of low individual identification accuracy for dairy cows in existing methods.

Description

Multi-modal dairy cow individual identification method and system incorporating action information
Technical Field
The invention relates to the technical field of image processing, and in particular to a multi-modal dairy cow individual identification method and system incorporating action information.
Background
Individual identification of dairy cows is the premise and basis of refined breeding, product tracing, behavior analysis, illness monitoring and body condition scoring; accurate and reliable identification of each cow is therefore of great significance.
Among traditional manual methods of individual cow identification, the ear tag is the most common. However, ear tags are frequently lost or damaged in actual production and are not suitable for long-term use. Electronic tags based on RFID technology are also common, but their recognition distance is limited, their cost is high, and their applicability is restricted. Machine vision is a popular research direction in the field of individual cow identification: without human intervention, cows are recorded with a camera, and targets in the scene are located, identified and tracked by analyzing the video. This kind of identification improves real-time performance and the degree of automation, reduces farm management costs, and lessens stress responses in the cattle. In machine-vision identification of cows, the coat pattern of each cow is distinct and therefore has a certain uniqueness, so deep learning techniques based on back or side coat patterns show strong application prospects for individual identification. However, applying these methods in a practical environment still presents many difficulties. For example, the appearance of cows can be greatly affected under different lighting or even dark conditions. In addition, for breeds whose patterns are highly similar, or that are solid-coloured with no pattern at all, it is difficult to perform individual identification with a pattern-based method. Therefore, drawing on research in the human domain, the shape change of cows while walking (i.e., action information) can be incorporated for individual identification, and such algorithms are robust to illumination changes and to cows with similar patterns. However, a single action feature only captures part of the information about a moving object, and changes in road conditions and environment can affect it negatively. To improve recognition performance, multi-feature fusion should be attempted to raise the accuracy of single-modality recognition techniques. It is therefore necessary to propose a multi-modal cow individual identification method incorporating action information.
Disclosure of Invention
The invention provides a multi-modal dairy cow individual identification method and system incorporating action information, which are used for solving the problem of low individual dairy cow identification accuracy in the prior art.
The invention provides a multi-modal dairy cow individual identification method incorporating action information, which comprises the following steps:
collecting video data of dairy cows during activity, and extracting complete cow body images of the dairy cows;
converting the complete cow body images extracted from the video data into silhouette images and RGB images;
preprocessing the silhouette images and RGB images to generate a data set, and dividing the data set into a training set and a testing set;
constructing a deep learning network model based on the training set and the testing set, and extracting action features and pattern features through the deep learning network model;
fusing the action features and the pattern features, and training a preset support vector machine based on the fused features;
and performing individual dairy cow identification on the dairy cow image acquired in real time based on the trained support vector machine model.
According to the multi-modal dairy cow individual identification method incorporating action information provided by the invention, collecting video data of dairy cows during activity and extracting complete cow body images specifically comprises:
collecting video data when the cows leave the milking parlor after milking;
fixing a camera on the inner side of the aisle at a set distance from the cow, and adjusting the focal length so that the camera field of view is about 3 times the body length of the cow and the shooting range covers at least 2 gait cycles;
and extracting the complete cow body from the acquired video data of the walking cow.
According to the multi-modal dairy cow individual identification method incorporating action information provided by the invention, converting the complete cow body images extracted from the video data into silhouette images and RGB images specifically comprises:
preprocessing the video data of walking cows into video frames from a side-view perspective;
segmenting the complete side view of the cow body contour from the video frames through a preset semantic segmentation network model;
and converting the segmented cow body into silhouette images and RGB images by modifying the semantic segmentation network model parameters.
According to the multi-modal dairy cow individual identification method incorporating action information provided by the invention, preprocessing the silhouette images and RGB images to generate a data set specifically comprises:
cropping the silhouette images and RGB images, automatically detecting and cropping edge regions in the images through an image boundary detection algorithm, and removing irrelevant or unnecessary parts of the images to obtain regions of interest;
aligning the cropped image data, finding key feature points in the images through a feature point detection and matching algorithm, and aligning them through a transformation matrix so that objects or features in the images are in the same position;
and normalizing the aligned image data by scaling pixel values to between 0 and 1 using a linear normalization method.
According to the multi-modal dairy cow individual identification method incorporating action information provided by the invention, constructing a deep learning network model based on the training set and the testing set and extracting action features and pattern features through the deep learning network model specifically comprises:
inputting the silhouette images into the deep learning network model, extracting features from the video frame sequence of silhouette images through a 3D convolutional network and a bidirectional long short-term memory network, and outputting action features in the form of probability vectors;
and inputting the RGB images into the deep learning network model, extracting deep features through a convolutional neural network, and outputting pattern features in the form of probability vectors through an output layer.
According to the multi-modal dairy cow individual identification method incorporating action information provided by the invention, fusing the action features and the pattern features and training a preset support vector machine based on the fused features specifically comprises:
fusing the action features and pattern features, both in probability-vector form, through three fusion methods: feature weighted summation, feature value summation and feature maximum;
selecting feature weighted summation as the optimal fusion strategy, with an action-feature to pattern-feature weight ratio of 4:6;
and training a preset support vector machine based on the fusion characteristics.
According to the multi-modal dairy cow individual identification method incorporating action information provided by the invention, performing individual cow identification on cow images acquired in real time based on the trained support vector machine model specifically comprises:
acquiring cow video data in real time, and inputting the video data into a trained support vector machine model;
and performing individual identification of the dairy cows through the support vector machine model.
The invention also provides a multi-modal dairy cow individual identification system incorporating action information, which comprises:
the data acquisition module is used for acquiring video data of dairy cows during activity and extracting complete cow body images of the dairy cows;
the image conversion module is used for converting the complete cow body images extracted from the video data into silhouette images and RGB images;
the preprocessing module is used for preprocessing the silhouette images and RGB images to generate a data set, and dividing the data set into a training set and a testing set;
the feature extraction module is used for constructing a deep learning network model based on the training set and the testing set, and extracting action features and pattern features through the deep learning network model;
the feature fusion module is used for carrying out feature fusion on the action features and the pattern features, and training a preset support vector machine based on the fusion features;
and the identification module is used for carrying out individual dairy cow identification on the dairy cow image acquired in real time based on the trained support vector machine model.
The invention also provides an electronic device, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the multi-modal dairy cow individual identification method incorporating action information according to any one of the above.
The invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the multi-modal dairy cow individual identification method incorporating action information according to any one of the above.
According to the multi-modal dairy cow individual identification method and system incorporating action information, video data of walking cows is collected, the complete cow body is extracted, and the data is converted into two forms, a silhouette image and an RGB image; the cow data is preprocessed by cropping, alignment and normalization; each of the two parts of data is divided into a training set and a testing set; a deep learning network model is constructed to extract the action features of the cow body while walking and the coat pattern features of the cow body; the two kinds of features are fused, and the cows are classified with a support vector machine model based on the fused features, completing individual identification. Accurate identification of cows is achieved without adversely affecting the individual animals.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a first schematic flow chart of the multi-modal dairy cow individual identification method incorporating action information provided by the invention;
FIG. 2 is a second schematic flow chart of the multi-modal dairy cow individual identification method incorporating action information provided by the invention;
FIG. 3 is a third schematic flow chart of the multi-modal dairy cow individual identification method incorporating action information provided by the invention;
FIG. 4 is a fourth schematic flow chart of the multi-modal dairy cow individual identification method incorporating action information provided by the invention;
FIG. 5 is a fifth schematic flow chart of the multi-modal dairy cow individual identification method incorporating action information provided by the invention;
FIG. 6 is a sixth schematic flow chart of the multi-modal dairy cow individual identification method incorporating action information provided by the invention;
FIG. 7 is a schematic diagram of the multi-modal dairy cow individual identification method incorporating action information provided by the invention;
FIG. 8 is a schematic diagram of the module connections of the multi-modal dairy cow individual identification system incorporating action information provided by the invention;
FIG. 9 is a schematic diagram of extracting the complete cow body and converting it into silhouette and RGB images, provided by the invention;
fig. 10 is a schematic diagram of preprocessing cow data provided by the present invention;
FIG. 11 is a schematic diagram of the deep learning network model constructed to extract body motion features and body pattern features of dairy cows during walking;
fig. 12 is a schematic diagram of feature fusion of motion features and body pattern features of a cow during walking and classification of the cow based on the fused features, provided by the invention;
fig. 13 is a schematic structural diagram of an electronic device provided by the present invention.
Reference numerals:
110: a data acquisition module; 120: an image conversion module; 130: a preprocessing module; 140: a feature extraction module; 150: a feature fusion module; 160: an identification module;
1310: a processor; 1320: a communication interface; 1330: a memory; 1340: a communication bus.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The following describes the multi-modal dairy cow individual identification method incorporating action information according to the present invention with reference to FIGS. 1 to 7, which includes:
S100, acquiring video data of dairy cows during activity, and extracting complete cow body images of the dairy cows;
S200, converting the complete cow body images extracted from the video data into silhouette images and RGB images;
S300, preprocessing the silhouette images and RGB images to generate a data set, and dividing the data set into a training set and a testing set;
s400, constructing a deep learning network model based on the training set and the testing set, and extracting action features and pattern features through the deep learning network model;
s500, carrying out feature fusion on the action features and the pattern features, and training a preset support vector machine based on the fusion features;
s600, performing individual dairy cow identification on the dairy cow image acquired in real time based on the trained support vector machine model.
In the invention, once the deep learning network model has been constructed and trained, the action features and pattern features can be extracted accurately; the support vector machine is trained after feature fusion, and individual cows are then identified accurately by the support vector machine model, which improves identification accuracy and avoids adverse effects on the individual cows.
Collecting video data of dairy cows during activity and extracting complete cow body images of the cows comprises the following steps:
S101, collecting video data when the cows leave the milking parlor after milking;
S102, fixing a camera on the inner side of the aisle at a set distance from the cow, and adjusting the focal length so that the camera field of view is approximately 3 times the body length of the cow and the shooting range covers at least 2 gait cycles;
S103, extracting the complete cow body from the acquired video data of the walking cow.
In the invention, video data is collected when the cows leave the milking parlor after milking. The camera is fixed on the inner side of the aisle about 3 meters away from the cows, and the focal length is adjusted so that the field of view is about 3 times the body length of a cow and the shooting range covers at least 2 gait cycles. This ensures that complete video data of the cow passing through is captured within 2 gait cycles and provides clear images for the subsequent steps.
Converting the complete cow body images extracted from the video data into silhouette images and RGB images specifically comprises the following steps:
S201, preprocessing the video data of walking cows into video frames from a side-view perspective;
S202, segmenting the complete side view of the cow body contour from the video frames through a preset semantic segmentation network model;
S203, converting the segmented cow body into silhouette images and RGB images by modifying the semantic segmentation network model parameters.
In the invention, the complete cow body is extracted from the acquired video data of the walking cow and converted into two data forms, a silhouette image and an RGB image, as follows: from the side-view perspective, the video of the walking cow is preprocessed into video frames, and a DeepLabV3+ semantic segmentation network model is then used to segment the complete side view of the cow body contour. Finally, the segmented cow body is converted into data in the two forms of a silhouette image and an RGB image by modifying the network parameters.
Referring to FIG. 9, the complete cow body is extracted according to the present invention and converted into the two data forms of a silhouette image and an RGB image. In the side view, the DeepLabV3+ semantic segmentation network model segments the complete cow body contour as shown in FIG. 9(a), and the network parameters are then modified to convert the segmented cow body into data in the two forms of a silhouette image and an RGB image, as shown in FIG. 9(b) and FIG. 9(c).
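A minimal sketch of this frame-to-silhouette/RGB step is shown below. It substitutes torchvision's DeepLabV3 (ResNet-50 backbone) for the DeepLabV3+ model named in the text, and assumes the network has already been fine-tuned so that class index 1 corresponds to the cow; the class index and preprocessing values are illustrative assumptions, not details from the patent.

```python
import cv2
import numpy as np
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

# Assumed: a cow/background model fine-tuned elsewhere (weights not provided here).
model = deeplabv3_resnet50(weights=None, num_classes=2).eval()
to_tensor = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def frame_to_silhouette_and_rgb(frame_bgr):
    """Return (binary silhouette image, masked RGB image) for one side-view frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        logits = model(to_tensor(rgb).unsqueeze(0))["out"]   # (1, 2, H, W)
    mask = logits.argmax(dim=1)[0].byte().numpy()            # 1 where the cow is
    silhouette = (mask * 255).astype(np.uint8)               # the "silhouette image"
    rgb_cutout = rgb * mask[..., None]                       # the "RGB image" (background zeroed)
    return silhouette, rgb_cutout
```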
Preprocessing the silhouette images and RGB images to generate a data set specifically comprises the following steps:
S301, cropping the silhouette images and RGB images, automatically detecting and cropping edge regions in the images through an image boundary detection algorithm, and removing irrelevant or unnecessary parts of the images to obtain regions of interest;
S302, aligning the cropped image data, finding key feature points in the images through a feature point detection and matching algorithm, and aligning them through a transformation matrix so that objects or features in the images are in the same position;
S303, normalizing the aligned image data by scaling pixel values to between 0 and 1 using a linear normalization method.
In the invention, the two forms of cow data are preprocessed as follows:
cropping refers to automatically detecting and cropping edge regions in an image using an image boundary detection algorithm (Canny edge detection), removing irrelevant or unnecessary parts of the image and obtaining the region of interest;
alignment refers to using the ORB feature point detection and matching algorithm to find key feature points in an image and aligning them through a transformation matrix, ensuring that objects or features in the image are in the same position and remain consistent for subsequent processing steps;
normalization refers to scaling pixel values to between 0 and 1 using a linear normalization method.
Preprocessing the two forms of cow image data makes them better suited to the subsequent training requirements. The data after cropping with the Canny edge detection algorithm, alignment with the ORB feature point detection and matching algorithm, and linear normalization is shown in FIG. 10(a) and FIG. 10(b).
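The following sketch illustrates this cropping, alignment and normalization pipeline with OpenCV. The Canny thresholds, the number of ORB keypoints and the use of a partial affine transform are illustrative assumptions rather than values given in the patent.

```python
import cv2
import numpy as np

def _gray(img):
    return img if img.ndim == 2 else cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)

def crop_to_roi(img, low=50, high=150):
    """Crop to the bounding box of Canny edges (the region of interest)."""
    edges = cv2.Canny(_gray(img), low, high)
    ys, xs = np.nonzero(edges)
    if xs.size == 0:
        return img
    return img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def align_to_reference(img, ref):
    """Estimate an affine transform from ORB keypoint matches and warp img onto ref."""
    orb = cv2.ORB_create(500)
    k1, d1 = orb.detectAndCompute(_gray(img), None)
    k2, d2 = orb.detectAndCompute(_gray(ref), None)
    if d1 is None or d2 is None:
        return img
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    matches = sorted(matches, key=lambda m: m.distance)[:50]
    if len(matches) < 3:
        return img
    src = np.float32([k1[m.queryIdx].pt for m in matches])
    dst = np.float32([k2[m.trainIdx].pt for m in matches])
    M, _ = cv2.estimateAffinePartial2D(src, dst)
    if M is None:
        return img
    return cv2.warpAffine(img, M, (ref.shape[1], ref.shape[0]))

def normalize(img):
    """Linear normalization of pixel values to [0, 1]."""
    return img.astype(np.float32) / 255.0
```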
Constructing a deep learning network model based on the training set and the testing set, and extracting action features and pattern features through the deep learning network model, specifically comprises the following steps:
S401, inputting the silhouette images into the deep learning network model, extracting features from the video frame sequence of silhouette images through a 3D convolutional network and a bidirectional long short-term memory network, and outputting action features in the form of probability vectors;
S402, inputting the RGB images into the deep learning network model, extracting deep features through a convolutional neural network, and outputting pattern features in the form of probability vectors through an output layer.
In the invention, the data set is divided into a training set and a testing set in a ratio of 7:3.
The deep learning network model constructed to extract the action features of the cow body while walking and the coat pattern features is as follows. For action feature extraction, the input silhouette data is a sequence of video frames. It first passes through a 3D convolutional network (R3D_18), a 3D version of ResNet18 in which all 2D convolution kernels, convolution layers and pooling layers are replaced with their 3D counterparts; this network is responsible for extracting features from the video. The 3D convolution operates in both the temporal and spatial dimensions, capturing the spatio-temporal feature information in the video. The convolved feature maps are fed into a bidirectional LSTM (BiLSTM) layer with 3200 hidden units, which captures the long-term and short-term dependencies in the input sequence. Finally, an output layer consisting of two fully connected layers, two ReLU layers and a SoftMax layer produces the deep features, activated by the SoftMax normalized exponential function and output in the form of a probability vector. For pattern feature extraction, the input RGB data is a single image; deep features are extracted through the convolution and pooling operations of an AlexNet convolutional neural network, and an output layer consisting of three fully connected layers, two ReLU layers and a SoftMax layer outputs them in the form of a probability vector. In the invention, the deep learning network model is thus constructed to extract the action features of the cow body while walking and the coat pattern features: a 3D CNN-BiLSTM with R3D_18 as the backbone extracts the action features, and AlexNet extracts the coat pattern features, both output as probability vectors, as shown in FIG. 11(a) and FIG. 11(b).
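A compact PyTorch sketch of the two branches described above is given below, using the torchvision r3d_18 and AlexNet backbones. The exact layout of the output heads, the clip shape and the number of classes are assumptions; the patent specifies only the backbone types, the 3200-unit BiLSTM and the layer counts of the heads.

```python
import torch
import torch.nn as nn
from torchvision.models import alexnet
from torchvision.models.video import r3d_18

class ActionBranch(nn.Module):
    """Silhouette clip -> R3D-18 spatio-temporal features -> BiLSTM -> probability vector."""
    def __init__(self, num_cows, hidden=3200):
        super().__init__()
        backbone = r3d_18(weights=None)
        # drop the global average pool and final fc; keep the convolutional trunk
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.bilstm = nn.LSTM(512, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Sequential(nn.Linear(2 * hidden, 512), nn.ReLU(),
                                  nn.Linear(512, num_cows))

    def forward(self, clips):                        # clips: (B, 3, T, H, W)
        f = self.features(clips)                     # (B, 512, T', H', W')
        f = f.mean(dim=[3, 4]).permute(0, 2, 1)      # temporal sequence: (B, T', 512)
        seq, _ = self.bilstm(f)                      # (B, T', 2*hidden)
        return torch.softmax(self.head(seq[:, -1]), dim=1)

class PatternBranch(nn.Module):
    """RGB cow image -> AlexNet deep features -> probability vector."""
    def __init__(self, num_cows):
        super().__init__()
        self.net = alexnet(weights=None)
        self.net.classifier[6] = nn.Linear(4096, num_cows)  # replace the final fc

    def forward(self, images):                       # images: (B, 3, 224, 224)
        return torch.softmax(self.net(images), dim=1)
```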
Fusing the action features and the pattern features and training a preset support vector machine based on the fused features specifically comprises the following steps:
S501, fusing the action features and pattern features, both in probability-vector form, through three fusion methods: feature weighted summation, feature value summation and feature maximum;
S502, selecting feature weighted summation as the optimal fusion strategy, with an action-feature to pattern-feature weight ratio of 4:6;
s503, training a preset support vector machine based on the fusion characteristics.
The action features of the cow body while walking and the coat pattern features are fused, and the cows are classified based on the fused features, as follows: the features in the two probability-vector forms are fused through the three fusion methods of feature weighted summation, feature value summation and feature maximum; feature weighted summation is selected as the optimal fusion strategy, with an action-feature to pattern-feature weight ratio of 4:6; an SVM support vector machine model is then trained with the fused features and used for individual identification of the cows.
In the invention, the action features and coat pattern features of the cows are fused, and the cows are classified and identified based on the fused features. FIG. 12(a) shows the CMC curves for individual cow identification using the action features, the pattern features, and the three feature fusion strategies (feature value summation, feature maximum and feature weighted summation); the optimal fusion strategy is feature weighted summation with an action-feature to pattern-feature weight ratio of 4:6. The SVM model is trained with the fused probability vectors (i.e., the fusion features) and used for individual identification of the cows; its confusion matrix is shown in FIG. 12(b).
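A minimal sketch of the weighted-sum fusion and SVM training step with scikit-learn is shown below. The dummy probability vectors, the RBF kernel and the stratified 7:3 split are assumptions used only to make the example runnable; in practice the vectors come from the two branches above.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def fuse(action_probs, pattern_probs, w_action=0.4, w_pattern=0.6):
    """Feature weighted summation of the two probability vectors (ratio 4:6)."""
    return w_action * action_probs + w_pattern * pattern_probs

# Stand-in data so the sketch runs; real inputs are the branch outputs per sample.
rng = np.random.default_rng(0)
num_cows, n_samples = 10, 200
action_probs = rng.dirichlet(np.ones(num_cows), n_samples)
pattern_probs = rng.dirichlet(np.ones(num_cows), n_samples)
labels = rng.integers(0, num_cows, n_samples)

fused = fuse(action_probs, pattern_probs)
X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.3, stratify=labels, random_state=0)  # 7:3 split
svm = SVC(kernel="rbf", probability=True).fit(X_train, y_train)
print("test accuracy:", svm.score(X_test, y_test))
```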
Based on the trained support vector machine model, the method for identifying the dairy cow individuals in the dairy cow image acquired in real time specifically comprises the following steps:
s601, acquiring cow video data in real time, and inputting the video data into a trained support vector machine model;
s602, performing individual identification of the dairy cows through the support vector machine model.
Through the multi-modal dairy cow individual identification method incorporating action information provided by the invention, video data of walking cows is collected, the complete cow body is extracted, and the data is converted into two forms, a silhouette image and an RGB image; the cow data is preprocessed by cropping, alignment and normalization; each of the two parts of data is divided into a training set and a testing set; a deep learning network model is constructed to extract the action features of the cow body while walking and the coat pattern features; the two kinds of features are fused, and the cows are classified with a support vector machine model based on the fused features, completing individual identification. Accurate identification of cows is achieved without adversely affecting the individual animals.
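Tying the pieces together, the sketch below shows how one incoming walking clip could be identified at inference time. It reuses the frame_to_silhouette_and_rgb, ActionBranch, PatternBranch and SVM objects sketched earlier, which are assumed to be trained and loaded; the clip length, frame sizes and use of the middle frame for the pattern branch are illustrative choices, not values from the patent.

```python
import cv2
import numpy as np
import torch

def identify_cow(video_path, action_branch, pattern_branch, svm, clip_len=16):
    """Return the predicted cow ID for one side-view walking video."""
    cap = cv2.VideoCapture(video_path)
    silhouettes, rgb_cutouts = [], []
    while len(silhouettes) < clip_len:
        ok, frame = cap.read()
        if not ok:
            break
        sil, rgb = frame_to_silhouette_and_rgb(frame)        # segmentation step above
        silhouettes.append(cv2.resize(sil, (112, 112)))
        rgb_cutouts.append(cv2.resize(rgb, (224, 224)))
    cap.release()

    clip = torch.from_numpy(np.stack(silhouettes)).float().div(255)   # (T, H, W)
    clip = clip.unsqueeze(0).expand(3, -1, -1, -1).unsqueeze(0)       # (1, 3, T, H, W)
    mid = rgb_cutouts[len(rgb_cutouts) // 2]                          # one representative RGB frame
    img = torch.from_numpy(mid).float().div(255).permute(2, 0, 1).unsqueeze(0)

    with torch.no_grad():
        fused = 0.4 * action_branch(clip) + 0.6 * pattern_branch(img) # weighted-sum fusion
    return svm.predict(fused.numpy())[0]
```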
Referring to FIG. 8, the invention also discloses a multi-modal dairy cow individual identification system incorporating action information, which comprises:
the data acquisition module 110 is used for acquiring video data of dairy cows during activity and extracting complete cow body images of the dairy cows;
the image conversion module 120 is configured to convert the complete cow body images extracted from the video data into silhouette images and RGB images;
the preprocessing module 130 is configured to preprocess the silhouette images and RGB images to generate a data set, and divide the data set into a training set and a testing set;
the feature extraction module 140 is configured to construct a deep learning network model based on the training set and the test set, and perform action feature extraction and pattern feature extraction through the deep learning network model;
the feature fusion module 150 is configured to perform feature fusion on the motion feature and the pattern feature, and train a preset support vector machine based on the fusion feature;
and the identification module 160 is used for carrying out individual dairy cow identification on the dairy cow images acquired in real time based on the trained support vector machine model.
The data acquisition module is used for collecting video data when the cows leave the milking parlor after milking;
fixing a camera on the inner side of the aisle at a set distance from the cow, and adjusting the focal length so that the camera field of view is about 3 times the body length of the cow and the shooting range covers at least 2 gait cycles;
and extracting the complete cow body from the acquired video data of the walking cow.
The image conversion module is used for preprocessing the video data of walking cows into video frames from a side-view perspective;
segmenting the complete side view of the cow body contour from the video frames through a preset semantic segmentation network model;
and converting the segmented cow body into silhouette images and RGB images by modifying the semantic segmentation network model parameters.
The preprocessing module is used for cropping the silhouette images and RGB images, automatically detecting and cropping edge regions in the images through an image boundary detection algorithm, and removing irrelevant or unnecessary parts of the images to obtain regions of interest;
aligning the cropped image data, finding key feature points in the images through a feature point detection and matching algorithm, and aligning them through a transformation matrix so that objects or features in the images are in the same position;
and normalizing the aligned image data by scaling pixel values to between 0 and 1 using a linear normalization method.
The feature extraction module inputs the silhouette images into the deep learning network model, performs feature extraction on the video frame sequence of silhouette images through a 3D convolutional network and a bidirectional long short-term memory network, and outputs action features in the form of probability vectors;
and inputting the RGB image into a deep learning network model, extracting deep features through a convolutional neural network, and outputting pattern features in the form of probability vectors through an output layer.
The feature fusion module is used for fusing the action features and pattern features, both in probability-vector form, through three fusion methods: feature weighted summation, feature value summation and feature maximum;
selecting feature weighted summation as the optimal fusion strategy, with an action-feature to pattern-feature weight ratio of 4:6;
and training a preset support vector machine based on the fusion characteristics.
The recognition module is used for acquiring the video data of the dairy cows in real time and inputting the video data into the trained support vector machine model;
and performing individual identification of the dairy cows through the support vector machine model.
The multi-modal dairy cow individual identification system incorporating action information provided by the invention collects video data of walking cows, extracts the complete cow body, and converts the video data into two forms, a silhouette image and an RGB image; the cow data is preprocessed by cropping, alignment and normalization; each of the two parts of data is divided into a training set and a testing set; a deep learning network model is constructed to extract the action features of the cow body while walking and the coat pattern features; the two kinds of features are fused, and the cows are classified with a support vector machine model based on the fused features, completing individual identification. Accurate identification of cows is achieved without adversely affecting the individual animals.
Fig. 13 illustrates a physical structure diagram of an electronic device, as shown in fig. 13, which may include: processor 1310, communication interface (Communications Interface) 1320, memory 1330 and communication bus 1340, wherein processor 1310, communication interface 1320, memory 1330 communicate with each other via communication bus 1340. Processor 1310 may invoke logic instructions in memory 1330 to perform a multi-modal individual identification method for dairy cows incorporating motion information, the method comprising: collecting video data of dairy cows during activity, and extracting complete cow body images of the dairy cows;
converting the complete cow body images extracted from the video data into silhouette images and RGB images;
preprocessing the silhouette images and RGB images to generate a data set, and dividing the data set into a training set and a testing set;
constructing a deep learning network model based on the training set and the testing set, and extracting action features and pattern features through the deep learning network model;
fusing the action features and the pattern features, and training a preset support vector machine based on the fused features;
and performing individual dairy cow identification on the dairy cow image acquired in real time based on the trained support vector machine model.
Further, the logic instructions in the memory 1330 can be implemented in the form of software functional units and can be stored in a computer readable storage medium when sold or used as a stand alone product. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product, the computer program product comprising a computer program storable on a non-transitory computer-readable storage medium, the computer program, when executed by a processor, being capable of executing the multi-modal dairy cow individual identification method incorporating action information provided by the above methods, the method comprising: collecting video data of dairy cows during activity, and extracting complete cow body images of the dairy cows;
converting the complete cow body images extracted from the video data into silhouette images and RGB images;
preprocessing the silhouette images and RGB images to generate a data set, and dividing the data set into a training set and a testing set;
constructing a deep learning network model based on the training set and the testing set, and extracting action features and pattern features through the deep learning network model;
fusing the action features and the pattern features, and training a preset support vector machine based on the fused features;
and performing individual dairy cow identification on the dairy cow image acquired in real time based on the trained support vector machine model.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the multi-modal dairy cow individual identification method incorporating action information provided by the above methods, the method comprising: collecting video data of dairy cows during activity, and extracting complete cow body images of the dairy cows;
converting the complete cow body images extracted from the video data into silhouette images and RGB images;
preprocessing the silhouette images and RGB images to generate a data set, and dividing the data set into a training set and a testing set;
constructing a deep learning network model based on the training set and the testing set, and extracting action features and pattern features through the deep learning network model;
fusing the action features and the pattern features, and training a preset support vector machine based on the fused features;
and performing individual dairy cow identification on the dairy cow image acquired in real time based on the trained support vector machine model.
The apparatus embodiments described above are merely illustrative, wherein the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1.一种融入动作信息的多模态奶牛个体识别方法,其特征在于,包括:1. A multi-modal cow individual identification method incorporating action information, which is characterized by including: 采集奶牛活动时的视频数据,提取奶牛完整牛体图像;Collect video data of cows during their activities and extract complete body images of cows; 将所述视频数据中提取的完整牛体图像转换为剪影图和RGB图片;Convert the complete cow body image extracted from the video data into silhouette images and RGB images; 对所述剪影图和RGB图片进行预处理生成数据集,将所述数据集划分为训练集和测试集;Preprocess the silhouette images and RGB images to generate a data set, and divide the data set into a training set and a test set; 基于所述训练集和测试集构建深度学习网络模型,通过所述深度学习网络模型进行动作特征提取和花纹特征提取;Construct a deep learning network model based on the training set and test set, and perform action feature extraction and pattern feature extraction through the deep learning network model; 将所述动作特征和花纹特征进行特征融合,基于融合特征对预设的支持向量机进行训练;Perform feature fusion on the action features and pattern features, and train the preset support vector machine based on the fused features; 基于训练后的支持向量机模型对实时采集的奶牛图像进行奶牛个体识别。Based on the trained support vector machine model, individual cows are identified on real-time collected cow images. 2.根据权利要求1所述的融入动作信息的多模态奶牛个体识别方法,其特征在于,所述采集奶牛活动时的视频数据,提取奶牛完整牛体图像,具体包括:2. The multi-modal cow individual identification method incorporating action information according to claim 1, characterized in that the video data collected during cow activity and the complete body image of the cow are extracted, specifically including: 选择在奶牛挤奶完成离开挤奶厅时进行视频数据采集;Choose to collect video data when the cows leave the milking parlor after milking; 相机固定安装在过道内侧,距离奶牛设定距离,调整相机焦距使得整个相机视野大约为奶牛身体长度的3倍,摄像范围至少包含2个步态周期;The camera is fixedly installed on the inside of the aisle at a set distance from the cow. The camera focus is adjusted so that the entire camera field of view is approximately 3 times the length of the cow's body. The camera range includes at least 2 gait cycles; 通过采集到的奶牛行走的视频数据,提取奶牛完整牛体。Through the collected video data of cow walking, the complete body of the cow is extracted. 3.根据权利要求1所述的融入动作信息的多模态奶牛个体识别方法,其特征在于,将所述视频数据中提取的完整牛体图像转换为剪影图和RGB图片,具体包括:3. The multi-modal cow individual identification method incorporating action information according to claim 1, characterized by converting the complete cow body image extracted from the video data into a silhouette image and an RGB image, specifically including: 在侧视图视角下,将行走奶牛的视频数据预处理成视频帧;From the side view perspective, video data of walking cows are preprocessed into video frames; 基于视频帧通过预设的语义分割网络模型分割出完整的奶牛身体轮廓侧视图;Based on the video frame, the complete cow body profile side view is segmented through the preset semantic segmentation network model; 通过修改语义分割网络模型参数,将分割出来的牛体转换成剪影图和RGB图片。By modifying the semantic segmentation network model parameters, the segmented cow body is converted into silhouette images and RGB images. 4.根据权利要求1所述的融入动作信息的多模态奶牛个体识别方法,其特征在于,对所述剪影图和RGB图片进行预处理生成数据集,具体包括:4. 
The multi-modal cow individual identification method incorporating action information according to claim 1, characterized in that the silhouette image and RGB image are preprocessed to generate a data set, which specifically includes: 对所述剪影图和RGB图片进行裁剪,通过图像边界检测算法自动检测并裁剪图像中的边缘区域,去除图像中不相关或不需要的部分,获得感兴趣区域;Crop the silhouette image and RGB image, automatically detect and crop the edge area in the image through the image boundary detection algorithm, remove irrelevant or unnecessary parts of the image, and obtain the area of interest; 基于裁剪后的图像数据进行对齐,通过特征点检测和匹配算法找到图像中的关键特征点,并通过变换矩阵进行对齐,使图像中的对象或特征在相同位置上;Align based on the cropped image data, find the key feature points in the image through feature point detection and matching algorithms, and align through the transformation matrix so that the objects or features in the image are at the same position; 基于对齐后的图像数据进行归一化,使用线性归一化方法,将像素值缩放到0到1之间进行标准化处理。Normalization is performed based on the aligned image data, and the linear normalization method is used to scale the pixel values to between 0 and 1 for normalization. 5.根据权利要求1所述的融入动作信息的多模态奶牛个体识别方法,其特征在于,基于所述训练集和测试集构建深度学习网络模型,通过所述深度学习网络模型进行动作特征提取和花纹特征提取,具体包括:5. The multi-modal cow individual identification method incorporating action information according to claim 1, characterized in that a deep learning network model is constructed based on the training set and the test set, and action feature extraction is performed through the deep learning network model. and pattern feature extraction, specifically including: 将所述剪影图输入至深度学习网络模型,将剪影图的视频帧序列经过3D卷积网络和双向长短期记忆网络进行特征提取,输出概率向量形式的动作特征;Input the silhouette image into the deep learning network model, perform feature extraction on the video frame sequence of the silhouette image through a 3D convolution network and a bidirectional long short-term memory network, and output action features in the form of probability vectors; 将所述RGB图输入至深度学习网络模型,经过卷积神经网络提取深层特征,经过输出层输出概率向量形式的花纹特征。The RGB image is input into the deep learning network model, deep features are extracted through the convolutional neural network, and pattern features in the form of probability vectors are output through the output layer. 6.根据权利要求1所述的融入动作信息的多模态奶牛个体识别方法,其特征在于,将所述动作特征和花纹特征进行特征融合,基于融合特征对预设的支持向量机进行训练,具体包括:6. The multi-modal cow individual identification method incorporating action information according to claim 1, characterized in that the action features and pattern features are feature fused, and a preset support vector machine is trained based on the fusion features. Specifically include: 将所述动作特征和花纹特征两种概率向量形式的特征通过特征加权求和、特征值求和以及特征最大值三种融合方法进行特征融合;The features in the form of probability vectors of the action features and pattern features are feature fused through three fusion methods: feature weighted summation, feature value summation and feature maximum value; 选取最佳融合策略为特征加权求和,动作特征和花纹特征权重比例4:6;The best fusion strategy is selected as weighted summation of features, and the weight ratio of action features and pattern features is 4:6; 基于融合特征对预设的支持向量机进行训练。The preset support vector machine is trained based on the fused features. 7.根据权利要求1所述的融入动作信息的多模态奶牛个体识别方法,其特征在于,所述基于训练后的支持向量机模型对实时采集的奶牛图像进行奶牛个体识别,具体包括:7. 
8. A multi-modal cow individual identification system incorporating action information, characterized in that the system comprises:
a data acquisition module, configured to collect video data of cows during activity and extract complete cow-body images;
an image conversion module, configured to convert the complete cow-body images extracted from the video data into silhouette images and RGB images;
a preprocessing module, configured to preprocess the silhouette images and RGB images to generate a data set, and divide the data set into a training set and a test set;
a feature extraction module, configured to construct a deep learning network model based on the training set and the test set, and perform action feature extraction and pattern feature extraction through the deep learning network model;
a feature fusion module, configured to fuse the action features and the pattern features, and train a preset support vector machine based on the fused features;
an identification module, configured to perform individual cow identification on cow images collected in real time based on the trained support vector machine model.

9. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that, when the processor executes the computer program, it implements the multi-modal cow individual identification method incorporating action information according to any one of claims 1 to 6.

10. A non-transitory computer-readable storage medium having a computer program stored thereon, characterized in that, when the computer program is executed by a processor, it implements the multi-modal cow individual identification method incorporating action information according to any one of claims 1 to 6.
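Finally, a minimal NumPy/scikit-learn sketch of the fusion and classification stage of claims 6 and 7. The 0.4/0.6 weights mirror the 4:6 ratio stated in claim 6, while the random probability vectors, the number of identities, and the RBF kernel are placeholders, since the claims only refer to a "preset support vector machine".

```python
import numpy as np
from sklearn.svm import SVC

def fuse(p_action, p_pattern, weights=(0.4, 0.6)):
    # Feature weighted summation of the two probability vectors (4:6 ratio);
    # element-wise summation and element-wise maximum are the two alternative
    # strategies mentioned in claim 6.
    return weights[0] * p_action + weights[1] * p_pattern

# Hypothetical training data: one probability vector per branch per sample.
rng = np.random.default_rng(0)
p_action_train = rng.dirichlet(np.ones(20), size=200)   # (samples, num_ids)
p_pattern_train = rng.dirichlet(np.ones(20), size=200)
labels = rng.integers(0, 20, size=200)                   # cow identity labels

svm = SVC(kernel="rbf")                                  # the "preset" SVM; kernel is an assumption
svm.fit(fuse(p_action_train, p_pattern_train), labels)

# Real-time identification (claim 7): fuse the branch outputs for a new clip
# and let the trained SVM return the predicted cow identity.
new_fused = fuse(p_action_train[:1], p_pattern_train[:1])
print(svm.predict(new_fused))
```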
CN202311397300.3A 2023-10-25 2023-10-25 A multi-modal cow individual identification method and system incorporating action information Pending CN117456620A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311397300.3A CN117456620A (en) 2023-10-25 2023-10-25 A multi-modal cow individual identification method and system incorporating action information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311397300.3A CN117456620A (en) 2023-10-25 2023-10-25 A multi-modal cow individual identification method and system incorporating action information

Publications (1)

Publication Number Publication Date
CN117456620A true CN117456620A (en) 2024-01-26

Family

ID=89579264

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311397300.3A Pending CN117456620A (en) 2023-10-25 2023-10-25 A multi-modal cow individual identification method and system incorporating action information

Country Status (1)

Country Link
CN (1) CN117456620A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119049118A (en) * 2024-07-09 2024-11-29 中国农业科学院北京畜牧兽医研究所 Cow lameness warning method and related equipment
CN119049118B (en) * 2024-07-09 2025-04-22 中国农业科学院北京畜牧兽医研究所 Milk cow lameness early warning method and related equipment

Similar Documents

Publication Publication Date Title
US10282589B2 (en) Method and system for detection and classification of cells using convolutional neural networks
Zin et al. Image technology based cow identification system using deep learning
CN108520226B (en) Pedestrian re-identification method based on body decomposition and significance detection
WO2021047232A1 (en) Interaction behavior recognition method, apparatus, computer device, and storage medium
SE1930281A1 (en) Method for calculating deviation relations of a population
CN107844797A (en) A kind of method of the milking sow posture automatic identification based on depth image
Ghazal et al. Automated framework for accurate segmentation of leaf images for plant health assessment
CN105718873A (en) People stream analysis method based on binocular vision
CN111639629A (en) Pig weight measuring method and device based on image processing and storage medium
CN112784712B (en) Missing child early warning implementation method and device based on real-time monitoring
CN107766864B (en) Method and device for extracting features and method and device for object recognition
Farhood et al. Recent advances of image processing techniques in agriculture
CN105260750A (en) Dairy cow identification method and system
CN112257730A (en) Plant pest image recognition method, device, equipment and storage medium
CN119168425B (en) Method and system for predicting feed intake of periparturient dairy cows in pasture breeding scenarios
CN116543386A (en) Agricultural pest image identification method based on convolutional neural network
Xiang et al. Measuring stem diameter of sorghum plants in the field using a high-throughput stereo vision system
CN117218534A (en) Crop leaf disease identification method
CN111079617B (en) Poultry identification method and device, readable storage medium and electronic equipment
CN119168955A (en) Training-free defect detection method and defect detection equipment based on multi-scale mask
CN117456620A (en) A multi-modal cow individual identification method and system incorporating action information
Hou et al. Detection and localization of citrus picking points based on binocular vision
Nair et al. Fungus Detection and Identification using Computer Vision Techniques and Convolution Neural Networks
Li et al. Body Condition Scoring of Dairy Cows Based on Feature Point Location
CN110363240A (en) A kind of medical image classification method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination