CN110162648B - Picture processing method, device and recording medium - Google Patents
Picture processing method, device and recording medium
- Publication number
- CN110162648B (application CN201910425363.2A / CN201910425363A)
- Authority
- CN
- China
- Prior art keywords
- picture
- classifying
- type
- determine whether
- processing method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/55—Clustering; Classification
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The present disclosure relates to a picture processing method, apparatus, and recording medium. According to one embodiment of the present disclosure, the picture processing method includes: classifying a picture based on the picture itself to determine whether the picture is of a single-face picture type; in the case where the picture is determined to be of the single-face picture type, classifying the picture based on the picture itself to determine whether it is of a non-expression-package picture type; and in the case where the picture is determined to be of the non-expression-package picture type, classifying the picture based on text associated with the picture to determine whether it is of a snapshot picture type. The solution of the present disclosure can achieve at least one of the following effects: improved accuracy in classifying snapshot pictures and public-figure pictures, improved picture classification efficiency, and improved accuracy in pushing pictures to users.
Description
Technical Field
The present disclosure relates generally to picture processing and, more particularly, to a picture processing method, a picture processing apparatus, and a computer-readable recording medium storing a program implementing the picture processing method, all of which relate to picture classification.
Background
In recent years, with the popularization of digital cameras, video cameras, and smartphones, the number of pictures accessible to the public on the Internet has been increasing. How to find the pictures a user is interested in among such a large number of pictures is an important research direction.
Disclosure of Invention
A brief summary of the disclosure is presented below to provide a basic understanding of some aspects of the disclosure. It should be understood that this summary is not an exhaustive overview of the disclosure; it is not intended to identify key or critical elements of the disclosure or to delineate its scope. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that follows.
Users' preferences for different types of pictures differ: for example, some users are interested in snapshot pictures of ordinary people, while others are interested in pictures of public figures such as stars. It is therefore desirable to categorize pictures so that they can be selectively provided to users.
According to an aspect of the present disclosure, there is provided a picture processing method, including: classifying a picture based on the picture to determine whether the picture is of a single-face picture type; in the case where the picture is determined to be of the single-face picture type, classifying the picture based on the picture to determine whether the picture is of a non-expression-package picture type; and in the case where the picture is determined to be of the non-expression-package picture type, classifying the picture based on text associated with the picture to determine whether the picture is of a snapshot picture type.
According to an aspect of the present disclosure, there is provided a picture processing method, including: classifying a picture based on a selected face region in the picture to determine whether the picture is of a non-expression-package picture type; and, in the case where the picture is determined to be of the non-expression-package picture type, classifying the picture based on text associated with the picture to determine whether the picture is of a snapshot picture type; wherein the number of faces shown in the selected face region is one.
According to another aspect of the present disclosure, there is provided a picture processing apparatus including: a first classifying unit configured to classify a picture based on the picture to determine whether the picture is of a single-face picture type; a second classifying unit configured to classify the picture based on the picture to determine whether the picture is of a non-expression-package picture type; a third classifying unit configured to classify the picture based on text associated with the picture to determine whether the picture is of a snapshot picture type; and a control unit configured to: instruct the second classifying unit to classify the picture in the case where the first classifying unit determines that the picture is of the single-face picture type, and instruct the third classifying unit to classify the picture in the case where the second classifying unit determines that the picture is of the non-expression-package picture type.
According to still another aspect of the present disclosure, there is provided a computer-readable recording medium storing a program that causes a computer to execute the foregoing picture processing method.
The picture processing method, picture processing device, and recording medium of the present disclosure can achieve at least one of the following effects: improved accuracy in classifying snapshot pictures and public-figure pictures, improved picture classification efficiency, and improved accuracy in pushing pictures to users.
Drawings
The above and other objects, features and advantages of the present disclosure will be more readily appreciated by referring to the following description of the embodiments of the present disclosure with reference to the accompanying drawings. The drawings are only for the purpose of illustrating the principles of the present disclosure. The dimensions and relative positioning of the elements in the figures are not necessarily drawn to scale. In the drawings:
FIG. 1 illustrates an exemplary flow chart of a picture processing method according to one embodiment of the present disclosure;
FIG. 2 illustrates an example of content according to one embodiment of the present disclosure;
FIG. 3 illustrates an exemplary flow chart of a picture processing method according to one embodiment of the present disclosure;
FIG. 4 illustrates an exemplary flow chart of a picture processing method according to one embodiment of the present disclosure;
FIG. 5 is an exemplary block diagram of a picture processing device according to one embodiment of the present disclosure;
FIG. 6 is an exemplary block diagram of a picture processing device according to one embodiment of the present disclosure; and
Fig. 7 is an exemplary block diagram of a picture processing device according to one embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure will be described hereinafter with reference to the accompanying drawings. In the interest of clarity and conciseness, not all features of an actual embodiment are described in the specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions may be made to achieve the developers' specific goals, and that these decisions may vary from one implementation to another.
It should be noted here that, in order to avoid obscuring the present disclosure with unnecessary details, only the device structures closely related to the scheme of the present disclosure are shown in the drawings, while other details not closely related to the present disclosure are omitted.
It is to be understood that the present disclosure is not limited to the embodiments described below with reference to the drawings. Where possible, embodiments may be combined with each other, features may be replaced or borrowed between different embodiments, and one or more features may be omitted from an embodiment.
The present disclosure relates to the classification of pictures. The picture types involved include: the face picture type, the expression-package (meme) picture type, the snapshot picture type, and the public-figure picture type.
A picture processing method according to an embodiment of the present disclosure is described below with reference to fig. 1.
Fig. 1 illustrates an exemplary flowchart of a picture processing method 100 according to one embodiment of the present disclosure. The picture processing method 100 may, for example, classify pictures in content published by users on a network. Fig. 2 illustrates an example of content 200 according to one embodiment of the present disclosure. Content 200 includes a picture 211 and text 213 associated with picture 211. Note that the face in picture 211 is only schematic; the face pictures actually being classified are usually pictures of real persons. The association between text 213 and picture 211 may mean that picture 211 and text 213 are components of the same content. "The same content" may refer to: content under the same topic title in a social network (including topic-initiating content and topic-reply content), topic-initiating content alone (e.g., the content of the post that starts a topic), or topic-reply content alone (e.g., the content of a reply post). "The same content" may also mean that the text and the picture come from the same article, the same comment, or the same reply. Content 200 may also include other information, such as audio or video. Text 213 may include a plurality of characters; it includes a title (if any) and a body. Content 200 may include a plurality of pictures. Content 200 published by a user includes, but is not limited to: articles, short comments, posts, answers to questions posed by others, and the like. Content 200 is, for example, an article, comment, question, answer, or post published by a user in a web community (e.g., a web forum).
For example, suppose a user Ui publishes a topic titled Tj in a web community, the topic contains a picture and an originating text, and there are n replies to the topic (reply posts containing reply text). The text associated with the picture may then refer to the originating text, the reply texts, or a combination thereof; in the combined case, any title within a reply text is preferably disregarded when building the text associated with the picture.
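The association between a picture and its text described above can be sketched in Python. The class and function names here (`Reply`, `TopicContent`, `text_associated_with_picture`) are illustrative assumptions, not part of the disclosure:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Reply:
    title: str   # titles of reply texts are disregarded below
    body: str

@dataclass
class TopicContent:
    origin_text: str
    replies: List[Reply] = field(default_factory=list)

def text_associated_with_picture(content: TopicContent) -> str:
    # Combine the originating text with the bodies of all replies;
    # titles inside reply texts are disregarded, as described above.
    parts = [content.origin_text] + [r.body for r in content.replies]
    return "\n".join(parts)
```

The combined case simply concatenates the originating text with each reply body, dropping reply titles.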
The picture processing method may perform the following processing for each picture in the picture set to be classified.
Returning to Fig. 1, at step 103 it is determined, based on the picture, whether it is a single-face picture. Specifically, the picture is classified based on the picture itself to determine whether it is of the single-face picture type. A picture of the single-face picture type satisfies the following condition: the number of faces shown in the picture is 1. In the case where it is determined that the picture is a non-single-face picture, the picture processing method 100 ends; optionally, a type tag indicating that the picture is a non-single-face picture may also be added to the picture. The classification in step 103 may be implemented using a first classifier.
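Step 103 can be sketched as follows. The detector callable is an assumption of this sketch; in practice it could be, e.g., `face_recognition.face_locations`, but any detector that returns a list of face bounding boxes fits:

```python
def is_single_face(image, detect_faces) -> bool:
    """Step 103 sketch: a picture is of the single-face picture type iff
    exactly one face is shown in it.

    `detect_faces` is any callable returning a list of face bounding boxes
    for the image (e.g. face_recognition.face_locations -- an assumption;
    any detector with that shape works).
    """
    return len(detect_faces(image)) == 1
```

Zero faces and two-or-more faces both fall into the non-single-face branch, matching the "no" path of step 103.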
In the case where it is determined that the picture is of the single-face picture type, the picture processing method 100 proceeds to step 105. In step 105, it is determined, based on the picture, whether the picture is a non-expression-package picture. Specifically, the picture is classified based on the picture itself to determine whether it is of the non-expression-package picture type. In the case where it is determined that the picture is of the expression-package picture type, the picture processing method 100 ends; optionally, a type tag indicating that the picture is an expression-package picture may also be added to the picture. The classification in step 105 may be implemented using a second classifier.
In the case where it is determined that the picture is a non-expression-package picture, the picture processing method 100 proceeds to step 107. In step 107, it is determined, based on the text, whether the picture is a snapshot picture. Specifically, the picture is classified based on text associated with the picture to determine whether it is of the snapshot picture type. When the determination result is "yes", the picture processing method 100 proceeds to step 109. At step 109, the picture is classified as the snapshot type. Optionally, a snapshot type tag, indicating that the picture is a snapshot picture, may be added to the picture; or the picture may be put into a snapshot picture set. Content containing snapshot pictures can then be selectively pushed to end users based on the snapshot type tag or the snapshot picture set: when an end user is more interested in content containing snapshot pictures, such content can be pushed to that user with higher probability, thereby improving content-push accuracy. When the determination result is "no", the picture processing method 100 ends. Optionally, a type tag indicating that the picture is a non-snapshot picture may also be added to the picture. Statistics show that a non-snapshot picture is typically a picture of a public figure such as a star or celebrity, and most often a picture of a star. Thus, when it is determined that a user is more interested in stars, pictures in the non-snapshot picture set may be selected for that user. The classification in step 107 may be implemented using a third classifier.
In this embodiment, step 109 is optional, i.e., the picture processing method may omit step 109. For example, in the case where the determination result of step 107 is "yes", a snapshot type tag may be added to the picture, or the picture may be put into a snapshot picture set, or the picture may be pushed to a predetermined user.
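The cascade of steps 103, 105, 107, and 109 can be sketched as a chain of three predicates with early exit. The predicate names and return labels are illustrative assumptions standing in for the first, second, and third classifiers:

```python
def classify_picture(picture, text,
                     is_single_face, is_expression_package, is_snapshot):
    """Sketch of picture processing method 100: three classifiers applied
    as a cascade with early exit. All names here are illustrative."""
    if not is_single_face(picture):
        return "non_single_face"        # step 103 "no": method ends
    if is_expression_package(picture):
        return "expression_package"     # step 105 "yes": method ends
    if is_snapshot(text):
        return "snapshot"               # step 107 "yes" -> step 109
    return "non_snapshot"               # typically a public-figure picture
```

The early exits are what give the method its efficiency: the text-based third classifier only ever runs on single-face, non-expression-package pictures.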
In some cases it is desirable to add a corresponding type tag to the picture for each determination result. To this end, the inventors also conceived a picture processing method 300.
Fig. 3 illustrates an exemplary flowchart of a picture processing method 300 according to one embodiment of the present disclosure. The picture processing method 300 differs from the picture processing method 100 in that step 109 is replaced with step 309, and in that when the determination result of any of determination steps 103, 105, and 107 is "no", the picture processing method 300 also proceeds to step 309. In step 309, a type tag is added to the picture according to each determination result. Specifically: in the case where it is determined that the picture is not a single-face picture, a type tag indicating that the picture is a non-single-face picture is added to the picture; in the case where it is determined that the picture is an expression-package picture, a type tag indicating that the picture is an expression-package picture is added; in the case where it is determined that the picture is not a snapshot picture, a type tag indicating that the picture is a non-snapshot picture is added; and in the case where it is determined that the picture is a snapshot picture, a type tag indicating that the picture is a snapshot picture, i.e., a snapshot type tag, is added.
A picture to be processed may contain a plurality of faces. To handle this case, the inventors also conceived a picture processing method 400.
Fig. 4 illustrates an exemplary flowchart of a picture processing method 400 according to one embodiment of the present disclosure. The picture processing method 400 may classify pictures in content published by users on a network.
In step 405, it is determined, based on the selected face region, whether the picture is a non-expression-package picture. Specifically, the picture is classified based on the selected face region in the picture to determine whether the picture is of the non-expression-package picture type. In the case where it is determined that the picture is an expression-package picture, the picture processing method 400 ends; optionally, a type tag indicating that the picture is an expression-package picture may also be added to the picture. In the picture processing method 400, the selected face region shows a single face, i.e., the selected face region satisfies the following condition: the number of faces contained in the region is 1. It can be understood that for a picture determined by a picture classifier to contain no face, a "non-face picture" type tag can be added; for a picture determined by a picture classifier to contain a face, a face region may be selected according to a predetermined rule, for example, the face region with the largest area in the picture (typically the region of the face of the person closest to the lens). That is, if there is only 1 face in the picture, the region where that face is located is the selected face region; if there are 2 or more faces in the picture, the region of the face with the largest area is the selected face region. The selected face region may also be the region corresponding to the clearest face image in the picture. Determining the selected face region may be accomplished using a fourth classifier, and the classification in step 405 may be implemented using a fifth classifier.
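The predetermined largest-area rule for choosing the selected face region can be sketched as below. The `(top, right, bottom, left)` box format is an assumption borrowed from common face detectors:

```python
def select_face_region(face_boxes):
    """Sketch of the predetermined rule for the selected face region:
    pick the face region with the largest area. Boxes are assumed to be
    (top, right, bottom, left) tuples, the format used by some common
    face detectors."""
    if not face_boxes:
        return None  # no face: the picture is tagged as a non-face picture
    def area(box):
        top, right, bottom, left = box
        return (bottom - top) * (right - left)
    return max(face_boxes, key=area)
```

The clearest-face variant mentioned above would only change the key function (e.g., a sharpness score instead of area).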
In the case where it is determined that the picture is a non-expression-package picture, the picture processing method 400 proceeds to step 407. In step 407, it is determined, based on the text, whether the picture is a snapshot picture. Specifically, the picture is classified based on text associated with the picture to determine whether it is of the snapshot picture type. When the determination result is "yes", the picture processing method 400 proceeds to step 409. At step 409, the picture is classified as the snapshot type. Optionally, a snapshot type tag, indicating that the picture is a snapshot picture, may be added to the picture; or the picture may be put into a snapshot picture set. When the determination result is "no", the picture processing method 400 ends. Optionally, a type tag indicating that the picture is a non-snapshot picture may also be added to the picture. The classification in step 407 may be implemented using the third classifier.
In this embodiment, step 409 is optional, i.e., the picture processing method may omit step 409. For example, in the case where the determination result of step 407 is "yes", a snapshot type tag may be added to the picture, or the picture may be put into a snapshot picture set, or the picture may be pushed to a predetermined user.
In the picture processing methods 100 and 300, the first classifier is configured to have face recognition capability and to be able to determine whether there is a single face in a picture. When the number of faces in the picture is one, the first classifier classifies the picture as a single-face picture. When there are multiple faces (e.g., 2, 3, etc.) or no face in the picture, the first classifier classifies the picture as a non-single-face picture. The first classifier may be designed based on, for example, the face recognition tool "Face Recognition"; it may also be designed based on other face recognition tools.
In the picture processing method 400, the fourth classifier is configured to have face recognition capability and to be able to recognize the number of faces in a picture. The fourth classifier is similar to the first classifier. When there is no face in the picture, the fourth classifier may determine that the picture is a non-face picture; when the number of faces in the picture is one or more, the fourth classifier may select one face region in the picture as the selected face region based on a predetermined rule, where the selected face region shows a single face. When the number of faces in the picture is one, the region where that face is located is selected as the selected face region; if there are 2 or more faces in the picture, the region of one face image may be selected as the selected face region from among the plurality of face images according to a predetermined rule, for example, the region of the face image with the largest area. As an example, the region of the clearest face in the picture may also be selected as the selected face region. The fourth classifier may be designed, for example, based on the face recognition tool "Face Recognition".
The second classifier has the capability to determine whether the picture is a non-expression-package picture. The second classifier may be any of various neural network classifiers. Before the second classifier is used, it is trained with a sample picture set so as to obtain a second classifier with a high recall rate. As an example, the second classifier may be a neural network classifier based on the DenseNet121 model. In the DenseNet121 model, each layer is connected to all subsequent layers in a feed-forward manner, and each layer receives the feature maps of all preceding layers as input. A conventional CNN (convolutional neural network) with a total of L layers has L connections, while a neural network based on the DenseNet121 model has L(L+1)/2 connections. Owing to this dense connectivity, the DenseNet121 model can mitigate gradient vanishing, strengthen feature propagation and feature reuse, and reduce the number of network parameters.
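The connection counts mentioned above can be checked with a small helper; this is purely illustrative arithmetic, not part of any classifier:

```python
def num_connections_plain(num_layers: int) -> int:
    # A conventional L-layer CNN chains layers one after another: L connections.
    return num_layers

def num_connections_dense(num_layers: int) -> int:
    # In a DenseNet-style block every layer feeds all subsequent layers,
    # giving L * (L + 1) / 2 connections for L layers.
    return num_layers * (num_layers + 1) // 2
```

For example, a 4-layer dense block has 10 connections versus 4 in a plain chain, which is the source of the feature-reuse benefit described above.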
The fifth classifier is similar to the second classifier. The fifth classifier has the capability to determine, based on the selected face region in the picture, whether the picture is a non-expression-package picture. For example, if the image in the selected face region corresponds to the expression-package picture type, the picture is classified as the expression-package picture type; otherwise, it is classified as the non-expression-package picture type. The fifth classifier may be any of various neural network classifiers. Before the fifth classifier is used, it is trained with a sample picture set so as to obtain a fifth classifier with a high recall rate. As an example, the fifth classifier may be a neural network classifier based on the DenseNet121 model.
The third classifier has the capability to determine whether the picture is a snapshot picture. A snapshot picture is typically a photograph of an ordinary person. Statistics indicate that non-snapshot pictures are typically photographs of public figures such as stars. If snapshot pictures and non-snapshot pictures are distinguished from the pictures themselves, the recall rate is generally low, or a large amount of computation is required, because both are face pictures. The third classifier is therefore configured to determine whether the picture is of the snapshot picture type based on text associated with the picture. The processing performed by the third classifier includes word segmentation, TF-IDF feature extraction, and classifying the picture based on the TF-IDF features. For example, the text may be segmented using the jieba segmenter. In TF-IDF, TF refers to "Term Frequency" and IDF to "Inverse Document Frequency". When training the third classifier, the TF (term frequency) and IDF (inverse document frequency) of each word in each item of content in the training content set can be extracted, and the TF-IDF value of a word is obtained by multiplying the two. The TF-IDF value of a word represents the importance of the word in the content; ranking the words' TF-IDF values from large to small, the top-ranked words are the keywords of the content. The third classifier is trained based on the TF-IDF values of the keywords. The third classifier may be a naive Bayes classifier. Typically, the text associated with a public-figure picture includes words such as the public figure's name, stage name, or nickname, and words such as "concert" and "music"; the third classifier may be trained based on such words.
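The TF-IDF computation described above (term frequency multiplied by inverse document frequency) can be sketched as follows. A real pipeline would use a segmenter such as jieba and a library implementation, so this minimal version is only an illustration:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Minimal TF-IDF sketch matching the description above: the score of a
    word in a document is its term frequency in that document multiplied by
    its inverse document frequency across the corpus. `docs` is a list of
    token lists (in practice the raw text would first be segmented)."""
    n = len(docs)
    df = Counter()                       # document frequency of each word
    for doc in docs:
        df.update(set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        scores.append({w: (tf[w] / total) * math.log(n / df[w]) for w in tf})
    return scores
```

A word that appears in every document gets IDF log(1) = 0, so corpus-wide words drop out and words distinctive to one item of content (e.g. "concert") surface as its keywords.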
In the case where the confidence of the classification result given by the third classifier is within a predetermined range (e.g., a middle range, i.e., the classification result is not highly reliable), a public-figure picture database may be queried based on the selected face region in the picture to determine whether the selected face region corresponds to the face picture of a public figure, thereby improving classification accuracy. When only 1 face is shown in the picture, the public-figure picture database is queried based on that face.
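This confidence-gated database fallback can be sketched as below; the numeric thresholds and the callable name are illustrative assumptions, since the disclosure does not fix concrete values:

```python
def resolve_with_database(text_confidence, face_is_public_figure,
                          low=0.4, high=0.6):
    """Sketch of the confidence-gated fallback: when the third classifier's
    confidence lies in a middle range (the 0.4/0.6 thresholds are purely
    illustrative), query the public-figure picture database with the
    selected face region; otherwise keep the text classifier's result."""
    if low <= text_confidence <= high:
        return "public_figure" if face_is_public_figure() else "snapshot"
    return None  # text classifier deemed reliable; no database query needed
```

Returning `None` outside the middle range signals that the caller should keep the third classifier's own result unchanged.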
According to an aspect of the present disclosure, there is also provided a picture processing apparatus.
Fig. 5 is an exemplary block diagram of a picture processing device 500 according to one embodiment of the present disclosure. The picture processing apparatus 500 includes a first classifying unit 501, a second classifying unit 503, a third classifying unit 505, and a control unit 507. The first classifying unit 501 can classify a picture based on the picture to determine whether the picture is of the single-face picture type. The second classifying unit 503 can classify the picture based on the picture to determine whether the picture is of the non-expression-package picture type. The third classifying unit 505 can classify the picture based on text associated with the picture to determine whether the picture is of the snapshot picture type. The control unit 507 is configured to: instruct the second classifying unit 503 to classify the picture in the case where the first classifying unit 501 determines that the picture is of the single-face picture type; and instruct the third classifying unit 505 to classify the picture in the case where the second classifying unit 503 determines that the picture is of the non-expression-package picture type. For further configurations of the first classifying unit 501, the second classifying unit 503, the third classifying unit 505, and the control unit 507, which correspond to the picture processing method 100 or 300, reference may be made to the description of the picture processing method 100 or 300.
According to an aspect of the present disclosure, there is also provided a picture processing apparatus.
Fig. 6 is an exemplary block diagram of a picture processing device 600 according to one embodiment of the present disclosure. In fig. 6, a Central Processing Unit (CPU) 601 performs various processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 to a Random Access Memory (RAM) 603. The RAM 603 also stores data and the like necessary when the CPU 601 executes various processes, as necessary.
The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output interface 605 is also connected to the bus 604.
The following components are connected to the input/output interface 605: an input portion 606 including a soft keyboard or the like; an output portion 607 including a display such as a Liquid Crystal Display (LCD) and a speaker; a storage portion 608 such as a hard disk; and a communication section 609 including a network interface card such as a LAN card, a modem, and the like. The communication section 609 performs communication processing via a network such as the internet, a local area network, a mobile network, or a combination thereof.
The drive 610 is also connected to the input/output interface 605 as needed. A removable medium 611 such as a semiconductor memory or the like is installed on the drive 610 as needed, so that a computer program read therefrom is installed to the storage section 608 as needed.
The CPU 601 can execute the code of a program implementing the foregoing picture processing methods. The picture processing device may be deployed on a server side. In one embodiment, a picture processing device for a client may be obtained by changing the architecture of the picture processing device 600, for example by removing the storage portion 608 such as the hard disk.
According to an aspect of the present disclosure, there is also provided a computer-readable recording medium storing a program that causes a computer to execute the aforementioned picture processing method.
According to still another aspect of the present disclosure, there is also provided a picture processing apparatus. Fig. 7 is an exemplary block diagram of a picture processing device 700 according to one embodiment of the present disclosure. The picture processing apparatus 700 includes a fifth classifying unit 703, a third classifying unit 505, and a control unit 707. The fifth classifying unit 703 can classify the picture based on the selected face region in the picture to determine whether the picture is of the non-expression-package picture type. The third classifying unit 505 can classify the picture based on text associated with the picture to determine whether the picture is of the snapshot picture type. The control unit 707 is configured to instruct the third classifying unit 505 to classify the picture in the case where the fifth classifying unit 703 determines that the picture is of the non-expression-package picture type. Since the picture processing apparatus 700 corresponds to the picture processing method 400, for further configurations of the fifth classifying unit 703, the third classifying unit 505, and the control unit 707, reference may be made to the description of the picture processing method 400.
The present disclosure does not use a single picture classification model; instead, it uses a face recognition model together with a text model, so that public-figure pictures and snapshot pictures can be well distinguished. From the foregoing description of specific embodiments of the present disclosure, those skilled in the art will appreciate that the presently disclosed scheme can achieve at least one of the following effects: improved accuracy in classifying snapshot pictures and public-figure pictures, improved picture classification efficiency, and improved accuracy in pushing pictures to users.
It will be understood that the terms "comprises" and/or "comprising," when used herein, specify the presence of stated features, integers, steps, or components, but do not preclude the presence or addition of one or more other features, integers, steps, or components.
It is to be understood that features described and/or illustrated with respect to one embodiment may be used in the same or similar manner in one or more other embodiments in combination with or instead of the features of the other embodiments without departing from the spirit of the present disclosure.
Furthermore, the methods of the present disclosure are not limited to being performed in the temporal order described in the specification, but may be performed in other temporal orders, in parallel, or independently, if in principle feasible. Accordingly, the order in which the methods described in this specification are performed does not limit the scope of the present disclosure.
The present disclosure has been described in connection with the specific embodiments, but it should be apparent to those skilled in the art that the descriptions are intended to be exemplary and not limiting of the scope of the disclosure. Various modifications and alterations of this disclosure may be made by those skilled in the art in light of the spirit and principles of this disclosure, and such modifications and alterations are also within the scope of this disclosure.
Claims (10)
1. A picture processing method, comprising:
classifying a picture based on the picture itself to determine whether the picture is of a single-face picture type;
in a case where the picture is determined to be of the single-face picture type, classifying the picture based on the picture itself to determine whether the picture is of a non-expression-pack picture type; and
in a case where the picture is determined to be of the non-expression-pack picture type, classifying the picture based on text associated with the picture to determine whether the picture is of a burst-photo picture type, and, if the confidence of the classification result of the text-based classification is within a predetermined range, querying a public-figure database to determine whether the picture is a public-figure picture.
2. The picture processing method according to claim 1, further comprising: adding a type label to the picture according to each determination result.
3. The picture processing method according to claim 1, wherein the text and the picture come from content published under the same topic title in a web community.
4. The picture processing method according to claim 3, wherein the text comprises content in a post under the same topic title.
5. The picture processing method according to claim 1, wherein the text and the picture come from the same article, the same comment, or the same reply.
6. The picture processing method according to claim 1, wherein classifying the picture based on the picture itself to determine whether the picture is of the non-expression-pack picture type comprises: classifying the picture based on a DenseNet121 model to determine whether the picture is of the non-expression-pack picture type.
7. The picture processing method according to claim 1, wherein classifying the picture based on text associated with the picture to determine whether the picture is of the burst-photo picture type comprises: classifying the picture based on TF-IDF values of the text to determine whether the picture is of the burst-photo picture type.
8. A picture processing method, comprising:
classifying a picture based on a selected face region in the picture to determine whether the picture is of a non-expression-pack picture type; and
in a case where the picture is determined to be of the non-expression-pack picture type, classifying the picture based on text associated with the picture to determine whether the picture is of a burst-photo picture type, and, if the confidence of the classification result of the text-based classification is within a predetermined range, querying a public-figure database to determine whether the picture is a public-figure picture;
wherein selecting the face region specifically comprises:
for a picture containing no face, adding a type label of non-face picture; and
for a picture containing one or more faces, selecting a face region according to a predetermined rule such that the number of faces shown in the selected face region is one.
9. A computer-readable recording medium storing a program that causes a computer to execute the picture processing method according to any one of claims 1 to 8.
10. A picture processing apparatus comprising:
a first classification unit configured to classify a picture based on the picture itself to determine whether the picture is of a single-face picture type;
a second classification unit configured to classify the picture based on the picture itself to determine whether the picture is of a non-expression-pack picture type;
a third classification unit configured to classify the picture based on text associated with the picture to determine whether the picture is of a burst-photo picture type; and
a control unit configured to:
the first classifying unit is used for indicating the second classifying unit to classify the picture under the condition that the first classifying unit determines that the picture is the single face picture type; and is also provided with
And under the condition that the second classifying unit determines that the picture is the non-expression pack picture type, the third classifying unit is instructed to classify the picture.
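Claim 7 classifies the picture using TF-IDF values of the associated text. As a hedged illustration of that feature computation only (the claim does not specify a TF-IDF variant or a tokenizer; the classic raw-count × log(N/df) form is assumed here):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Return per-document {term: tf-idf} maps for a tokenized corpus.

    tf is the raw count of the term in the document; idf = log(N / df),
    where N is the corpus size and df is the number of documents
    containing the term. (An assumed classic formulation; the claim does
    not fix a variant.)
    """
    n = len(docs)
    # Document frequency: in how many documents does each term appear?
    df = Counter(term for doc in docs for term in set(doc))
    return [
        {term: count * math.log(n / df[term]) for term, count in Counter(doc).items()}
        for doc in docs
    ]

# A term appearing in every document gets weight 0 (idf = log(1) = 0),
# so only distinctive terms contribute to the classification features.
```

The resulting per-document weight maps would then feed whatever classifier separates burst-photo text from other text; the claims leave that classifier unspecified.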
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910425363.2A CN110162648B (en) | 2019-05-21 | 2019-05-21 | Picture processing method, device and recording medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN110162648A CN110162648A (en) | 2019-08-23 |
| CN110162648B true CN110162648B (en) | 2024-02-23 |
Family
ID=67631903
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201910425363.2A Active CN110162648B (en) | 2019-05-21 | 2019-05-21 | Picture processing method, device and recording medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN110162648B (en) |
Citations (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102223242A (en) * | 2010-04-15 | 2011-10-19 | 腾讯数码(天津)有限公司 | Method and system for verifying authenticity of group members in SNS (social networking service) community |
| CN104063683A (en) * | 2014-06-06 | 2014-09-24 | 北京搜狗科技发展有限公司 | Expression input method and device based on face identification |
| CN105163017A (en) * | 2013-03-25 | 2015-12-16 | 锤子科技(北京)有限公司 | Method and device for showing self-shooting image |
| CN105205773A (en) * | 2015-10-20 | 2015-12-30 | 南京慧智灵杰信息技术有限公司 | Community correction intelligent facial recognition management system |
| CN105488111A (en) * | 2015-11-20 | 2016-04-13 | 小米科技有限责任公司 | Image search method and device |
| CN105740379A (en) * | 2016-01-27 | 2016-07-06 | 北京汇图科技有限责任公司 | Photo classification management method and apparatus |
| CN105849764A (en) * | 2013-10-25 | 2016-08-10 | 西斯摩斯公司 | Systems and methods for identifying influencers and their communities in a social data network |
| CN106446969A (en) * | 2016-12-01 | 2017-02-22 | 北京小米移动软件有限公司 | User identification method and device |
| CN106530217A (en) * | 2016-10-28 | 2017-03-22 | 维沃移动通信有限公司 | Photo processing method and mobile terminal |
| CN107133951A (en) * | 2017-05-22 | 2017-09-05 | 中国科学院自动化研究所 | Distorted image detection method and device |
| CN109002490A (en) * | 2018-06-26 | 2018-12-14 | 腾讯科技(深圳)有限公司 | User's portrait generation method, device, server and storage medium |
| CN109145963A (en) * | 2018-08-01 | 2019-01-04 | 上海宝尊电子商务有限公司 | A kind of expression packet screening technique |
| CN109345531A (en) * | 2018-10-10 | 2019-02-15 | 四川新网银行股份有限公司 | A kind of method and system based on picture recognition user's shooting distance |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10120877B2 (en) * | 2011-09-15 | 2018-11-06 | Stephan HEATH | Broad and alternative category clustering of the same, similar or different categories in social/geo/promo link promotional data sets for end user display of interactive ad links, coupons, mobile coupons, promotions and sale of products, goods and services integrated with 3D spatial geomapping and mobile mapping and social networking |
| US20170193218A1 (en) * | 2015-12-30 | 2017-07-06 | The Regents Of The University Of Michigan | Reducing Unregulated Aggregation Of App Usage Behaviors |
Non-Patent Citations (2)
| Title |
|---|
| Norman Makoto Su et al., "A Design Approach for Authenticity and Technology," DIS '16: Proceedings of the 2016 ACM Conference on Designing Interactive Systems, 2016, pp. 643-655. * |
| Jing Yuzhu et al., "An Analysis of Hot Words in College Students' Online Discourse from a Big-Data Perspective," Ideological and Political Course Research, No. 4, Aug. 5, 2016, pp. 38-41, 17. * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN110162648A (en) | 2019-08-23 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10860854B2 (en) | Suggested actions for images | |
| US8073263B2 (en) | Multi-classifier selection and monitoring for MMR-based image recognition | |
| US8676810B2 (en) | Multiple index mixed media reality recognition using unequal priority indexes | |
| US9495385B2 (en) | Mixed media reality recognition using multiple specialized indexes | |
| US8369655B2 (en) | Mixed media reality recognition using multiple specialized indexes | |
| US10007928B2 (en) | Dynamic presentation of targeted information in a mixed media reality recognition system | |
| US8965145B2 (en) | Mixed media reality recognition using multiple specialized indexes | |
| US9058611B2 (en) | System and method for advertising using image search and classification | |
| US9116924B2 (en) | System and method for image selection using multivariate time series analysis | |
| US8489987B2 (en) | Monitoring and analyzing creation and usage of visual content using image and hotspot interaction | |
| US20140212106A1 (en) | Music soundtrack recommendation engine for videos | |
| CN107292642B (en) | Commodity recommendation method and system based on images | |
| US7457467B2 (en) | Method and apparatus for automatically combining a digital image with text data | |
| CN111125528B (en) | Information recommendation method and device | |
| CN109271542A (en) | Cover determines method, apparatus, equipment and readable storage medium storing program for executing | |
| US9774553B2 (en) | Systems and methods for estimating message similarity | |
| CN113591857B (en) | Character image processing method, device and ancient Chinese book image recognition method | |
| CN110162648B (en) | Picture processing method, device and recording medium | |
| CN113849688B (en) | Resource processing method, resource processing device, electronic device and storage medium | |
| CN112784042B (en) | Text similarity calculation method and system combining article structure and aggregation word vector | |
| CN116340551A (en) | A method and device for determining similar content | |
| CN113221572A (en) | Information processing method, device, equipment and medium | |
| CN117216356A (en) | Method and device for recommending multimedia content and electronic equipment | |
| CN119693500A (en) | Image library generation method, device, equipment and storage medium based on generation model | |
| CN116403143A (en) | Video tag determination method and device, electronic equipment and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||