CN109446379A - Method and apparatus for handling information - Google Patents
Method and apparatus for handling information
- Publication number
- CN109446379A (application CN201811289810.8A / CN201811289810A)
- Authority
- CN
- China
- Prior art keywords
- feature vector
- presented video
- target
- video
- history
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/74—Browsing; Visualisation therefor
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The embodiments of the present application disclose a method and apparatus for handling information. A specific embodiment of the method includes: obtaining a historical presented video corresponding to a target user; extracting a video frame from the historical presented video, and inputting the extracted video frame into a pre-trained vector conversion model to obtain a feature vector; determining, based on the obtained feature vector, a feature vector corresponding to the historical presented video; determining, from a predetermined candidate feature vector set, a candidate feature vector whose similarity to the feature vector corresponding to the historical presented video is greater than or equal to a preset threshold as a target feature vector, where the candidate feature vectors in the candidate feature vector set correspond to presented videos in a predetermined presented-video set; and selecting, from the presented-video set, the presented video corresponding to the target feature vector. This embodiment improves the pertinence and diversity of information processing.
Description
Technical field
Embodiments of the present application relate to the field of computer technology, and in particular to a method and apparatus for handling information.
Background art
With the development of science and technology, people can use electronic devices such as mobile phones to browse videos. The videos presented to a given user may include similar videos. Because a user who browses similar videos within a short period of time may become annoyed, the display frequency of similar videos needs to be controlled.
Summary of the invention
The embodiments of the present application propose a method and apparatus for handling information.
In a first aspect, an embodiment of the present application provides a method for handling information, the method including: obtaining a historical presented video corresponding to a target user, where the historical presented video is a video output within a historical time period to a target user terminal used by the target user, for the target user to browse; extracting a video frame from the historical presented video, and inputting the extracted video frame into a pre-trained vector conversion model to obtain a feature vector; determining, based on the obtained feature vector, a feature vector corresponding to the historical presented video; determining, from a predetermined candidate feature vector set, a candidate feature vector whose similarity to the feature vector corresponding to the historical presented video is greater than or equal to a preset threshold as a target feature vector, where the candidate feature vectors in the candidate feature vector set correspond to presented videos in a predetermined presented-video set; and selecting, from the presented-video set, the presented video corresponding to the target feature vector.
In some embodiments, the candidate feature vector set corresponds to a vector index engine; and determining, from the predetermined candidate feature vector set, a candidate feature vector whose similarity to the feature vector corresponding to the historical presented video is greater than or equal to the preset threshold as the target feature vector includes: performing retrieval on the feature vector corresponding to the historical presented video using the vector index engine corresponding to the candidate feature vector set, and determining the retrieved candidate feature vector as the target feature vector.
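As a hedged illustration of the retrieval idea behind such a vector index engine — not the patent's implementation, and a real deployment would more likely use an approximate-nearest-neighbor library — a brute-force stand-in might look like this (class and method names are illustrative):

```python
import numpy as np

class VectorIndexEngine:
    """A minimal brute-force stand-in for a vector index engine:
    stores candidate feature vectors and retrieves those whose cosine
    similarity to a query vector meets a preset threshold."""

    def __init__(self):
        self.vectors = []

    def add(self, vec):
        self.vectors.append(np.asarray(vec, dtype=float))

    def search(self, query, threshold):
        q = np.asarray(query, dtype=float)
        qn = q / np.linalg.norm(q)
        return [v for v in self.vectors
                if float(np.dot(qn, v / np.linalg.norm(v))) >= threshold]

engine = VectorIndexEngine()
engine.add([1.0, 0.0])
engine.add([0.0, 1.0])
hits = engine.search([0.9, 0.1], threshold=0.9)  # only [1.0, 0.0] is close enough
```

The index is queried with the historical presented video's feature vector; everything it returns is, by construction, a target feature vector in the sense of this embodiment.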
In some embodiments, after selecting, from the presented-video set, the presented video corresponding to the target feature vector, the method further includes: determining the time at which the target user browsed the historical presented video using the target user terminal; and outputting the selected presented video to the target user terminal after a preset duration has elapsed since the determined time.
In some embodiments, extracting a video frame from the historical presented video and inputting the extracted video frame into the pre-trained vector conversion model to obtain a feature vector includes: extracting at least two video frames from the historical presented video, and inputting each of the extracted at least two video frames into the pre-trained vector conversion model to obtain at least two feature vectors.
In some embodiments, determining, based on the obtained feature vector, the feature vector corresponding to the historical presented video includes: summing the obtained at least two feature vectors, and taking the summation result as the feature vector corresponding to the historical presented video.
In some embodiments, the candidate feature vector set is obtained through the following generation steps: based on a target presented video and an initial candidate feature vector set, executing the following determination steps: extracting a video frame from the target presented video, and inputting the extracted video frame into the vector conversion model to obtain a feature vector corresponding to the video frame of the target presented video; determining, based on the feature vector corresponding to the video frame of the target presented video, a feature vector corresponding to the target presented video; adding the feature vector corresponding to the target presented video as a candidate feature vector to the predetermined initial candidate feature vector set to generate a post-addition candidate feature vector set; determining whether a new presented video has been obtained; and in response to determining that no new presented video has been obtained, determining the post-addition candidate feature vector set as the candidate feature vector set.
In some embodiments, the generation steps further include: in response to determining that a new presented video has been obtained, using the new presented video as the target presented video and the post-addition candidate feature vector set as the initial candidate feature vector set, and continuing to execute the determination steps.
In a second aspect, an embodiment of the present application provides an apparatus for handling information, the apparatus including: a video obtaining unit configured to obtain a historical presented video corresponding to a target user, where the historical presented video is a video output within a historical time period to a target user terminal used by the target user, for the target user to browse; a vector generation unit configured to extract a video frame from the historical presented video and input the extracted video frame into a pre-trained vector conversion model to obtain a feature vector; a first determination unit configured to determine, based on the obtained feature vector, a feature vector corresponding to the historical presented video; a second determination unit configured to determine, from a predetermined candidate feature vector set, a candidate feature vector whose similarity to the feature vector corresponding to the historical presented video is greater than or equal to a preset threshold as a target feature vector, where the candidate feature vectors in the candidate feature vector set correspond to presented videos in a predetermined presented-video set; and a video selection unit configured to select, from the presented-video set, the presented video corresponding to the target feature vector.
In some embodiments, the candidate feature vector set corresponds to a vector index engine; and the second determination unit is further configured to: perform retrieval on the feature vector corresponding to the historical presented video using the vector index engine corresponding to the candidate feature vector set, and determine the retrieved candidate feature vector as the target feature vector.
In some embodiments, the apparatus further includes: a time determination unit configured to determine the time at which the target user browsed the historical presented video using the target user terminal; and a video output unit configured to output the selected presented video to the target user terminal after a preset duration has elapsed since the determined time.
In some embodiments, the vector generation unit is further configured to: extract at least two video frames from the historical presented video, and input each of the extracted at least two video frames into the pre-trained vector conversion model to obtain at least two feature vectors.
In some embodiments, the first determination unit is further configured to: sum the obtained at least two feature vectors, and take the summation result as the feature vector corresponding to the historical presented video.
In some embodiments, the candidate feature vector set is obtained through the following generation steps: based on a target presented video and an initial candidate feature vector set, executing the following determination steps: extracting a video frame from the target presented video, and inputting the extracted video frame into the vector conversion model to obtain a feature vector corresponding to the video frame of the target presented video; determining, based on the feature vector corresponding to the video frame of the target presented video, a feature vector corresponding to the target presented video; adding the feature vector corresponding to the target presented video as a candidate feature vector to the predetermined initial candidate feature vector set to generate a post-addition candidate feature vector set; determining whether a new presented video has been obtained; and in response to determining that no new presented video has been obtained, determining the post-addition candidate feature vector set as the candidate feature vector set.
In some embodiments, the generation steps further include: in response to determining that a new presented video has been obtained, using the new presented video as the target presented video and the post-addition candidate feature vector set as the initial candidate feature vector set, and continuing to execute the determination steps.
In a third aspect, an embodiment of the present application provides a server, including: one or more processors; and a storage device on which one or more programs are stored, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any embodiment of the above method for handling information.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored, where the program, when executed by a processor, implements the method of any embodiment of the above method for handling information.
The method and apparatus for handling information provided by the embodiments of the present application obtain a historical presented video corresponding to a target user; then extract a video frame from the historical presented video and input the extracted video frame into a pre-trained vector conversion model to obtain a feature vector; then determine, based on the obtained feature vector, a feature vector corresponding to the historical presented video; then determine, from a predetermined candidate feature vector set, a candidate feature vector whose similarity to the feature vector corresponding to the historical presented video is greater than or equal to a preset threshold as a target feature vector, where the candidate feature vectors in the candidate feature vector set correspond to presented videos in a predetermined presented-video set; and finally select, from the presented-video set, the presented video corresponding to the target feature vector. The candidate feature vector set is thus used effectively to determine a presented video similar to the historical presented video corresponding to the target user, which facilitates subsequent processing of the determined presented video (for example, controlling the presentation time of the determined presented video) and improves the pertinence and diversity of information processing.
Brief description of the drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which an embodiment of the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the method for handling information according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the method for handling information according to an embodiment of the present application;
Fig. 4 is a flowchart of another embodiment of the method for handling information according to the present application;
Fig. 5 is a structural schematic diagram of one embodiment of the apparatus for handling information according to the present application;
Fig. 6 is a structural schematic diagram of a computer system suitable for implementing a server of an embodiment of the present application.
Detailed description of embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It is understood that the specific embodiments described herein are used only to explain the relevant invention, and not to limit the invention. It should also be noted that, for convenience of description, only the parts relevant to the invention are shown in the accompanying drawings.

It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for handling information or the apparatus for handling information of the present application may be applied.
As shown in Fig. 1, system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105. Network 104 is the medium providing communication links between terminal devices 101, 102, and 103 and server 105. Network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
A user may use terminal devices 101, 102, and 103 to interact with server 105 through network 104 to receive or send messages, etc. Various communication client applications may be installed on terminal devices 101, 102, and 103, such as video processing software, web browser applications, search applications, social platform software, and instant messaging tools.
Terminal devices 101, 102, and 103 may be hardware or software. When terminal devices 101, 102, and 103 are hardware, they may be various electronic devices with a display screen that support video processing, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop portable computers, desktop computers, and the like. When terminal devices 101, 102, and 103 are software, they may be installed in the electronic devices listed above. They may be implemented as multiple pieces of software or software modules (for example, multiple pieces of software or software modules for providing distributed services), or as a single piece of software or software module. No specific limitation is made here.
Server 105 may be a server providing various services, such as a background server providing support for the presented videos displayed on terminal devices 101, 102, and 103. The background server may obtain the historical presented videos displayed on the terminal devices, perform processing such as analysis on the obtained data such as the historical presented videos, and obtain a processing result (such as the presented video corresponding to the target feature vector).
It should be noted that the method for handling information provided by the embodiments of the present application is generally executed by server 105; correspondingly, the apparatus for handling information is generally disposed in server 105.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, multiple pieces of software or software modules for providing distributed services), or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative. Any number of terminal devices, networks, and servers may be provided according to implementation needs. In the case where the data used in the process of obtaining the presented video corresponding to the target feature vector does not need to be obtained remotely, the above system architecture may include only the server, without the network and the terminal devices.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for handling information according to the present application is shown. The method for handling information includes the following steps:
Step 201: obtaining a historical presented video corresponding to a target user.
In the present embodiment, the execution body of the method for handling information (such as the server shown in Fig. 1) may obtain, through a wired or wireless connection, a historical presented video corresponding to a target user. Here, the historical presented video may be a video output within a historical time period to a target user terminal used by the target user, for the target user to browse. The historical time period may be a preset time period, such as March 2018; alternatively, it may be the period starting at the time a presented video was first output to the target user terminal used by the target user and ending at the time a presented video was last output to the target user terminal.
The target user is a user for whom presented videos similar to the historical presented video corresponding to the target user are to be determined from a presented-video set. The presented-video set may be a predetermined video set. A presented video is a video to be output to a communicatively connected terminal for presentation to a user.
In practice, the above execution body may obtain a historical presented video that corresponds to the target user and is pre-stored locally; alternatively, the above execution body may obtain a historical presented video of the target user sent by the target user terminal.
It is understood that, here, the above execution body may obtain at least one historical presented video corresponding to the target user, and may in turn execute the subsequent steps 202-205 for each historical presented video among the at least one obtained historical presented video.
Step 202: extracting a video frame from the historical presented video, and inputting the extracted video frame into a pre-trained vector conversion model to obtain a feature vector.
In the present embodiment, based on the historical presented video obtained in step 201, the above execution body may extract a video frame from the historical presented video and input the extracted video frame into a pre-trained vector conversion model to obtain a feature vector. Here, the obtained feature vector may be used to characterize the features of the input video frame.
It is appreciated that a video is essentially a video frame sequence arranged in chronological order. Accordingly, the above execution body may extract a video frame from the historical presented video using various methods: for example, it may extract a video frame from the historical presented video by random sampling; alternatively, it may extract, in sequence, the video frames at preset positions from the video frame sequence corresponding to the historical presented video.
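As a minimal sketch (not the patent's implementation), the two extraction strategies just described — random sampling and extraction at preset positions — might look like the following; `frames` and the position list are hypothetical stand-ins for a decoded video frame sequence:

```python
import random

def extract_random_frames(frames, k, seed=None):
    """Randomly sample k frames from a video frame sequence."""
    rng = random.Random(seed)
    return rng.sample(frames, k)

def extract_frames_at_positions(frames, positions):
    """Extract frames at preset positions (e.g. first, middle, last)."""
    return [frames[p] for p in positions]

# A toy "video": ten frames identified by index.
video = [f"frame_{i}" for i in range(10)]
random_pick = extract_random_frames(video, 3, seed=42)
fixed_pick = extract_frames_at_positions(video, [0, len(video) // 2, len(video) - 1])
```

Either strategy yields the frames that are then fed to the vector conversion model; the choice is left open by the embodiment.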
In the present embodiment, the vector conversion model is a model for extracting features of video frames, and may be used to characterize the correspondence between a video frame and the feature vector corresponding to the video frame. Specifically, since a video frame is essentially an image, the vector conversion model may include structures for extracting image features (such as convolutional layers), and may of course also include other structures (such as pooling layers).

It should be noted that methods for training a vector conversion model are well-known technologies that are currently widely researched and applied, and are not described in detail here.
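Since training a real vector conversion model is noted above as well-known and out of scope, the sketch below only mimics its interface — frame in, fixed-length feature vector out — using block-wise average pooling as a crude, illustrative stand-in for learned convolution and pooling layers; no part of it is the patent's model:

```python
import numpy as np

def vector_conversion_model(frame, grid=4):
    """Map an HxW grayscale frame to a grid*grid feature vector by
    average-pooling the frame over a coarse grid (a stand-in for the
    convolution + pooling layers a trained model would contain)."""
    h, w = frame.shape
    bh, bw = h // grid, w // grid
    feats = [frame[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
             for i in range(grid) for j in range(grid)]
    return np.asarray(feats)

frame = np.arange(64, dtype=float).reshape(8, 8)  # toy 8x8 "frame"
vec = vector_conversion_model(frame, grid=4)      # 16-dimensional feature vector
```

The important property for the rest of the flow is only that every frame maps to a fixed-length vector, so that per-frame vectors can later be combined and compared.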
In some optional implementations of the present embodiment, the above execution body may extract at least two video frames from the historical presented video, and input each of the extracted at least two video frames into the pre-trained vector conversion model to obtain at least two feature vectors. This implementation provides support for subsequently determining the feature vector corresponding to the historical presented video from the at least two feature vectors; determining the feature vector corresponding to the historical presented video using at least two feature vectors can improve the accuracy of the determined feature vector.
Step 203: determining, based on the obtained feature vector, a feature vector corresponding to the historical presented video.
In the present embodiment, based on the feature vector obtained in step 202, the above execution body may determine the feature vector corresponding to the historical presented video. The feature vector corresponding to the historical presented video may be used to characterize the features of the historical presented video. Specifically, the above execution body may determine the feature vector corresponding to the historical presented video by various methods.

As an example, when only one feature vector is obtained, the above execution body may directly determine this feature vector as the feature vector corresponding to the historical presented video; alternatively, the above execution body may process this feature vector (for example, multiply it by a preset value) and determine the processed feature vector as the feature vector corresponding to the historical presented video. When at least two feature vectors are obtained, the above execution body may process the obtained at least two feature vectors (for example, compute their mean) and determine the processing result as the feature vector corresponding to the historical presented video.
In some optional implementations of the present embodiment, when at least two feature vectors are obtained, the above execution body may sum the obtained at least two feature vectors and take the summation result as the feature vector corresponding to the historical presented video.

In this implementation, the feature vector corresponding to the historical presented video is determined from the feature vectors corresponding to at least two video frames, which increases the reference data for the determination and can improve the accuracy of the feature vector corresponding to the historical presented video.
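The summation described in this implementation can be sketched as follows, assuming the per-frame feature vectors have already been produced by the vector conversion model:

```python
import numpy as np

def video_feature_vector(frame_vectors):
    """Sum the per-frame feature vectors to obtain the feature vector
    of the historical presented video, as in this implementation."""
    return np.sum(np.stack(frame_vectors), axis=0)

frame_vecs = [np.array([1.0, 2.0, 3.0]), np.array([0.5, 0.5, 0.5])]
video_vec = video_feature_vector(frame_vecs)  # -> [1.5, 2.5, 3.5]
```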
Step 204: determining, from a predetermined candidate feature vector set, a candidate feature vector whose similarity to the feature vector corresponding to the historical presented video is greater than or equal to a preset threshold as a target feature vector.
In the present embodiment, based on the feature vector corresponding to the historical presented video obtained in step 203, the above execution body may determine, from a predetermined candidate feature vector set, a candidate feature vector whose similarity to the feature vector corresponding to the historical presented video is greater than or equal to a preset threshold as a target feature vector. Here, the preset threshold may be a value preset by a technician. Specifically, the above execution body may use various methods to make this determination. For example, the above execution body may compute the similarity between the feature vector corresponding to the historical presented video and each candidate feature vector in the candidate feature vector set to obtain computation results, compare the computation results with the preset threshold, and determine the candidate feature vector corresponding to a computation result greater than or equal to the preset threshold as the target feature vector.

It should be noted that similarity computation is a well-known technology that is currently widely researched and applied, and is not described in detail here.
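The patent leaves the similarity measure open; as one hedged, illustrative choice, cosine similarity can be computed per candidate and compared against the preset threshold exactly as described above:

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_target_vectors(history_vec, candidates, threshold):
    """Return the candidate feature vectors whose similarity to the
    historical presented video's vector meets the preset threshold."""
    return [c for c in candidates
            if cosine_similarity(history_vec, c) >= threshold]

history_vec = np.array([1.0, 0.0])
candidates = [np.array([1.0, 0.1]), np.array([0.0, 1.0])]
targets = select_target_vectors(history_vec, candidates, threshold=0.9)
```

Here the first candidate (nearly parallel to the query) passes the threshold while the orthogonal one does not.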
In the present embodiment, the candidate feature vector set is a predetermined set, and the candidate feature vectors in the candidate feature vector set correspond to presented videos in a presented-video set. Specifically, each candidate feature vector in the candidate feature vector set is used to characterize the features of the presented video in the presented-video set that corresponds to that candidate feature vector.

In practice, for each presented video in the above presented-video set, a video frame may be extracted from the presented video and input into the above vector conversion model to obtain a feature vector corresponding to the video frame; the candidate feature vector corresponding to the presented video may then be determined using the feature vector corresponding to the video frame, and the determined candidate feature vectors corresponding to the presented videos may in turn form the candidate feature vector set.
In some optional implementations of the present embodiment, the candidate feature vector set may be obtained through the following generation steps: based on a target presented video and an initial candidate feature vector set, the following determination steps are executed. First, a video frame is extracted from the target presented video and the extracted video frame is input into the above vector conversion model to obtain a feature vector corresponding to the video frame of the target presented video. Then, based on the feature vector corresponding to the video frame of the target presented video, a feature vector corresponding to the target presented video is determined. Next, the feature vector corresponding to the target presented video is added as a candidate feature vector to the predetermined initial candidate feature vector set to generate a post-addition candidate feature vector set. Finally, it is determined whether a new presented video has been obtained. In response to determining that no new presented video has been obtained, the post-addition candidate feature vector set is determined as the candidate feature vector set.

Here, the target presented video is a presented video obtained in advance by the execution body of the above generation steps. The initial candidate feature vector set may be a set to which no candidate feature vector has yet been added, or a set to which candidate feature vectors have already been added.
In some optional implementations of the present embodiment, the above generation steps may further include: in response to determining that a new presented video has been obtained, using the new presented video as the target presented video and the post-addition candidate feature vector set as the initial candidate feature vector set, and continuing to execute the above determination steps.

Through this implementation, the feature vector corresponding to a presented video obtained in real time can be added to the initial candidate feature vector set to generate the candidate feature vector set, which improves the comprehensiveness of the candidate feature vector set.
It should be noted that the execution body of the above generation steps may be the same as, or different from, the execution body of the method for handling information. If they are the same, the execution body of the generation steps may store the candidate feature vector set locally after obtaining it. If they are different, the execution body of the generation steps may send the candidate feature vector set, after obtaining it, to the execution body of the method for handling information.
Step 205: selecting, from the presented-video set, the presented video corresponding to the target feature vector.
In the present embodiment, based on the target feature vector obtained in step 204, the above execution body may select, from the presented-video set, the presented video corresponding to the target feature vector.

It is understood that, since the target feature vector is a feature vector whose similarity to the feature vector corresponding to the historical presented video is greater than or equal to the preset threshold, and the feature vector corresponding to a presented video may be used to characterize the features of that presented video, the presented video selected in this step is similar to the historical presented video corresponding to the target user.
In some optional implementations of this embodiment, after selecting, from the presented-video set, the presented video corresponding to the target feature vector, the above execution body may further perform the following steps. First, the execution body may determine the time (e.g., September 1, 2018) at which the target user browsed the historical presented video using the target user terminal. Then, after a preset duration (e.g., one month) has elapsed since the determined time, the execution body may output the selected presented video to the target user terminal.
In this implementation, within the time range corresponding to the preset duration, the execution body does not output the selected presented video to the target user terminal; it outputs the selected presented video only after the preset duration has elapsed. In this way, the exposure frequency of similar presented videos can be controlled, which helps to enhance user experience and improve the diversity of information processing.
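A minimal sketch of this delayed-output check, assuming the browse time and the preset duration are tracked as `datetime`/`timedelta` values (the application does not specify a representation):

```python
from datetime import datetime, timedelta

def may_output(browse_time, now, preset_duration):
    # The selected presented video is withheld until the preset duration
    # has elapsed since the user browsed the historical presented video.
    return now - browse_time >= preset_duration

browsed = datetime(2018, 9, 1)   # time the history video was browsed
delay = timedelta(days=30)       # e.g. roughly one month

print(may_output(browsed, datetime(2018, 9, 15), delay))  # False: still in window
print(may_output(browsed, datetime(2018, 10, 2), delay))  # True: duration elapsed
```

The execution body would run such a check before pushing the selected video, thereby capping how soon a similar video is shown again.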
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for processing information according to this embodiment. In the application scenario of Fig. 3, a server 301 may obtain a historical presented video 303 corresponding to a target user, sent by a target user terminal 302, where the historical presented video 303 was output by the server 301 to the target user terminal 302 within the past day as a video for the target user to browse. Then, the server 301 may extract a video frame 3031 and a video frame 3032 from the historical presented video 303, and input the extracted video frames 3031 and 3032 respectively into a pre-trained vector transformation model 304, obtaining a feature vector 3051 corresponding to the video frame 3031 and a feature vector 3052 corresponding to the video frame 3032. Next, based on the obtained feature vectors 3051 and 3052, the server 301 may determine a feature vector 306 corresponding to the historical presented video 303. The server 301 may then obtain a predetermined candidate feature vector set 307, and determine, from the candidate feature vector set 307, a candidate feature vector whose similarity to the feature vector 306 corresponding to the historical presented video 303 is greater than or equal to a preset threshold (e.g., "10") as a target feature vector 308, where the candidate feature vectors in the candidate feature vector set 307 correspond to the presented videos in a presented-video set. Finally, the server 301 may obtain the above presented-video set 309 and select, from the presented-video set 309, a presented video 3091 corresponding to the target feature vector 308.
The method provided by the above embodiment of the application effectively uses the candidate feature vector set to determine presented videos similar to the historical presented video corresponding to the target user, which facilitates subsequent processing of the determined presented videos, for example, controlling the presentation times of the determined presented videos, thereby improving the pertinence and diversity of information processing.
With further reference to Fig. 4, a flow 400 of another embodiment of the method for processing information is shown. The flow 400 of the method for processing information includes the following steps:
Step 401: obtain a historical presented video corresponding to a target user.
In this embodiment, the execution body of the method for processing information (e.g., the server shown in Fig. 1) may obtain the historical presented video corresponding to the target user through a wired or wireless connection.
Step 402: extract a video frame from the historical presented video, and input the extracted video frame into a pre-trained vector transformation model to obtain a feature vector.
In this embodiment, based on the historical presented video obtained in step 401, the above execution body may extract a video frame from the historical presented video and input the extracted video frame into the pre-trained vector transformation model to obtain a feature vector. The obtained feature vector can be used to characterize the features of the input video frame.
Step 403: determine, based on the obtained feature vector, the feature vector corresponding to the historical presented video.
In this embodiment, based on the feature vector obtained in step 402, the above execution body may determine the feature vector corresponding to the historical presented video. The feature vector corresponding to the historical presented video can be used to characterize the features of the historical presented video.
Step 404: retrieve against the feature vector corresponding to the historical presented video using a vector retrieval engine corresponding to the candidate feature vector set, and determine the retrieved candidate feature vector as a target feature vector.
In this embodiment, the above execution body may build a vector retrieval engine using the predetermined candidate feature vector set, and may then use the vector retrieval engine corresponding to the candidate feature vector set to retrieve against the feature vector corresponding to the historical presented video, determining the retrieved candidate feature vector as the target feature vector.
In practice, a search engine is a system that collects information according to a certain strategy using specific computer programs, organizes and processes the information, and provides a retrieval service. Here, a vector retrieval engine is an engine that takes candidate feature vectors as its search targets. Specifically, the above execution body may build the vector retrieval engine from the candidate feature vector set in various ways, for example, using the IVFADC algorithm.
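IVFADC combines an inverted file (coarse clustering, so a query probes only a few cells) with product-quantized residuals compared by asymmetric distance computation; in practice a library implementation such as Faiss's `IndexIVFPQ` would be used. The toy sketch below illustrates only the inverted-file part of the idea in pure Python and is not the construction the application requires; all names are illustrative.

```python
import math

def l2(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class ToyIVFIndex:
    """Toy inverted-file index: each candidate vector is assigned to its
    nearest coarse centroid, and a query searches only the probed cell.
    Real IVFADC additionally compresses residuals with product quantization
    (the 'ADC' part), which is omitted here for clarity."""
    def __init__(self, centroids):
        self.centroids = centroids
        self.cells = {i: [] for i in range(len(centroids))}

    def _nearest_cell(self, vec):
        return min(range(len(self.centroids)),
                   key=lambda i: l2(vec, self.centroids[i]))

    def add(self, vec):
        # Adding is also how the engine would be refreshed when the
        # candidate feature vector set is updated with new vectors.
        self.cells[self._nearest_cell(vec)].append(vec)

    def search(self, query):
        # Probe the query's cell and return its closest candidate vector.
        cell = self.cells[self._nearest_cell(query)]
        return min(cell, key=lambda v: l2(query, v)) if cell else None

index = ToyIVFIndex(centroids=[[0.0, 0.0], [10.0, 10.0]])
for vec in [[0.5, 0.5], [1.0, 0.0], [9.0, 10.0]]:
    index.add(vec)
print(index.search([0.9, 0.1]))  # [1.0, 0.0]: nearest candidate in the probed cell
```

Because only one cell is scanned instead of the whole candidate set, retrieval is faster than exhaustive comparison, which is the speed benefit the embodiment attributes to the vector retrieval engine.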
It can be understood that, when building the vector retrieval engine, the condition that its retrieval results must satisfy can be set (namely, that the similarity between a retrieval result and the retrieval object is greater than or equal to the preset threshold), so that the similarity between each retrieved candidate feature vector and the feature vector corresponding to the historical presented video is greater than or equal to the preset threshold.
It should be noted that, in practice, the candidate feature vector set may be generated by the above execution body, or may be generated by another electronic device and sent to the above execution body. When the candidate feature vector set is updated (i.e., a feature vector corresponding to a new presented video is added), the above execution body may update the vector retrieval engine accordingly.
Step 405: select, from the presented-video set, the presented video corresponding to the target feature vector.
In this embodiment, based on the target feature vector obtained in step 404, the above execution body may select, from the presented-video set, the presented video corresponding to the target feature vector.
The above steps 401, 402, 403 and 405 are respectively consistent with steps 201, 202, 203 and 205 in the previous embodiment, and the descriptions above for steps 201, 202, 203 and 205 also apply to steps 401, 402, 403 and 405; they are not repeated here.
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the method for processing information in this embodiment highlights the step of determining the target feature vector using a vector retrieval engine. This embodiment thus provides another scheme for obtaining the target feature vector, and by performing the retrieval with a vector retrieval engine, a higher processing speed can be obtained, improving the efficiency of information processing.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for processing information. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may be applied to various electronic devices.
As shown in Fig. 5, the apparatus 500 for processing information of this embodiment includes a video acquisition unit 501, a vector generation unit 502, a first determination unit 503, a second determination unit 504 and a video selection unit 505. The video acquisition unit 501 is configured to obtain a historical presented video corresponding to a target user, where the historical presented video is a video output, within a historical time period, to the target user terminal used by the target user for the target user to browse. The vector generation unit 502 is configured to extract a video frame from the historical presented video and input the extracted video frame into a pre-trained vector transformation model to obtain a feature vector. The first determination unit 503 is configured to determine, based on the obtained feature vector, the feature vector corresponding to the historical presented video. The second determination unit 504 is configured to determine, from a predetermined candidate feature vector set, a candidate feature vector whose similarity to the feature vector corresponding to the historical presented video is greater than or equal to a preset threshold as a target feature vector, where the candidate feature vectors in the candidate feature vector set correspond to the presented videos in a predetermined presented-video set. The video selection unit 505 is configured to select, from the presented-video set, the presented video corresponding to the target feature vector.
In this embodiment, the video acquisition unit 501 of the apparatus 500 for processing information may obtain the historical presented video corresponding to the target user through a wired or wireless connection. The historical presented video may be a video output, within a historical time period, to the target user terminal used by the target user for the target user to browse. The target user is a user for whom presented videos similar to the corresponding historical presented video are to be determined from the presented-video set. The presented-video set may be a predetermined set of videos. A presented video is a video to be output to a communicatively connected terminal for presentation to a user.
In this embodiment, based on the historical presented video obtained by the video acquisition unit 501, the vector generation unit 502 may extract a video frame from the historical presented video and input the extracted video frame into the pre-trained vector transformation model to obtain a feature vector. The obtained feature vector can be used to characterize the features of the input video frame.
In this embodiment, the vector transformation model is a model for extracting the features of a video frame, and can be used to characterize the correspondence between a video frame and the feature vector corresponding to the video frame. Specifically, since a video frame is essentially an image, the vector transformation model may include a structure for extracting image features (e.g., a convolutional layer), and may of course also include other structures (e.g., a pooling layer).
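As a toy illustration of the structures just mentioned (a convolutional layer followed by a pooling layer), implemented on plain Python lists rather than a real deep-learning framework; an actual vector transformation model would be a trained network, and the frame and kernel values here are invented for the example:

```python
def conv2d_valid(image, kernel):
    # Minimal "valid" 2-D convolution (strictly, cross-correlation): the
    # feature-extracting structure the model is said to contain.
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def max_pool2(fm):
    # 2x2 max pooling: an example of the "other structures" mentioned.
    return [[max(fm[i][j], fm[i][j + 1], fm[i + 1][j], fm[i + 1][j + 1])
             for j in range(0, len(fm[0]) - 1, 2)]
            for i in range(0, len(fm) - 1, 2)]

frame = [[0, 1, 0, 0],      # toy 4x4 grayscale video frame
         [0, 1, 0, 0],
         [0, 1, 0, 0],
         [0, 1, 0, 0]]
edge = [[1, -1]]            # toy vertical-edge kernel
features = max_pool2(conv2d_valid(frame, edge))
print(features)             # [[1], [1]]
```

Flattening the pooled feature map would yield the kind of feature vector the model outputs for each input frame.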
In this embodiment, based on the feature vector obtained by the vector generation unit 502, the first determination unit 503 may determine the feature vector corresponding to the historical presented video. The feature vector corresponding to the historical presented video can be used to characterize the features of the historical presented video.
In this embodiment, based on the feature vector corresponding to the historical presented video obtained by the first determination unit 503, the second determination unit 504 may determine, from the predetermined candidate feature vector set, a candidate feature vector whose similarity to the feature vector corresponding to the historical presented video is greater than or equal to the preset threshold as the target feature vector. The preset threshold may be a numerical value set in advance by a technician.
In this embodiment, the candidate feature vector set is a predetermined set, and the candidate feature vectors in the candidate feature vector set correspond to the presented videos in the presented-video set. Specifically, each candidate feature vector in the candidate feature vector set is used to characterize the features of the presented video in the presented-video set that corresponds to that candidate feature vector.
In this embodiment, based on the target feature vector obtained by the second determination unit 504, the video selection unit 505 may select, from the presented-video set, the presented video corresponding to the target feature vector.
In some optional implementations of this embodiment, the candidate feature vector set corresponds to a vector retrieval engine, and the second determination unit 504 may be further configured to: retrieve against the feature vector corresponding to the historical presented video using the vector retrieval engine corresponding to the candidate feature vector set, and determine the retrieved candidate feature vector as the target feature vector.
In some optional implementations of this embodiment, the apparatus 500 may further include: a time determination unit (not shown in the figure), configured to determine the time at which the target user browsed the historical presented video using the target user terminal; and a video output unit (not shown in the figure), configured to output the selected presented video to the target user terminal after a preset duration has elapsed since the determined time.
In some optional implementations of this embodiment, the vector generation unit 502 may be further configured to: extract at least two video frames from the historical presented video, and input the extracted at least two video frames respectively into the pre-trained vector transformation model to obtain at least two feature vectors.
In some optional implementations of this embodiment, the first determination unit 503 may be further configured to: sum the obtained at least two feature vectors, and take the summation result as the feature vector corresponding to the historical presented video.
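The summation in the last implementation can be sketched as an element-wise sum over the per-frame feature vectors (a minimal sketch with invented example values):

```python
def video_feature_vector(frame_vectors):
    # Element-wise sum of the per-frame feature vectors yields the
    # feature vector corresponding to the historical presented video.
    return [sum(components) for components in zip(*frame_vectors)]

frame_vecs = [[1.0, 2.5, 0.5],   # feature vector of frame 1
              [0.5, 1.5, 2.0]]   # feature vector of frame 2
print(video_feature_vector(frame_vecs))  # [1.5, 4.0, 2.5]
```

Averaging instead of summing would differ only by a constant factor and would not change similarity rankings under scale-invariant measures such as cosine similarity.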
In some optional implementations of this embodiment, the candidate feature vector set may be obtained through the following generation steps. Based on a target presented video and an initial candidate feature vector set, the following determining steps are executed: extracting video frames from the target presented video, and inputting the extracted video frames into the vector transformation model to obtain the feature vectors corresponding to the video frames of the target presented video; determining, based on the feature vectors corresponding to the video frames of the target presented video, the feature vector corresponding to the target presented video; adding the feature vector corresponding to the target presented video, as a candidate feature vector, to the predetermined initial candidate feature vector set to generate a post-addition candidate feature vector set; determining whether a new presented video has been obtained; and, in response to determining that no new presented video has been obtained, determining the post-addition candidate feature vector set as the candidate feature vector set.
In some optional implementations of this embodiment, the generation steps may further include: in response to determining that a new presented video has been obtained, continuing to execute the determining steps using the new presented video as the target presented video and using the post-addition candidate feature vector set as the initial candidate feature vector set.
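The generation steps above amount to a loop that folds each newly obtained presented video into the growing candidate set. A minimal sketch, where `to_feature_vector` stands in for the whole frame-extraction-plus-model pipeline and all names are illustrative:

```python
def generate_candidate_set(video_stream, to_feature_vector):
    # `video_stream` yields presented videos; `to_feature_vector` maps a
    # presented video to its feature vector (frame extraction + vector
    # transformation model + combination of frame vectors).
    candidate_set = []                  # initial candidate feature vector set
    stream = iter(video_stream)
    target = next(stream, None)         # first target presented video
    while target is not None:           # the "determining steps", repeated
        candidate_set.append(to_feature_vector(target))  # post-addition set
        target = next(stream, None)     # try to obtain a new presented video
    return candidate_set                # no new video: the set is final

videos = [[[1, 2], [3, 4]], [[5, 6]]]          # each video: list of frame vectors
summed = lambda v: [sum(c) for c in zip(*v)]   # combine frame vectors by summing
print(generate_candidate_set(videos, summed))  # [[4, 6], [5, 6]]
```

Each loop iteration mirrors one pass of the determining steps: compute the target video's feature vector, add it to the set, then check whether a new presented video is available.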
It can be understood that all the units recorded in the apparatus 500 correspond to the respective steps of the method described with reference to Fig. 2. Accordingly, the operations, features and beneficial effects described above for the method are equally applicable to the apparatus 500 and the units contained therein, and are not repeated here.
The apparatus 500 provided by the above embodiment of the application effectively uses the candidate feature vector set to determine presented videos similar to the historical presented video corresponding to the target user, which facilitates subsequent processing of the determined presented videos, for example, controlling the presentation times of the determined presented videos, thereby improving the pertinence and diversity of information processing.
Referring now to Fig. 6, a structural schematic diagram of a computer system 600 of a server suitable for implementing the embodiments of the present application is shown. The server shown in Fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded into a random access memory (RAM) 603 from a storage portion 608. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, the ROM 602 and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, etc.; an output portion 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), etc., and a loudspeaker, etc.; a storage portion 608 including a hard disk, etc.; and a communication portion 609 including a network interface card such as a LAN card or a modem. The communication portion 609 performs communication processing via a network such as the Internet. A driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disc, a magneto-optical disc or a semiconductor memory, is mounted on the driver 610 as needed, so that a computer program read therefrom is installed into the storage portion 608 as needed.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 609, and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above functions defined in the methods of the present application are executed. It should be noted that the computer-readable medium described herein may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by, or in combination with, an instruction execution system, apparatus or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and may send, propagate or transmit a program for use by, or in combination with, an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of the systems, methods and computer program products according to the various embodiments of the present application. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a part of code, and the module, program segment or part of code contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two successively represented boxes may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should further be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that executes the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by means of software or by means of hardware. The described units may also be provided in a processor; for example, it may be described as: a processor comprising a video acquisition unit, a vector generation unit, a first determination unit, a second determination unit and a video selection unit. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves; for example, the video acquisition unit may also be described as "a unit for obtaining a historical presented video".
As another aspect, the present application also provides a computer-readable medium. The computer-readable medium may be included in the server described in the above embodiments, or may exist alone without being assembled into the server. The computer-readable medium carries one or more programs, and when the one or more programs are executed by the server, the server is caused to: obtain a historical presented video corresponding to a target user, where the historical presented video is a video output, within a historical time period, to the target user terminal used by the target user for the target user to browse; extract a video frame from the historical presented video, and input the extracted video frame into a pre-trained vector transformation model to obtain a feature vector; determine, based on the obtained feature vector, the feature vector corresponding to the historical presented video; determine, from a predetermined candidate feature vector set, a candidate feature vector whose similarity to the feature vector corresponding to the historical presented video is greater than or equal to a preset threshold as a target feature vector, where the candidate feature vectors in the candidate feature vector set correspond to the presented videos in a predetermined presented-video set; and select, from the presented-video set, the presented video corresponding to the target feature vector.
The above description is only a preferred embodiment of the present application and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example, technical solutions formed by mutually replacing the above features with (but not limited to) the technical features with similar functions disclosed in the present application.
Claims (16)
1. a kind of method for handling information, comprising:
Obtaining history corresponding to target user is in current video, wherein history is defeated in historical time section in current video
Out to target terminal user used in the target user, for the target user browse video;
It is in video frame to be extracted in current video, and extracted video frame input vector trained in advance is turned from the history
Change model, obtains feature vector;
Based on feature vector obtained, determine history in feature vector corresponding to current video;
It is determined from predetermined candidate feature vector set to history in the similar of feature vector corresponding to current video
Degree is more than or equal to the candidate feature vector of preset threshold as target feature vector, wherein the time in candidate feature vector set
Selecting feature vector to correspond to predetermined is in current video collection in current video;
From described in choosing in current video collection corresponding to the target feature vector in current video.
2. according to the method described in claim 1, wherein, the candidate feature vector set corresponds to vector index engine;And
The determining and history from predetermined candidate feature vector set is in feature vector corresponding to current video
Similarity is more than or equal to the candidate feature vector of preset threshold as target feature vector, comprising:
Using vector index engine corresponding to the candidate feature vector set, to history in feature corresponding to current video
Vector is retrieved, and the candidate feature vector retrieved is determined as target feature vector.
3. according to the method described in claim 1, wherein, described special in the target is chosen in current video collection from described
It levies corresponding to vector in after current video, the method also includes:
Determine that the target user browses the time that the history is in current video using target terminal user;
After identified time preset duration, selected is exported in current video to the target terminal user.
4. according to the method described in claim 1, wherein, it is described from the history in extracting video frame in current video, and
By extracted video frame input vector transformation model trained in advance, feature vector is obtained, comprising:
It is at least two video frames to be extracted in current video, and extracted at least two video frame is distinguished from the history
Input vector transformation model trained in advance, obtains at least two feature vectors.
5. it is described to be based on feature vector obtained according to the method described in claim 4, wherein, determine history in current view
Feature vector corresponding to frequency, comprising:
It sums at least two feature vector obtained, obtains summed result as history in corresponding to current video
Feature vector.
6. method described in one of -5 according to claim 1, wherein the candidate feature vector set passes through following generation step
It obtains:
It is in current video and initial candidate feature vector set based on target, executes step identified below: from target in current view
Video frame is extracted in frequency, and extracted video frame is inputted into the vector transformation model, obtains target in current video
Feature vector corresponding to video frame;Feature vector corresponding to video frame based on target in current video, determines that target is in
Feature vector corresponding to current video;Target is added in feature vector corresponding to current video as candidate feature vector
Into predetermined initial candidate feature vector set, candidate feature vector set after addition is generated;Determine whether to get
New is in current video;It has not been obtained new in current video in response to determining, candidate feature vector set after addition is determined
For candidate feature vector set.
7. according to the method described in claim 6, wherein, the generation step further include:
In response to determine get it is new in current video, use it is new in current video as target in current video, use
Candidate feature vector set continues to execute the determining step as initial candidate feature vector set after addition.
8. a kind of for handling the device of information, comprising:
Video acquisition unit is configured to obtain history corresponding to target user in current video, wherein history is in current view
Frequency is to target terminal user used in the target user, clear for the target user to export in historical time section
The video look at;
Vector generation unit is configured to from the history in extracting video frame in current video, and by extracted video
Frame input vector transformation model trained in advance, obtains feature vector;
First determination unit is configured to determine history in feature corresponding to current video based on feature vector obtained
Vector;
Second determination unit is configured to from predetermined candidate feature vector set determining and history in current video institute
The similarity of corresponding feature vector is more than or equal to the candidate feature vector of preset threshold as target feature vector, wherein waits
Selecting the candidate feature vector in feature vector set to correspond to predetermined is in current video collection in current video;
Video selection unit, be configured to be in from described in choosing corresponding to the target feature vector in current video collection
Current video.
9. device according to claim 8, wherein the candidate feature vector set corresponds to vector index engine;And
Second determination unit is further configured to:
Using vector index engine corresponding to the candidate feature vector set, to history in feature corresponding to current video
Vector is retrieved, and the candidate feature vector retrieved is determined as target feature vector.
10. device according to claim 8, wherein described device further include:
Time determination unit is configured to determine the target user using target terminal user and browses the history in current view
The time of frequency;
Video output unit, is configured to after identified time preset duration, will be selected defeated in current video
Out to the target terminal user.
11. device according to claim 8, wherein the vector generation unit is further configured to:
It is at least two video frames to be extracted in current video, and extracted at least two video frame is distinguished from the history
Input vector transformation model trained in advance, obtains at least two feature vectors.
12. device according to claim 11, wherein first determination unit is further configured to:
It sums at least two feature vector obtained, obtains summed result as history in corresponding to current video
Feature vector.
13. The device according to any one of claims 8-12, wherein the candidate feature vector set is obtained through the following generation steps:
based on a target to-be-browsed video and an initial candidate feature vector set, performing the following determination steps: extracting a video frame from the target to-be-browsed video, and inputting the extracted video frame into the vector transformation model to obtain a feature vector corresponding to the video frame of the target to-be-browsed video; determining, based on the feature vector corresponding to the video frame, a feature vector corresponding to the target to-be-browsed video; adding the feature vector corresponding to the target to-be-browsed video as a candidate feature vector into the predetermined initial candidate feature vector set to generate a post-addition candidate feature vector set; determining whether a new to-be-browsed video is obtained; and in response to determining that no new to-be-browsed video is obtained, determining the post-addition candidate feature vector set as the candidate feature vector set.
14. The device according to claim 13, wherein the generation steps further comprise:
in response to determining that a new to-be-browsed video is obtained, using the new to-be-browsed video as the target to-be-browsed video and the post-addition candidate feature vector set as the initial candidate feature vector set, and continuing to perform the determination steps.
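The generation steps of claims 13 and 14 can be read as a loop: while new to-be-browsed videos keep arriving, compute each video's feature vector and add it to the candidate set. The Python sketch below assumes a toy `model` in place of the patent's pre-trained vector transformation model and represents each video simply as a list of frames:

```python
def build_candidate_set(videos, model):
    # Generation steps of claims 13-14: for each to-be-browsed video,
    # derive per-frame vectors with the vector transformation model,
    # sum them into the video's feature vector, and add it to the
    # candidate set; stop once no new to-be-browsed video is obtained.
    candidate_set = []          # initial candidate feature vector set
    pending = list(videos)      # queue of to-be-browsed videos
    while pending:              # "is a new to-be-browsed video obtained?"
        frames = pending.pop(0) # one video, given as a list of frames
        frame_vecs = [model(f) for f in frames]
        video_vec = [sum(c) for c in zip(*frame_vecs)]
        candidate_set.append(video_vec)
    return candidate_set

# Toy stand-in for the pre-trained vector transformation model.
model = lambda frame: [float(x) for x in frame]
videos = [[[1, 2], [3, 4]], [[5, 6]]]
cands = build_candidate_set(videos, model)
```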
15. A server, comprising:
one or more processors; and
a storage device on which one or more programs are stored,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-7.
16. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-7.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811289810.8A CN109446379A (en) | 2018-10-31 | 2018-10-31 | Method and apparatus for handling information |
| PCT/CN2019/101686 WO2020088048A1 (en) | 2018-10-31 | 2019-08-21 | Method and apparatus for processing information |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811289810.8A CN109446379A (en) | 2018-10-31 | 2018-10-31 | Method and apparatus for handling information |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN109446379A true CN109446379A (en) | 2019-03-08 |
Family
ID=65549590
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201811289810.8A Pending CN109446379A (en) | 2018-10-31 | 2018-10-31 | Method and apparatus for handling information |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN109446379A (en) |
| WO (1) | WO2020088048A1 (en) |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9058385B2 (en) * | 2012-06-26 | 2015-06-16 | Aol Inc. | Systems and methods for identifying electronic content using video graphs |
| CN105141903B (en) * | 2015-08-13 | 2018-06-19 | 中国科学院自动化研究所 | A kind of method for carrying out target retrieval in video based on colouring information |
| CN107577737A (en) * | 2017-08-25 | 2018-01-12 | 北京百度网讯科技有限公司 | Method and device for pushing information |
| CN109446379A (en) * | 2018-10-31 | 2019-03-08 | 北京字节跳动网络技术有限公司 | Method and apparatus for handling information |
- 2018-10-31: CN application CN201811289810.8A filed (status: Pending)
- 2019-08-21: WO application PCT/CN2019/101686 filed (status: Ceased)
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2015034850A2 (en) * | 2013-09-06 | 2015-03-12 | Microsoft Corporation | Feature selection for recommender systems |
| CN106407401A (en) * | 2016-09-21 | 2017-02-15 | 乐视控股(北京)有限公司 | A video recommendation method and device |
| CN106547908A (en) * | 2016-11-25 | 2017-03-29 | 三星电子(中国)研发中心 | A kind of information-pushing method and system |
| CN107016592A (en) * | 2017-03-08 | 2017-08-04 | 美的集团股份有限公司 | Home appliance based on application guide page recommends method and apparatus |
| CN107105349A (en) * | 2017-05-17 | 2017-08-29 | 东莞市华睿电子科技有限公司 | A video recommendation method |
| CN108307240A (en) * | 2018-02-12 | 2018-07-20 | 北京百度网讯科技有限公司 | Video recommendation method and device |
Non-Patent Citations (1)
| Title |
|---|
| Wang Tong (王彤): "Digital Media Content Management: Technology and Practice" (《数字媒体内容管理技术与实践》), 31 May 2014 * |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2020088048A1 (en) * | 2018-10-31 | 2020-05-07 | 北京字节跳动网络技术有限公司 | Method and apparatus for processing information |
| CN112182290A (en) * | 2019-07-05 | 2021-01-05 | 北京字节跳动网络技术有限公司 | Information processing method and device and electronic equipment |
| CN111836064A (en) * | 2020-07-02 | 2020-10-27 | 北京字节跳动网络技术有限公司 | Live broadcast content monitoring method and device |
| CN111836064B (en) * | 2020-07-02 | 2022-01-07 | 北京字节跳动网络技术有限公司 | Live broadcast content identification method and device |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2020088048A1 (en) | 2020-05-07 |
Similar Documents
| Publication | Title |
|---|---|
| CN109460513A (en) | Method and apparatus for generating a click-through rate prediction model |
| CN109902186A (en) | Method and apparatus for generating a neural network |
| CN109360028A (en) | Method and apparatus for pushing information |
| CN108446387A (en) | Method and apparatus for updating a face registration library |
| CN110110811A (en) | Method and apparatus for training a model, and method and apparatus for predicting information |
| CN109410253B (en) | Method, apparatus, electronic device and computer-readable medium for generating information |
| CN108595628A (en) | Method and apparatus for pushing information |
| CN109492128A (en) | Method and apparatus for generating a model |
| CN109165573A (en) | Method and apparatus for extracting video feature vectors |
| CN109308490A (en) | Method and apparatus for generating information |
| CN108960316A (en) | Method and apparatus for generating a model |
| CN109815365A (en) | Method and apparatus for processing video |
| CN109376267A (en) | Method and apparatus for generating a model |
| CN109145828A (en) | Method and apparatus for generating a video classification detection model |
| CN108989882A (en) | Method and apparatus for outputting music clips in a video |
| CN108345387A (en) | Method and apparatus for outputting information |
| CN108960110A (en) | Method and apparatus for generating information |
| CN109214501A (en) | Method and apparatus for identifying information |
| CN108776692A (en) | Method and apparatus for processing information |
| CN108595211A (en) | Method and apparatus for outputting data |
| CN108182472A (en) | Method and apparatus for generating information |
| CN109710507A (en) | Method and apparatus for automatic testing |
| CN108595448A (en) | Information pushing method and device |
| CN109446379A (en) | Method and apparatus for processing information |
| CN110084317A (en) | Method and apparatus for identifying images |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20190308 |