
CN113630618B - Video processing method, device and system - Google Patents


Info

Publication number
CN113630618B
CN113630618B CN202110901970.9A CN202110901970A CN113630618B CN 113630618 B CN113630618 B CN 113630618B CN 202110901970 A CN202110901970 A CN 202110901970A CN 113630618 B CN113630618 B CN 113630618B
Authority
CN
China
Prior art keywords
video
information
service
file
attribute information
Prior art date
Legal status
Active
Application number
CN202110901970.9A
Other languages
Chinese (zh)
Other versions
CN113630618A (en)
Inventor
刘瑞洲 (Liu Ruizhou)
Current Assignee
Shanghai Bilibili Technology Co Ltd
Original Assignee
Shanghai Bilibili Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Bilibili Technology Co Ltd filed Critical Shanghai Bilibili Technology Co Ltd
Priority to CN202110901970.9A
Publication of CN113630618A
Application granted
Publication of CN113630618B
Legal status: Active


Classifications

    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/23109: Content storage operation by placing content in organized collections, e.g. EPG data repository
    • H04N21/2343: Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/4332: Content storage operation by placing content in organized collections, e.g. local EPG data repository
    • H04N21/4402: Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/4788: Supplemental services communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The application provides a video processing method, device and system. The video processing method includes: in response to a recording start instruction from an anchor, acquiring a push file, pushed by a distribution server, that the anchor generates during live broadcasting; creating segmented videos based on the push file, and determining video attribute information of the segmented videos; storing the segmented videos and the video attribute information in a service storage space; and, when the anchor closes the live broadcast, creating a recorded broadcast service video based on the segmented videos and the video attribute information in the service storage space and publishing the recorded broadcast service video.

Description

Video processing method, device and system
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a video processing method, apparatus, and system.
Background
With the development of internet technology, more and more services are moving online, and as these services become richer they can bring more and more types of service to users. In general, after a service user initiates a service through a service platform, other users can take part in it online. To give users who missed the service, or who took part in it, a way to look back at it afterwards, a service platform usually re-collects the video streams involved in the service after the service is closed and then creates a recorded broadcast video based on those streams, so that users can still learn about the service content after it has ended. In the prior art, however, after the service platform acquires the video streams it still has to download and transcode them, which takes considerable time, and the notification and interaction information produced by users during the service cannot be traced back.
Disclosure of Invention
In view of this, embodiments of the present application provide a video processing method. The application also relates to a video processing device, a video processing system, a computing device and a computer readable storage medium, which are used for solving the problem of long creation time period of retrospective video in the prior art.
According to a first aspect of an embodiment of the present application, there is provided a video processing method, including:
in response to a recording start instruction from an anchor, acquiring a push file, pushed by a distribution server, that the anchor generates during live broadcasting;
creating a segmented video based on the push file, and determining video attribute information of the segmented video;
storing the segmented video and the video attribute information in a service storage space;
and, when the anchor closes the live broadcast, creating a recorded broadcast service video based on the segmented video and the video attribute information in the service storage space and publishing the recorded broadcast service video.
According to a second aspect of embodiments of the present application, there is provided a video processing apparatus, including:
an acquisition module configured to, in response to a recording start instruction from the anchor, acquire a push file, pushed by the distribution server, that the anchor generates during live broadcasting;
a determining module configured to create a segmented video based on the push file and determine video attribute information of the segmented video;
a storage module configured to store the segmented video and the video attribute information in a service storage space;
and a creation module configured to, when the anchor closes the live broadcast, create and publish a recorded broadcast service video based on the segmented video and the video attribute information in the service storage space.
According to a third aspect of embodiments of the present application, there is provided a video processing system, comprising:
the system comprises an anchor end, a recording end, a service end and a content distribution network;
the anchor end is configured to collect a push file generated by an anchor during live broadcasting and send the push file to the content distribution network;
the content distribution network is configured to push the push file to the recording end when it is determined that the anchor has turned on the recording function;
the recording end is configured to create a segmented video based on the push file and determine video attribute information of the segmented video; store the segmented video and the video attribute information in a service storage space; and, when the anchor closes the live broadcast, send the segmented video and the video attribute information in the service storage space to the service end;
and the service end is configured to create and publish a recorded broadcast service video based on the segmented video and the video attribute information.
According to a fourth aspect of embodiments of the present application, there is provided a computing device comprising a memory, a processor and computer instructions stored on the memory and executable on the processor, the processor implementing the steps of the video processing method when executing the instructions.
According to a fifth aspect of embodiments of the present application, there is provided a computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the video processing method.
According to the video processing method provided by the application, in response to a recording start request from an anchor, a push file that the anchor generates during live broadcasting and that is pushed by a distribution server is acquired; a segmented video corresponding to the current stage can then be created directly from the push file and its video attribute information determined, and the video attribute information and the segmented video are stored in a service storage space; by repeating this cycle, recording is carried out while the broadcast is running. When the anchor closes the live broadcast, all segmented videos and their video attribute information can be extracted directly from the service storage space to create and publish the recorded broadcast service video, so the recorded broadcast service video can be generated without a long wait and the video can be published promptly. Users who missed the live broadcast, or who want to look back at it, can watch the video in time, a quick handover is achieved between the running stage of the live broadcast service and the publishing stage of the recorded broadcast service video, and the participation experience of users is improved.
Drawings
Fig. 1 is a schematic diagram of a video processing method according to an embodiment of the present application;
FIG. 2 is a flow chart of a video processing method according to an embodiment of the present application;
fig. 3 is a schematic diagram of playing back a recorded broadcast service video according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a video processing system according to an embodiment of the present application;
fig. 6 is a flowchart of a video processing system applied in a live scene according to an embodiment of the present application;
FIG. 7 is a block diagram of a computing device according to one embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The application may, however, be embodied in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from the spirit of the application; the application is therefore not limited to the specific embodiments disclosed below.
The terminology used in one or more embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of one or more embodiments of the application. As used in this application in one or more embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present application refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, etc. may be used in one or more embodiments of the present application to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, a first may also be referred to as a second, and similarly, a second may also be referred to as a first, without departing from the scope of one or more embodiments of the present application. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context.
In the present application, a video processing method is provided, and the present application relates to a video processing apparatus, a video processing system, a computing device, and a computer-readable storage medium, which are described in detail in the following embodiments one by one.
In practical applications, with the development of live broadcast services, live streaming has become one of the important leisure and entertainment activities for users, and audiences can watch game competitions, sports events or personal knowledge sharing through live broadcasts. To give the audience a better viewing experience, a live platform usually turns the video produced while the anchor was live into a recorded broadcast video after the anchor goes off-air, for the audience to browse again. However, generating the recorded video requires downloading and transcoding the video and takes a long time to create, so viewers cannot see the recorded video quickly; the long handover period between the live stage and the recorded stage can affect the users' participation experience to a great extent.
In view of this, referring to the schematic diagram shown in fig. 1, in order to generate recorded broadcast video quickly in a live scene, during the live broadcast a recording cluster acquires the push file distributed by the live CDN (Content Delivery Network), and at the same time collects the virtual gift information, barrage (danmaku) information, minute-level data index information and the like involved in the live broadcast; the video stream in the push file is split into small segmented videos, and as each segmented video is created it is stored in Redis together with this data, the cycle repeating until the live broadcast ends. After the anchor goes off-air, the live video, virtual gift information, barrage information, minute-level data index information and the like can be extracted directly from Redis without downloading or transcoding the video, and the barrage index information and the recorded video are generated and published based on the video slice information, barrage slice information and index slice information, so that the recorded video can be created and published shortly after the anchor goes off-air, the live stage and the recorded stage are connected, and the user experience is improved.
According to the video processing method provided by the application, in response to a recording start request from an anchor, a push file that the anchor generates during live broadcasting and that is pushed by a distribution server is acquired; a segmented video corresponding to the current stage can then be created directly from the push file and its video attribute information determined, and the video attribute information and the segmented video are stored in a service storage space; by repeating this cycle, recording is carried out while the broadcast is running. When the anchor closes the live broadcast, all segmented videos and their video attribute information can be extracted directly from the service storage space to create and publish the recorded broadcast service video, so the recorded broadcast service video can be generated without a long wait and the video can be published promptly. Users who missed the live broadcast, or who want to look back at it, can watch the video in time, a quick handover is achieved between the running stage of the live broadcast service and the publishing stage of the recorded broadcast service video, and the participation experience of users is improved.
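To make the flow above concrete, the following is a minimal, self-contained Python sketch of the record-while-live loop; the function names, the in-memory list standing in for the service storage space, and the simulated push files are illustrative assumptions, not part of the claimed method.

    def fetch_push_files():
        # Stand-in for push files delivered by the content distribution network
        # while the anchor is live (three files are simulated here).
        yield from ["push_0", "push_1", "push_2"]

    def main():
        storage = []                                        # stand-in for the service storage space
        for index, push_file in enumerate(fetch_push_files()):   # S202: acquire push file
            segment = f"segment({push_file})"               # S204: create a segmented video
            attrs = {"index": index, "source": push_file}   #        and its video attribute information
            storage.append((segment, attrs))                # S206: store segment and attributes
        # S208: the anchor closes the live broadcast, so splice stored segments in index order
        ordered = sorted(storage, key=lambda item: item[1]["index"])
        recorded_video = " + ".join(segment for segment, _ in ordered)
        print("published recorded broadcast video:", recorded_video)

    main()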
Fig. 2 shows a flowchart of a video processing method according to an embodiment of the present application, which specifically includes the following steps:
step S202, a push file generated in a live broadcast process by a host is obtained in response to a record starting instruction of the host, wherein the push file is pushed by a distribution server.
Specifically, the push file specifically refers to a push file generated in the running process of live broadcast opened by a host, and the file supports live broadcast running; correspondingly, the distribution server specifically refers to a CDN (content distribution network) node for interfacing with the anchor end, and as the content distribution network is a node for performing the work of resource scheduling, load balancing and the like, the content distribution network continuously receives the push file pushed by the anchor end; under the condition that the anchor starts the live broadcast recording function, the content distribution network pushes push files generated in the live broadcast process to the server.
Based on the method, in order to quickly generate the recorded broadcast service video after live broadcast closing so as to realize quick connection between the live broadcast operation stage and the live broadcast closing stage for generating the recorded broadcast service video, the push file pushed by the distribution server can be directly acquired under the condition that the live broadcast is in an operation state, thereby realizing the purpose of creating the recorded broadcast service video while the live broadcast is operated, and further saving the time for creating the recorded broadcast service video.
Further, when obtaining the push file generated in the live broadcast service process, the method may be implemented based on the request of the anchor, and in this embodiment, the specific implementation manner is as follows:
receiving a live broadcast starting request submitted by the anchor for a live broadcast service; and under the condition that the live broadcast starting request contains the recording starting instruction, acquiring the push file generated in the live broadcast process by the anchor pushed by the distribution server.
Specifically, the live broadcast start request specifically refers to a request submitted by a host when the live broadcast platform has a live broadcast start requirement, when the live broadcast start request is received, live broadcast is started and operated, for example, in a live broadcast scene, when the live broadcast platform server receives the live broadcast start request submitted by the host, the live broadcast platform server starts a live broadcast service of the host according to the live broadcast start request. Correspondingly, the recording start instruction specifically refers to an instruction for recording live content after live broadcast is started.
Based on the method, after a live broadcast start request is received and submitted by a host for a live broadcast service, the host needs to start live broadcast at the moment so that other users can participate in the live broadcast; the server side will also start live broadcast according to the live broadcast start request. Under the condition that the live broadcast opening request contains a recording opening instruction, the fact that the anchor needs to create recorded broadcast service video after live broadcast closing is described, recording service can be started according to the recording instruction at the moment, namely, a plug flow file generated in the running process of live broadcast service started by the anchor is collected.
In practical application, after the live broadcast service generates a push file in the running process, the push file is sent to a Content Delivery Network (CDN) to perform operations such as load balancing and scheduling, and the push file is continuously generated along with the running of the live broadcast service, so that when the push file is collected, the push file can be realized based on the content delivery network, that is, when the push file is generated and sent to the content delivery network, the push file can be extracted from the content delivery network according to a recording instruction for subsequent video processing, and the process is performed along with the running of the live broadcast service, so that the push file is collected while the live broadcast service is running, and the creation of recorded broadcast service videos is conveniently and rapidly completed.
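As a rough illustration of the above, the sketch below shows one way a server might inspect a live-open request for a recording start flag before subscribing to the content distribution network; the request fields ("anchor_id", "record") and the pull address are hypothetical.

    from typing import Any, Dict

    def handle_live_open(request: Dict[str, Any]) -> Dict[str, Any]:
        """Open the live session and decide whether to start the recording service."""
        anchor_id = request["anchor_id"]
        recording = bool(request.get("record", False))     # recording start instruction present?
        session = {"anchor_id": anchor_id, "live": True, "recording": recording}
        if recording:
            # the recording end would now begin pulling push files from the CDN node
            session["push_source"] = f"cdn://live/{anchor_id}"   # hypothetical pull address
        return session

    print(handle_live_open({"anchor_id": "anchor_a", "record": True}))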
In summary, by providing the record request submitting interface for the anchor, not only can the subsequent operation of creating the record service video be selectively turned on/off according to the anchor demand, but also more flexible selection can be provided for the anchor, thereby further improving the participation experience of the anchor.
Step S204, a segmented video is created based on the push file, and video attribute information of the segmented video is determined.
Specifically, on the basis of the obtained push file, further, in order to realize the subsequent and rapid completion of the creation of the recorded broadcast service video, a slicing video can be created based on the push file. That is, in order to facilitate storage and creation of the recorded broadcast service video, at this time, the video stream related to the obtained push file is split into segmented videos with shorter duration, and then after each segmented video is created, video attribute information corresponding to each segmented video is determined, where the video attribute information specifically refers to information corresponding to an attribute of each segmented video, including, but not limited to, start time information, duration information, file size information and the like of each segmented video.
Further, in the process of creating the slice video, considering that the push file is continuously generated, correspondingly, if the slice video needs to be created, the video stream related to the push file needs to be integrated, and a preset processing strategy is adopted to complete, in this embodiment, the specific implementation manner is as follows:
analyzing the push file to obtain a video stream corresponding to the anchor; and performing slicing processing on the video stream according to a preset slicing strategy, and generating the slicing video according to a slicing processing result.
Specifically, the video stream specifically refers to a video stream related to the push file, for example, in a live scene, the video stream is a video segment of the live broadcast content of the anchor contained in the push file; accordingly, the preset slicing strategy specifically refers to a strategy of creating slicing videos, and since the live broadcast service is operated for a long time, in order to improve the efficiency of creating the recorded service videos later, a plurality of slicing videos can be created based on the video stream, that is, the video stream is sliced according to the preset slicing strategy, so as to obtain the slicing videos.
Based on this, in the process of creating segmented videos from the push files, the push files are generated continuously while the live broadcast service runs, and a segmented video is a piece of video of a set length; therefore, after push files are obtained, the creation of one segmented video can be completed once the length of the video stream carried by the push files reaches the set length, and so on, yielding all the segmented videos split out of the video streams of all push files involved from the opening to the closing of the live broadcast service. Subsequent processing can be carried out as soon as each segmented video is obtained, so each segmented video is processed continuously, which effectively reduces the time needed to create the recorded broadcast service video later and improves its publishing efficiency.
In the implementation, because the length of the video stream related to the push file may be shorter, if the slicing process is directly performed, a large amount of slicing videos are generated, and at this time, the resource processing pressure is increased; in view of this, the slicing strategy provided in this embodiment may perform the splicing process on the video stream obtained by parsing, and then split the video stream according to the slicing length, that is, first form a video stream with a longer duration, and then perform the slicing process, so as to obtain a plurality of sliced videos according to the splitting result, which is convenient for subsequent processing.
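A minimal sketch of this splice-then-split strategy is shown below; the 1800-second (30-minute) slice length, the (duration, payload) chunk shape, and the decision to cut on chunk boundaries are assumptions made only for illustration.

    from typing import Iterable, Iterator, Tuple

    def slice_stream(chunks: Iterable[Tuple[float, bytes]],
                     slice_len: float = 1800.0) -> Iterator[Tuple[float, bytes]]:
        """chunks: (duration_seconds, payload) pieces parsed out of the push files.
        Yields one (segment_duration, segment_payload) per completed segmented video."""
        buf, buf_dur = bytearray(), 0.0
        for duration, payload in chunks:
            buf.extend(payload)              # splice the video stream pieces first
            buf_dur += duration
            if buf_dur >= slice_len:         # then cut a segmented video at the slice length
                yield buf_dur, bytes(buf)    # (the cut lands on a chunk boundary in this sketch)
                buf, buf_dur = bytearray(), 0.0
        if buf_dur > 0:                      # the final partial segment is flushed when live ends
            yield buf_dur, bytes(buf)

    for dur, data in slice_stream([(1200.0, b"a" * 3), (1200.0, b"b" * 3), (900.0, b"c" * 3)]):
        print(round(dur), len(data))         # prints "2400 6", then "900 3"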
In conclusion, the video stream obtained through analysis is subjected to slicing processing by adopting a preset slicing strategy, so that each sliced video can be guaranteed to have similar attributes, a foundation can be laid for subsequent creation of recorded broadcast service videos, and the efficiency of subsequent creation of recorded broadcast service videos is improved by adopting a slicing mode.
Furthermore, since the live broadcast platform provides live broadcast services to multiple anchors at the same time, after the segmented videos are created from the push file corresponding to an anchor, the video attribute information of each segmented video needs to be determined so that the segmented video and its video attribute information are stored together; this makes it possible, when the recorded broadcast service video is later generated for the live broadcast service started by that anchor, to determine which object the recorded broadcast service video is created for. In this embodiment, the specific implementation manner is as follows:
Analyzing the segmented video to obtain starting time information, video duration information, video space occupation information and video sequence index information of the segmented video; determining a storage space identifier according to the storage position of the live broadcast attribute information in the service storage space; the video attribute information of the segmented video is created based on the start time information, the video duration information, the video space occupation information, the video sequence index information, and the storage space identification.
Specifically, the start time information specifically refers to start time information of each segmented video; the video duration information specifically refers to the playing duration information of each segmented video; the video space occupation information specifically refers to file size information of each segmented video; the video sequence index information specifically refers to information corresponding to the front-to-back sequence among the segmented videos, and is convenient for avoiding the problem of the playing sequence of the recorded broadcast service video caused by the confusion of the segmented videos when the recorded broadcast service video is subsequently created. Accordingly, the live broadcast attribute information specifically refers to information related to a live broadcast service, including, but not limited to, a unique identifier corresponding to the current execution of the live broadcast service, a unique identifier corresponding to the live broadcast service, timestamp information, and the like. It should be noted that, live broadcast attribute information is generated when live broadcast service is created and is stored in the service storage space in advance, and the subsequently generated segmented video and video attribute information are both merged and written into the service storage space in the same storage position as long as being associated with a host broadcast, so that subsequent management and use are facilitated.
Based on the above, after the segmented video is created according to the push file, in order to be able to successfully create the recording and playing service video in the following, at this time, the starting time information, the video duration information, the video space occupation information and the video index information of each segmented video can be obtained by analyzing each segmented video, meanwhile, the storage space identifier is determined according to the storage position of the live broadcast attribute information in the service storage space, and then the starting time information, the video duration information, the video space occupation information, the video index information and the storage space identifier are integrated to generate the video attribute information corresponding to each segmented video, so as to facilitate the use when creating the recording and playing service video.
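The following sketch assembles the video attribute information described above for one segmented video; the field names and the describe_segment helper are hypothetical, chosen to mirror the start time, duration, file size, sequence index and storage space identifier mentioned in this embodiment.

    from dataclasses import dataclass, asdict

    @dataclass
    class VideoAttributes:
        start_time: float    # ST: when this segment starts within the live session
        duration: float      # D: playback length of the segment, in seconds
        file_size: int       # FS: size of the segment file, in bytes
        index: int           # I: sequence index used later to order the segments
        bucket: str          # storage space identifier shared with the live attribute information

    def describe_segment(payload: bytes, start_time: float, duration: float,
                         index: int, bucket: str) -> dict:
        return asdict(VideoAttributes(start_time, duration, len(payload), index, bucket))

    print(describe_segment(b"\x00" * 1024, start_time=0.0, duration=1800.0, index=0, bucket="B1"))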
In summary, the video attribute information of each segmented video is obtained by analyzing the segmented video, so that the subsequent targeted creation of the recorded broadcast service video can be facilitated, the success of creating the recorded broadcast service video can be ensured, and the release processing of the recorded broadcast service video can be ensured to be completed in a shorter time.
Step S206, storing the fragmented video and the video attribute information into a service storage space.
Specifically, after the segmented video and its video attribute information are determined, since the live broadcast service is still running, the segmented video and its corresponding video attribute information can be temporarily stored in the service storage space corresponding to the live broadcast service, so that after the live broadcast service ends, the video attribute information and the segmented videos can be extracted directly from the service storage space to create the recorded broadcast service video; the service storage space may be implemented with the storage unit Redis, or another storage system may be selected according to the actual application scenario, which is not limited in this embodiment.
For example, when it is determined that the live start request submitted by user A carries a recording instruction for recording the live content, user A needs the current live content to be recorded so that a recorded video (a historical video generated from the live content of the game L broadcast) can be generated quickly after user A's live broadcast ends. After the broadcast starts, the push files that user A's game L live client pushes to the live CDN are continuously acquired, and segmented videos are created from them with 30 minutes as one segment: the video streams of the continuously received push files are spliced in order, and whenever the accumulated video reaches 30 minutes it is cut into one segmented video, and so on, until user A's live broadcast ends and the creation of segmented videos stops.
When each segmented video is created, in order to improve the efficiency of creating the recorded video, the attribute information of each segmented video can be determined at this time so that the segmented video can be stored conveniently. Based on this, after one segmented video is obtained, its video attribute information can be determined: by analyzing the segmented video, its Start time is ST1, its Duration (playback length) is D1, its File Size is FS1, and its Index (sequence index of the segmented video) is I1; meanwhile, the Bucket (storage space identifier) used to store the segmented video is determined to be B1. The video attribute information of this segmented video is then obtained by integrating these parameters, and by analogy every segmented video is processed with the same strategy, which facilitates subsequent storage and the creation of the recorded video. Furthermore, after the creation of a segmented video and the determination of its video attribute information are completed, the segmented video and its video attribute information can be written into the storage unit Redis according to the storage space identifier B1, so that the related information of user A's live broadcast is stored together and the recorded video can be created later.
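Since the example above uses Redis as the service storage space, the sketch below shows one plausible key layout for writing a segmented video and its attribute information under the storage space identifier; the key names are assumptions, and a running Redis server is required for the calls to succeed.

    import json

    import redis

    def store_segment(r: redis.Redis, bucket: str, index: int,
                      payload: bytes, attrs: dict) -> None:
        r.set(f"record:{bucket}:segment:{index}", payload)       # segment payload
        r.rpush(f"record:{bucket}:attrs", json.dumps(attrs))     # attribute list, kept in order

    def load_attrs(r: redis.Redis, bucket: str) -> list:
        return [json.loads(item) for item in r.lrange(f"record:{bucket}:attrs", 0, -1)]

    r = redis.Redis(host="localhost", port=6379)
    store_segment(r, "B1", 0, b"\x00" * 16,
                  {"start_time": 0.0, "duration": 1800.0, "file_size": 16, "index": 0, "bucket": "B1"})
    print(load_attrs(r, "B1"))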
Step S208, under the condition that the anchor shuts down live broadcast, a recorded broadcast service video is created and released based on the fragmented video and the video attribute information in the service storage space.
Specifically, while the live broadcast service is running, the above process is repeated continuously, so that the segmented videos involved in the operation of the live broadcast service are all stored. When the live broadcast service is closed, it means the anchor has actively stopped the live broadcast service; at this time, in order to provide the recorded broadcast service video to the anchor's audience in time, the segmented videos and their video attribute information can be extracted directly from the service storage space, the recorded broadcast service video corresponding to the live broadcast service initiated by the anchor this time is created by combining the segmented videos and the video attribute information, and the recorded broadcast service video is published. The recorded broadcast service video specifically refers to the video content corresponding to the live broadcast service during its operation; for example, in a live scene, the video generated from the anchor's live content after the broadcast ends is the recorded broadcast service video.
In practical application, when publishing the video of the recorded broadcast service, because the video is created for the live broadcast service initiated by the anchor, in order to facilitate the user to watch, the video of the recorded broadcast service can be published in a carrier operated by the live broadcast service, and the live broadcast service and the video of the recorded broadcast service which can be participated in by the carrier are related to the anchor; if in a live broadcast scene, the recorded broadcast service video is released in a live broadcast room of a host broadcast for a user watching live broadcast to review.
In particular, when the live broadcast service is closed, the last segmented video being created may not yet be complete, i.e. the slicing strategy is not satisfied; if that segmented video were discarded, the integrity of the recorded broadcast service video created afterwards could not be guaranteed. Therefore, when the live broadcast service is closed, the last segmented video can be processed directly based on the end time, and its video attribute information is determined for use in creating the recorded broadcast service video later.
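A small sketch of closing out that final, partial segment when the broadcast ends: its duration is taken from the close time instead of the fixed slice length. The helper and its arguments are illustrative only.

    import time
    from typing import Optional

    def finalize_last_segment(start_time: float, payload: bytes, index: int,
                              bucket: str, close_time: Optional[float] = None) -> dict:
        """Build attribute information for the in-progress segment at live-close time."""
        close_time = time.time() if close_time is None else close_time
        return {
            "start_time": start_time,
            "duration": max(0.0, close_time - start_time),   # measured up to the close instant
            "file_size": len(payload),
            "index": index,
            "bucket": bucket,
        }

    print(finalize_last_segment(start_time=0.0, payload=b"tail", index=5,
                                bucket="B1", close_time=900.0))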
Further, because the recorded broadcast service video to be published is not only related to the anchor but also related to the live broadcast service initiated by the anchor, considering that the server may create multiple recorded broadcast service videos for different anchors at the same time, the recorded broadcast service video may be created by combining live broadcast attribute information, and in this embodiment, the specific implementation manner is as shown in step S2082 to step S2084:
and step S2082, generating live broadcast attribute information according to the anchor information carried in the live broadcast start request, and storing the live broadcast attribute information into the service storage space.
Specifically, the anchor information specifically refers to information capable of identifying the anchor, where the information may be a unique identifier corresponding to the anchor, or an anchor password, or anchor identity information, and the embodiment is not limited in any way; correspondingly, the live broadcast attribute information specifically refers to information which needs to be generated and used when the live broadcast service is created, live broadcast service video can be conveniently bound with the live broadcast service and the anchor broadcast through the live broadcast attribute information, and the live broadcast attribute information stored in the service storage space is stored in the same position as the segmented video and the video attribute information, namely, the three have the same storage identification, so that the recorded broadcast service video can be conveniently created, extracted and used.
Further, the procedure of creating live attribute information based on the anchor information is as follows:
analyzing the live broadcast starting request to obtain the anchor information; generating a time stamp and a live broadcast identifier based on the anchor information, and reading a live broadcast address identifier corresponding to the anchor information; and integrating the timestamp, the live broadcast identifier and the live broadcast address identifier to generate the live broadcast attribute information.
Specifically, the timestamp specifically refers to the time when the live broadcast service is started, the live broadcast identifier specifically refers to the unique identifier corresponding to the live broadcast service, and the live broadcast address identifier specifically refers to the persistence identifier corresponding to the live broadcast service initiated by the anchor. In a live broadcast scene, the time stamp specifically refers to the time when the user starts live broadcast, the live broadcast identifier specifically refers to the unique identifier of the current broadcast, and the live broadcast address identifier specifically refers to the ID of the live broadcast room of the user.
Based on the above, in the case that a live broadcast opening request submitted by a host for a live broadcast service is received, at this time, the host information can be determined based on the live broadcast opening request; because the live broadcast service is a new live broadcast service, the time stamp and the live broadcast identifier corresponding to the live broadcast service are created based on the anchor information, and meanwhile, the live broadcast platform already stores the related information corresponding to the anchor and the live broadcast address identifier is uniquely allocated, so that the live broadcast address identifier can be directly read, and then the live broadcast attribute information corresponding to the live broadcast service initiated by the anchor can be obtained by integrating the time stamp, the live broadcast identifier and the live broadcast address identifier, thereby facilitating the subsequent targeted creation of recorded broadcast service videos.
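A minimal sketch of assembling the live broadcast attribute information from the anchor information, as described above; generating the live identifier with uuid and looking the room identifier up in a dict are implementation assumptions.

    import time
    import uuid

    def build_live_attributes(anchor_id: str, room_lookup: dict) -> dict:
        return {
            "timestamp": int(time.time()),      # when this live session was opened
            "live_key": uuid.uuid4().hex,       # unique identifier of this live session
            "room_id": room_lookup[anchor_id],  # persistent live address identifier of the anchor
        }

    print(build_live_attributes("id_1", {"id_1": "id_123456"}))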
Further, after the live broadcast attribute information is obtained, the live broadcast attribute information can be stored in the service storage space, and when the segmented video and the video attribute information are stored subsequently, the segmented video and the video attribute information corresponding to the segmented video are generated for the live broadcast service of this time, and the segmented video and the video attribute information corresponding to the segmented video are also related to the live broadcast service of this time, so that the segmented video and the video attribute information corresponding to the segmented video can be directly stored in the storage position corresponding to the live broadcast attribute information, and the subsequent unified calling and the use are convenient.
In sum, by combining the timestamp, the live broadcast identifier and the live broadcast address identifier to form live broadcast attribute information, the recorded broadcast service video associated with the anchor can be created in a targeted manner, and the built-in relation between the information and the video can be established, so that the targeted manner can be maintained when the recorded broadcast service video is released.
And step S2084, creating and publishing the recorded broadcast service video based on the segmented video, the video attribute information and the live broadcast attribute information in the service storage space.
Specifically, after the live broadcast attribute information, the segmented videos and their video attribute information are stored in the service storage space, if the live broadcast service is closed, the segmented videos, their video attribute information and the live broadcast attribute information related to this live broadcast service can be extracted directly from the service storage space, and the recorded broadcast service video can be created and published by combining the three.
In practical application, the live broadcast service is closed based on the control decision of the anchor, and when the anchor submits the closing request, the live broadcast service is closed immediately, and the processing operation of creating the recorded broadcast service video is triggered, so that the operation of creating and releasing the recorded broadcast service video is responded at the moment of closing the live broadcast service, and the time consumption is reduced.
Further, in the process of creating and publishing recorded broadcast service videos, considering that a live broadcast platform can dock more anchor, and different anchor can initiate different live broadcast services, when publishing, an interface corresponding to the anchor is selected for publishing, and in this embodiment, the specific implementation manner is as follows:
extracting the fragmented video, the video attribute information and the live broadcast attribute information corresponding to the anchor from the service storage space; creating the recorded broadcast service video according to the segmented video, the video attribute information and the live broadcast attribute information; and determining a release interface corresponding to the anchor based on the live broadcast attribute information, and calling the release interface to release the recorded broadcast service video.
Specifically, the publishing interface specifically refers to an interface corresponding to the anchor, and the publishing interface corresponds to the live address identifier, so that the published recorded broadcast service video is guaranteed to be associated with the live broadcast service. Based on the above, under the condition that the live broadcast service is closed, a closing request submitted by a host for the live broadcast service is described, at this time, in order to timely create and release recorded broadcast service videos, the segmented video, the video attribute information and the live broadcast attribute information corresponding to the host can be extracted from a service storage space, and then the recorded broadcast service videos are created based on the segmented video, the video attribute information and the live broadcast attribute information, so that the recorded broadcast service videos are defined to be created for the closed live broadcast service initiated by the host through the live broadcast attribute information and the video attribute information; then, in order to ensure that the video of the recorded broadcast service is released at the release address associated with the anchor, a unique release interface corresponding to the anchor can be determined based on the live broadcast attribute information when the anchor initiates the live broadcast service, and then the release interface is called to complete the release processing of the video of the recorded broadcast service.
Following the above example, when user A starts a live broadcast, the live platform may be providing live broadcast services to different users at the same time, so different live rooms are set up for different anchors so that other users can watch the live content of different users. Based on this, when user A's live broadcast starts, the corresponding Room ID is determined to be id_123456 based on user A's id_1; the Timestamp of this live broadcast is TS1; the Live Key (unique identifier of this live session) is LK1. After this live broadcast information is obtained, it is written into the storage unit Redis according to the storage space identifier B1.
Further, when user A submits a live broadcast closing request, in order to generate the recorded video quickly, the segmented videos corresponding to user A's live broadcast and their video attribute information and live broadcast information are extracted from the storage position corresponding to the storage space identifier B1 in the storage unit Redis; the obtained segmented videos are segmented video 1, segmented video 2, ..., segmented video n. The video attribute information of segmented video 1 is: Start time ST1, Duration D1, File Size FS1, Index I1; the video attribute information of segmented video 2 is: Start time ST2, Duration D2, File Size FS2, Index I2; ...; the video attribute information of segmented video n is: Start time STn, Duration Dn, File Size FSn, Index In.
Furthermore, after this information is obtained, the recorded video can be spliced from each segmented video and its video attribute information, the splicing order following the sequence indexes of the segmented videos; the length of the spliced recorded video is the total duration of segmented video 1 to segmented video n, and its file size is the total size of segmented video 1 to segmented video n. After the recorded video is obtained, in order to let other users watch user A's previous live content sooner, a recorded-video playback entry corresponding to user A can be generated based on the live broadcast information, that is, the recorded video is published in user A's live room, so that other users who enter user A's live room while user A is not live can watch user A's recorded video; after the recorded video is published and while user A is off-air, the content displayed in user A's live room is shown in (a) of fig. 3.
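The splicing step in the example above can be sketched as follows; byte concatenation stands in for the real video splicing (which would normally involve remuxing), and the data structures are illustrative.

    from typing import List, Tuple

    def splice_recorded_video(segments: List[Tuple[dict, bytes]]) -> Tuple[bytes, float, int]:
        ordered = sorted(segments, key=lambda item: item[0]["index"])   # order by sequence index
        payload = b"".join(data for _, data in ordered)
        total_duration = sum(attrs["duration"] for attrs, _ in ordered)
        total_size = sum(attrs["file_size"] for attrs, _ in ordered)
        return payload, total_duration, total_size

    segments = [({"index": 1, "duration": 900.0, "file_size": 3}, b"bbb"),
                ({"index": 0, "duration": 1800.0, "file_size": 3}, b"aaa")]
    video, duration, size = splice_recorded_video(segments)
    print(video, duration, size)    # b'aaabbb' 2700.0 6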
In sum, by combining live broadcast attribute information, the segmented video and the video attribute information corresponding to the segmented video to create the recorded broadcast service video, the recorded broadcast service video can be ensured to be distributed pertinently, and other users can browse and watch conveniently, so that the user participation experience is further improved.
In addition, when creating the recorded broadcast service video, an overly long recorded broadcast service video may be inconvenient for other users to watch, and loading the entire recorded broadcast service video at once would consume more network resources; the recorded broadcast service video can therefore be created in segments. In this embodiment, the specific implementation manner is as follows:
creating an initial recorded broadcast service video according to the segmented video, the video attribute information and the live broadcast attribute information; splitting the initial recorded broadcast service video into at least two middle recorded broadcast service videos under the condition that the play time length of the initial recorded broadcast service video is longer than a preset time length threshold value; and taking the at least two middle recorded broadcast service videos as the recorded broadcast service videos.
Specifically, the initial recording and broadcasting service video specifically refers to a recording and broadcasting service video to be distributed, which is created by combining the segmented video, the corresponding video attribute information and the live broadcast attribute information; correspondingly, the at least two intermediate recorded broadcast service videos specifically refer to each recorded broadcast service video obtained after the initial recorded broadcast service video is segmented.
Based on the above, after the initial recording service video is created based on the slice video, the video attribute information and the live broadcast attribute information, in order to facilitate other users to browse the recording service video, whether the playing time length of the initial recording service video is longer than the preset time length threshold value can be judged at this time, if not, it is indicated that the playing time length of the initial recording service video is not longer, and the initial recording service video can be directly released as the recording service video.
If yes, the playing time length of the initial recording service video is too long, if the initial recording service video is released directly, other users can not watch the initial recording service video conveniently, then the initial recording service video can be subjected to segmentation processing, two or more middle recording service videos are obtained, and then all the obtained middle recording service videos are released as recording service videos.
In practical application, when at least two intermediate recorded broadcast service videos are used as recorded broadcast service videos, if the recorded broadcast service videos are issued, a plurality of intermediate recorded broadcast service videos need to be issued, and in order to facilitate the user to watch, corresponding stage identifiers can be added in each intermediate recorded broadcast service video, so that the user knows the sequence of playing each intermediate recorded broadcast service video. Meanwhile, the play time length of the segmented middle recorded broadcast service video can be set according to the actual application scene, and the embodiment is not limited at all and is convenient for a user to watch.
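The length check and segmentation described above can be sketched as below; the one-hour threshold and the "part-N" stage identifiers are assumptions, since the embodiment leaves the split length to the actual application scenario.

    from typing import List, Tuple

    def split_for_publishing(total_duration: float,
                             max_len: float = 3600.0) -> List[Tuple[str, float]]:
        """Return (stage_identifier, duration) pairs for the videos to publish."""
        if total_duration <= max_len:
            return [("part-1", total_duration)]              # publish a single recorded video
        parts, remaining, stage = [], total_duration, 1
        while remaining > 0:
            parts.append((f"part-{stage}", min(max_len, remaining)))   # intermediate video + stage id
            remaining -= max_len
            stage += 1
        return parts

    print(split_for_publishing(9000.0))   # [('part-1', 3600.0), ('part-2', 3600.0), ('part-3', 1800.0)]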
In sum, by selectively creating the recorded broadcast service video in a mode of judging the broadcast time length, the broadcast time length of the recorded broadcast service video can be controlled, the problem that users watch the recorded broadcast service video inconveniently due to overlong broadcast time length can be avoided, and further the watching experience of the users is improved.
In practical applications, when other users watch the recorded broadcast service video, they may want not only the video content itself but also the interaction information produced by other participating users while the live broadcast service was running, which improves the viewing experience. Based on this, to meet the needs of participating users, the corresponding barrage information and virtual article information can be recorded at the same time as the segmented videos are created. In this embodiment, the specific implementation manner is as follows:
acquiring a barrage file and a virtual article file corresponding to the segmented video, and establishing an association relationship between the barrage file and the virtual article file and the segmented video; and storing the barrage file and the virtual article file into the service storage space based on the association relation.
Specifically, the barrage file refers to the file corresponding to the barrage information of each segmented video: all barrages sent within the duration of that segmented video, together with their timestamps, are stored in the file, so that the corresponding barrages can be displayed to the participating users when the recorded broadcast service video is played. Correspondingly, the virtual article file refers to the file corresponding to the virtual articles of each segmented video: the virtual gifts given to the anchor by other participating users within the duration of that segmented video, together with their timestamps, are stored in the file, so that the corresponding gift-giving records can be shown to the participating users when the recorded broadcast service video is played.
In implementation, since the live broadcast service continuously creates multiple segmented videos while it runs, each segmented video has a corresponding barrage file and virtual article file. To facilitate the subsequent creation and playback of the recorded broadcast service video, the association relationship between each segmented video and its corresponding barrage file and virtual article file can be established in advance.
On this basis, the barrage file and the virtual article file of a segmented video can be obtained while the segmented video is being created, and the association relationship between the barrage file, the virtual article file and the segmented video is then established, so that each segmented video is linked to its corresponding barrage and virtual articles. The barrage file and the virtual article file are written into the service storage space based on that association relationship. After the recorded broadcast service video is created and published, when a user wants to see the barrage and/or the virtual articles, the barrage file and/or the virtual article file can be called to generate them, and, combined with the timestamps, they are displayed to the user at the set moments while the user watches the recorded broadcast service video.
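A minimal sketch of how the association relationship could be kept in the service storage space, assuming a Redis-like key-value store accessed through the redis-py client; the key layout is an illustrative assumption and not part of the original disclosure.

import json

import redis  # assumed client; the embodiment only requires "a service storage space"

r = redis.Redis(host="localhost", port=6379, db=0)


def store_segment_assets(live_id: str, seq_index: int, segment_path: str,
                         barrage_path: str, article_path: str) -> None:
    """Write one segmented video together with its barrage file and virtual
    article file, keyed so that the association between them is preserved."""
    key = f"live:{live_id}:segment:{seq_index}"  # hypothetical key layout
    r.hset(key, mapping={
        "segment_file": segment_path,
        "barrage_file": barrage_path,            # barrages plus their timestamps
        "virtual_article_file": article_path,    # virtual gifts plus their timestamps
    })
    # Keep an ordered index of segments so the recorded broadcast video can be assembled later.
    r.rpush(f"live:{live_id}:segments", json.dumps({"seq": seq_index, "key": key}))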
Further, when a participating user wants to watch, in order to conveniently present the recorded broadcast service video together with the corresponding barrage and virtual articles, the video can be sent to that user in combination with the corresponding barrage file and virtual article file. In this embodiment, the specific implementation is as follows:
under the condition that a viewing request is submitted by the audience of the anchor for the recorded broadcast service video, extracting the barrage file and the virtual article file from the service storage space; and generating a recorded broadcast video packet based on the recorded broadcast service video, the barrage file and the virtual article file, and transmitting the recorded broadcast video packet to the audience.
In practical application, after the recorded broadcast service video is published, other participating users can look it up at the corresponding live broadcast address after the live broadcast service has ended. If a participating user who took part in the live broadcast service submits a viewing request for the recorded broadcast service video, then, to improve that user's experience, the barrage file and the virtual article file can be extracted directly from the service storage space, combined with the recorded broadcast service video into a recorded broadcast video packet, and sent to the participating user, so that the user can simultaneously see the barrages and virtual articles sent by other users while the live broadcast service was running, which improves the participation experience.
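A sketch of one way to assemble the recorded broadcast video packet from the three items described above; the tar-archive format, the file names and the manifest are illustrative assumptions, since the embodiment only requires that the video, the barrage file and the virtual article file be combined into one packet for the audience.

import json
import tarfile
from pathlib import Path


def build_playback_packet(recorded_video: Path, barrage_file: Path,
                          article_file: Path, out_dir: Path) -> Path:
    """Bundle the recorded broadcast service video with its barrage and
    virtual article files into one archive that can be sent to the viewer."""
    out_dir.mkdir(parents=True, exist_ok=True)
    packet_path = out_dir / f"{recorded_video.stem}_playback.tar"

    # A small manifest tells the client which overlay files to align by timestamp.
    manifest = out_dir / "manifest.json"
    manifest.write_text(json.dumps({"overlays": ["barrage.json", "virtual_articles.json"]}))

    with tarfile.open(packet_path, "w") as packet:
        packet.add(recorded_video, arcname="video.mp4")
        packet.add(barrage_file, arcname="barrage.json")
        packet.add(article_file, arcname="virtual_articles.json")
        packet.add(manifest, arcname="manifest.json")
    return packet_path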
Following the above example, during user A's live broadcast the audience sends a large number of barrages and virtual gifts, and when each segmented video is created, the barrage file and the virtual gift file corresponding to that segmented video's time interval are stored in the storage unit Redis. When a viewing request from user B for user A's recorded live video is received, the barrage file and the virtual gift file can be extracted from the storage unit Redis, combined with the recorded video into a recorded video packet and sent to user B's client. When watching the recorded video, user B can simultaneously see the barrages and virtual gifts sent by other users during the live broadcast; the recorded content viewed by user B is shown in (b) of fig. 3.
In conclusion, by combining the barrage file and the virtual gift file when creating the recorded broadcast video packet, participating users can conveniently see how other participating users engaged during the running of the live broadcast service, which effectively improves the viewing experience of the users.
In addition, after the recorded broadcast service video is published, it may be watched by several other participating users at the same time. To improve the participation experience of the users watching the recorded broadcast service video, a barrage sending interface and a virtual article sending interface can be provided on the interface where the recorded broadcast service video is watched; that is, users can also send barrages and give virtual gifts while watching the recorded broadcast service video, which makes it easier for them to take part in the service and improves their participation experience.
According to the video processing method provided by this application, in response to a recording start request of the anchor, the push file generated by the anchor during the live broadcast and pushed by the distribution server is obtained; the segmented video corresponding to the current stage can then be created directly from the push file, its video attribute information determined, and both stored in the service storage space, and by repeating this periodically the goal of recording while broadcasting is achieved. When the anchor closes the live broadcast, all the segmented videos and the video attribute information can be extracted directly from the service storage space to create and publish the recorded broadcast service video. The recorded broadcast service video is therefore generated without a long wait: the video can be published in time, users who missed the live broadcast or want to look back can watch it in time, and the running stage of the live broadcast service is quickly connected to the publishing stage of the recorded broadcast service video, improving the participation experience of users.
Corresponding to the above method embodiment, the present application further provides an embodiment of a video processing apparatus, and fig. 4 shows a schematic structural diagram of a video processing apparatus according to an embodiment of the present application. As shown in fig. 4, the apparatus includes:
An obtaining module 402, configured to obtain a push file generated by the anchor during the live broadcast, where the push file is pushed by a distribution server in response to a recording start instruction of the anchor;
a determining module 404, configured to create a segmented video based on the push file and determine video attribute information of the segmented video;
a storage module 406, configured to store the segmented video and the video attribute information into a service storage space;
a creating module 408, configured to create and publish a recorded broadcast service video based on the segmented video and the video attribute information in the service storage space in the case that the anchor turns off the live broadcast.
In an alternative embodiment, the acquisition module 402 is further configured to:
receiving a live broadcast starting request submitted by the anchor for a live broadcast service; and under the condition that the live broadcast starting request contains the recording starting instruction, acquiring the push file generated by the anchor in the live broadcast process and pushed by the distribution server.
In an alternative embodiment, the video processing apparatus further includes:
the information storage module is configured to generate live broadcast attribute information according to the anchor information carried in the live broadcast starting request and store the live broadcast attribute information into the service storage space;
Accordingly, the creation module 408 is further configured to: create and publish the recorded broadcast service video based on the segmented video, the video attribute information and the live broadcast attribute information in the service storage space.
In an alternative embodiment, the information storage module is further configured to:
analyzing the live broadcast starting request to obtain the anchor information; generating a time stamp and a live broadcast identifier based on the anchor information, and reading a live broadcast address identifier corresponding to the anchor information; and integrating the timestamp, the live broadcast identifier and the live broadcast address identifier to generate the live broadcast attribute information.
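The live broadcast attribute information described above could be assembled as in the sketch below; the field names, the uuid-based live broadcast identifier and the address lookup are illustrative assumptions rather than the disclosed format.

import time
import uuid


def build_live_attribute_info(anchor_info: dict, address_book: dict) -> dict:
    """Integrate a timestamp, a live broadcast identifier and the live broadcast
    address identifier read for the anchor into the live broadcast attribute information."""
    return {
        "timestamp": int(time.time()),                               # when the live broadcast was started
        "live_id": uuid.uuid4().hex,                                 # generated identifier for this broadcast
        "live_address_id": address_book[anchor_info["anchor_id"]],   # hypothetical lookup of the anchor's live address
    }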
In an alternative embodiment, the determining module 404 is further configured to:
analyzing the push file to obtain a video stream corresponding to the anchor; and performing slicing processing on the video stream according to a preset slicing strategy, and generating the slicing video according to a slicing processing result.
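One possible slicing strategy for the parsed video stream is ffmpeg's segment muxer, sketched below; the 60-second segment length, the stream-copy setting and the output naming are assumptions and not the specific preset slicing strategy disclosed here.

import subprocess


def slice_video_stream(stream_url: str, out_pattern: str = "segment_%03d.mp4") -> None:
    """Cut the anchor's video stream into fixed-length segmented videos."""
    subprocess.run([
        "ffmpeg",
        "-i", stream_url,          # e.g. the stream parsed out of the push file
        "-c", "copy",              # remux only, no re-encoding
        "-f", "segment",
        "-segment_time", "60",     # assumed segment length in seconds
        "-reset_timestamps", "1",  # each segment starts at timestamp zero
        out_pattern,
    ], check=True)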
In an alternative embodiment, the determining module 404 is further configured to:
analyzing the segmented video to obtain starting time information, video duration information, video space occupation information and video sequence index information of the segmented video; determining a storage space identifier according to the storage position of the live broadcast attribute information in the service storage space; the video attribute information of the segmented video is created based on the start time information, the video duration information, the video space occupation information, the video sequence index information, and the storage space identification.
In an alternative embodiment, the creation module 408 is further configured to:
extracting the segmented video, the video attribute information and the live broadcast attribute information corresponding to the anchor from the service storage space; creating the recorded broadcast service video according to the segmented video, the video attribute information and the live broadcast attribute information; and determining a release interface corresponding to the anchor based on the live broadcast attribute information, and calling the release interface to release the recorded broadcast service video.
In an alternative embodiment, the creation module 408 is further configured to:
creating an initial recorded broadcast service video according to the segmented video, the video attribute information and the live broadcast attribute information; splitting the initial recorded broadcast service video into at least two intermediate recorded broadcast service videos under the condition that the play time length of the initial recorded broadcast service video is longer than a preset time length threshold value; and taking the at least two intermediate recorded broadcast service videos as the recorded broadcast service videos.
In an alternative embodiment, the video processing apparatus further includes:
the establishing module is configured to acquire a barrage file and a virtual article file corresponding to the segmented video, and establish an association relationship between the barrage file and the virtual article file and the segmented video; and storing the barrage file and the virtual article file into the service storage space based on the association relation.
In an alternative embodiment, the video processing apparatus further includes:
the sending module is configured to extract the barrage file and the virtual article file from the service storage space under the condition that the audience of the anchor submits a viewing request for the recorded broadcast service video, generate a recorded broadcast video packet based on the recorded broadcast service video, the barrage file and the virtual article file, and transmit the recorded broadcast video packet to the audience.
According to the video processing apparatus provided by this application, in response to a recording start request of the anchor, the push file generated by the anchor during the live broadcast and pushed by the distribution server is obtained; the segmented video corresponding to the current stage can then be created directly from the push file, its video attribute information determined, and both stored in the service storage space, and by repeating this periodically the goal of recording while broadcasting is achieved. When the anchor closes the live broadcast, all the segmented videos and the video attribute information can be extracted directly from the service storage space to create and publish the recorded broadcast service video. The recorded broadcast service video is therefore generated without a long wait: the video can be published in time, users who missed the live broadcast or want to look back can watch it in time, and the running stage of the live broadcast service is quickly connected to the publishing stage of the recorded broadcast service video, improving the participation experience of users.
The above is a schematic solution of a video processing apparatus of the present embodiment. It should be noted that, the technical solution of the video processing apparatus and the technical solution of the video processing method belong to the same concept, and details of the technical solution of the video processing apparatus, which are not described in detail, can be referred to the description of the technical solution of the video processing method.
Corresponding to the above method embodiment, the present application further provides a video processing system embodiment, and fig. 5 shows a schematic structural diagram of a video processing system according to an embodiment of the present application. As shown in fig. 5, the system 500 includes:
an anchor terminal 510, a recording end 520, a service end 530 and a content distribution network 540;
the anchor terminal 510 is configured to collect a push file generated by an anchor during a live broadcast process, and send the push file to the content distribution network 540;
the content distribution network 540 is configured to push the push file to the recording end 520 if it is determined that the anchor starts a recording function;
the recording end 520 is configured to create a segmented video based on the push file, and determine video attribute information of the segmented video; storing the fragmented video and the video attribute information into a service storage space; transmitting the fragmented video and the video attribute information in the service storage space to the service end 530 under the condition that the anchor turns off live broadcast;
The service end 530 is configured to create a recorded broadcast service video based on the fragmented video and the video attribute information and issue the recorded broadcast service video.
Specifically, the anchor terminal 510 refers to the terminal device held by the anchor, and the recording end 520 refers to a cluster that creates the segmented videos and interfaces with the service end 530 to support the creation of the recorded broadcast service video; correspondingly, the service end 530 refers to the end that creates and publishes the recorded broadcast service video.
Optionally, the content distribution network 540 is configured to determine whether a recording instruction submitted by the anchor for live broadcast is received; if yes, the push file is sent to the recording end.
Specifically, the content delivery network 540 refers to a CDN that interfaces with the anchor terminal 510 and serves as a node for tasks such as resource scheduling and load balancing. It continuously receives the push file pushed by the anchor terminal 510 over RTMP, so whether to record the video can be determined at the content delivery network; if recording is required, the push file is pushed directly to the recording end 520 for processing.
Optionally, the service end 530 is further configured to generate live broadcast attribute information according to the anchor information carried in the live broadcast start request, and store the live broadcast attribute information in the service storage space; correspondingly, the creating and publishing the recorded broadcast service video based on the fragmented video and the video attribute information in the service storage space comprises the following steps: and creating and publishing the recorded broadcast service video based on the segmented video, the video attribute information and the live broadcast attribute information in the service storage space.
Optionally, the service end 530 is further configured to parse the live broadcast start request to obtain the anchor information; generating a time stamp and a live broadcast identifier based on the anchor information, and reading a live broadcast address identifier corresponding to the anchor information; and integrating the timestamp, the live broadcast identifier and the live broadcast address identifier to generate the live broadcast attribute information.
Optionally, the recording end 520 is further configured to parse the push file to obtain a video stream corresponding to the anchor; and performing slicing processing on the video stream according to a preset slicing strategy, and generating the slicing video according to a slicing processing result.
Optionally, the recording end 520 is further configured to parse the segmented video to obtain start time information, video duration information, video space occupation information and video sequence index information of the segmented video; determining a storage space identifier according to the storage position of the live broadcast attribute information in the service storage space; the video attribute information of the segmented video is created based on the start time information, the video duration information, the video space occupation information, the video sequence index information, and the storage space identification.
Optionally, the service end 530 is further configured to extract the fragmented video, the video attribute information and the live broadcast attribute information corresponding to the anchor in the service storage space; creating the recorded broadcast service video according to the segmented video, the video attribute information and the live broadcast attribute information; and determining a release interface corresponding to the anchor based on the live broadcast attribute information, and calling the release interface to release the recorded broadcast service video.
Optionally, the service end 530 is further configured to create an initial recorded broadcast service video according to the fragmented video, the video attribute information and the live attribute information; splitting the initial recorded broadcast service video into at least two intermediate recorded broadcast service videos under the condition that the play time length of the initial recorded broadcast service video is longer than a preset time length threshold value; and taking the at least two intermediate recorded broadcast service videos as the recorded broadcast service videos.
Optionally, the service end 530 is further configured to obtain a barrage file and a virtual article file corresponding to the segmented video, and establish an association relationship between the barrage file and the virtual article file and the segmented video; and storing the barrage file and the virtual article file into the service storage space based on the association relation.
Optionally, the service end 530 is further configured to extract the barrage file and the virtual article file in the service storage space when receiving a viewing request submitted by the audience of the anchor for the recorded broadcast service video; generate a recorded broadcast video packet based on the recorded broadcast service video, the barrage file and the virtual article file; and transmit the recorded broadcast video packet to the audience.
The video processing system provided by this application generates the recorded broadcast service video without a long wait: the video can be published in time, users who missed the live broadcast or want to look back can watch it in time, and the running stage of the live broadcast service is quickly connected to the publishing stage of the recorded broadcast service video, improving the participation experience of users.
The above is a schematic solution of a video processing system of the present embodiment. It should be noted that, the technical solution of the video processing system and the technical solution of the video processing method belong to the same conception, and details of the technical solution of the video processing system, which are not described in detail, can be referred to the description of the technical solution of the video processing method.
The application of the video processing system provided in the present application in a live scene is taken as an example, and the video processing system is further described below with reference to fig. 6. Fig. 6 shows a flow chart of a video processing system applied to a live scene according to an embodiment of the present application, which specifically includes the following steps:
step S602, the live broadcast cluster receives a live broadcast start request submitted by a user through a user side.
Step S604, the live broadcast cluster generates live broadcast information according to the user on-air information carried in the live broadcast start request.
The live broadcast information includes the live broadcast room ID, a timestamp and a unique identifier of this live broadcast; the live broadcast cluster also delivers the live broadcast information to the DataBus to facilitate subsequent recording and use.
In step S606, the recording cluster pulls the live broadcast information from the live broadcast cluster at fixed intervals based on a polling mechanism, and stores the live broadcast information in the storage unit Redis.
Specifically, to ensure that the subsequent live broadcast is recorded effectively, the recording cluster pulls the live broadcast information regularly, so that during subsequent recording the segmented videos belonging to the same user can be stored in the same storage location, which is convenient for use and management.
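A sketch of the polling in step S606, assuming a fixed period, the redis-py client and a hypothetical HTTP query interface on the live broadcast cluster; the endpoint and key names are illustrative.

import json
import time

import redis
import requests  # assumed HTTP access to the live broadcast cluster

r = redis.Redis(host="localhost", port=6379, db=0)
POLL_PERIOD_SECONDS = 30  # assumed fixed polling period


def poll_live_info(live_cluster_url: str) -> None:
    """Pull the live broadcast information from the live broadcast cluster at a fixed
    period and cache it in the storage unit Redis, keyed by live broadcast room ID."""
    while True:
        resp = requests.get(f"{live_cluster_url}/live-info")  # hypothetical endpoint
        for info in resp.json():
            r.set(f"live-info:{info['room_id']}", json.dumps(info))
        time.sleep(POLL_PERIOD_SECONDS)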
In step S608, the user starts live broadcasting through the user terminal, and the user terminal collects the push file and sends it to the live broadcast CDN via RTMP.
In step S610, the live CDN sends the push file to the recording cluster if it is determined that the user starts the recording function.
In step S612, the recording cluster receives the push file, creates the segmented video based on the push file, and determines the video attribute information corresponding to the segmented video.
Specifically, the video attribute information includes a storage space identifier corresponding to a storage position of the segmented video in the storage unit Redis, a start time of the segmented video, a playback time length, a size of a recording file, and a sequential index of the segmented video.
In step S614, the recording cluster stores the segmented video and the video attribute information corresponding to the segmented video in the storage unit Redis.
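The attribute information listed in step S612 could be assembled and written as in the sketch below (steps S612 and S614); the field names and the Redis key layout are illustrative assumptions.

import json
import os

import redis

r = redis.Redis(host="localhost", port=6379, db=0)


def store_segment_attributes(room_id: str, seq_index: int, segment_path: str,
                             start_time: int, duration: int) -> None:
    """Build the video attribute information of one segmented video and write it,
    together with the segment, into the storage unit Redis."""
    attributes = {
        "storage_space_id": f"live-info:{room_id}",  # where the live broadcast information is stored
        "start_time": start_time,                    # start time of this segmented video
        "duration": duration,                        # playback time length in seconds
        "file_size": os.path.getsize(segment_path),  # size of the recording file
        "seq_index": seq_index,                      # sequential index among all segments
    }
    r.hset(f"segment:{room_id}:{seq_index}", mapping={
        "file": segment_path,
        "attributes": json.dumps(attributes),
    })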
In step S616, when determining that the user has turned off the live broadcast, the recording cluster reads the segmented videos, their corresponding video attribute information and the live broadcast information from the storage unit Redis and sends them to the live broadcast cluster.
Step S618, the live broadcast cluster creates recorded broadcast video based on the segmented video, the corresponding video attribute information and the live broadcast information.
In step S620, the live broadcast cluster reads the barrages, virtual gifts and live video involved in the user's live broadcast, and publishes the recorded broadcast video through the playback interface corresponding to the user.
Specifically, when the recorded broadcast video is published, considering that viewing users may want to see how other users engaged, the barrages and virtual gifts involved in the user's live broadcast can be read and published together with the recorded broadcast video, so that other users watching the recording can also see the barrage content and the virtual gift giving that took place during the live broadcast.
In summary, the recorded broadcast service video can be generated without a long wait: the video can be published in time, users who missed the live broadcast or want to look back can watch it in time, and the running stage of the live broadcast service is quickly connected to the publishing stage of the recorded broadcast service video, improving the participation experience of users.
Fig. 7 illustrates a block diagram of a computing device 700 provided in accordance with an embodiment of the present application. The components of computing device 700 include, but are not limited to, memory 710 and processor 720. Processor 720 is coupled to memory 710 via bus 730, and database 750 is used to store data.
Computing device 700 also includes an access device 740 that enables computing device 700 to communicate via one or more networks 760. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the Internet. The access device 740 may include one or more of any type of network interface, wired or wireless (e.g., a Network Interface Card (NIC)), such as an IEEE 802.11 Wireless Local Area Network (WLAN) interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the present application, the above-described components of computing device 700, as well as other components not shown in FIG. 7, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device illustrated in FIG. 7 is for exemplary purposes only and is not intended to limit the scope of the present application. Those skilled in the art may add or replace other components as desired.
Computing device 700 may be any type of stationary or mobile computing device including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smart phone), wearable computing device (e.g., smart watch, smart glasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 700 may also be a mobile or stationary server.
Wherein the processor 720 implements the steps of the video processing method when executing the instructions.
The foregoing is a schematic illustration of a computing device of this embodiment. It should be noted that, the technical solution of the computing device and the technical solution of the video processing method belong to the same concept, and details of the technical solution of the computing device, which are not described in detail, can be referred to the description of the technical solution of the video processing method.
An embodiment of the present application also provides a computer-readable storage medium storing computer instructions that, when executed by a processor, implement the steps of the video processing method as described above.
The above is an exemplary version of a computer-readable storage medium of the present embodiment. It should be noted that, the technical solution of the storage medium and the technical solution of the video processing method belong to the same concept, and details of the technical solution of the storage medium which are not described in detail can be referred to the description of the technical solution of the video processing method.
The foregoing describes specific embodiments of the present application. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The computer instructions include computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content included in the computer readable medium can be adjusted according to the requirements of legislation and patent practice in each jurisdiction; for example, in some jurisdictions the computer readable medium does not include electrical carrier signals and telecommunication signals.
It should be noted that, for the sake of simplicity of description, the foregoing method embodiments are all expressed as a series of combinations of actions, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily all necessary for the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The above-disclosed preferred embodiments of the present application are provided only as an aid to the elucidation of the present application. Alternative embodiments are not intended to be exhaustive or to limit the invention to the precise form disclosed. Obviously, many modifications and variations are possible in light of the teaching of this application. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best understand and utilize the invention. This application is to be limited only by the claims and the full scope and equivalents thereof.

Claims (14)

1. A video processing method, comprising:
responding to a recording starting instruction carried in a live broadcast starting request submitted by an anchor for a live broadcast service, and acquiring a push file generated by the anchor in a live broadcast process and pushed by a distribution server;
creating a segmented video based on the push file, and determining a barrage file and a virtual article file of the segmented video;
analyzing the segmented video to obtain start time information, video duration information, video space occupation information and video sequence index information of the segmented video, wherein the video sequence index information is information corresponding to the front-back sequence of each segmented video;
generating live broadcast attribute information according to the anchor information carried in the live broadcast starting request;
creating video attribute information of the segmented video based on the live broadcast attribute information, the start time information, the video duration information, the video space occupation information and the video sequence index information;
storing the segmented video, the video attribute information, the barrage file and the virtual article file into a service storage space, wherein the barrage file and the virtual article file have an association relationship with the segmented video;
and under the condition that the anchor shuts down live broadcast, creating a recorded broadcast service video based on the segmented video and the video attribute information in the service storage space and publishing the recorded broadcast service video, wherein the barrage file and the virtual article file are used for displaying the barrage and the virtual article to a user when the recorded broadcast service video is played.
2. The video processing method according to claim 1, wherein before the step of creating and publishing a recorded broadcast service video based on the segmented video and the video attribute information in the service storage space is performed, the method further comprises:
storing the live broadcast attribute information into the service storage space;
correspondingly, the creating and publishing the recorded broadcast service video based on the segmented video and the video attribute information in the service storage space comprises the following steps:
and creating and publishing the recorded broadcast service video based on the segmented video, the video attribute information and the live broadcast attribute information in the service storage space.
3. The video processing method according to claim 2, wherein the generating live broadcast attribute information according to the anchor information carried in the live broadcast starting request includes:
analyzing the live broadcast starting request to obtain the anchor information;
generating a timestamp and an anchor identifier based on the anchor information, and reading a live broadcast address identifier corresponding to the anchor information;
and integrating the timestamp, the anchor identifier and the live broadcast address identifier to generate the live broadcast attribute information.
4. The video processing method according to claim 1, wherein the creating a segmented video based on the push file comprises:
analyzing the push file to obtain a video stream corresponding to the anchor;
and performing slicing processing on the video stream according to a preset slicing strategy, and generating the segmented video according to a slicing processing result.
5. The video processing method according to claim 1, wherein the creating video attribute information of the segmented video based on the live broadcast attribute information, the start time information, the video duration information, the video space occupation information and the video sequence index information comprises:
determining a storage space identifier according to the storage position of the live broadcast attribute information in the service storage space;
the video attribute information of the segmented video is created based on the start time information, the video duration information, the video space occupation information, the video sequence index information and the storage space identifier.
6. The video processing method according to claim 2, wherein the creating and publishing the recorded broadcast service video based on the segmented video, the video attribute information and the live broadcast attribute information in the service storage space includes:
extracting the segmented video, the video attribute information and the live broadcast attribute information corresponding to the anchor from the service storage space;
creating the recorded broadcast service video according to the segmented video, the video attribute information and the live broadcast attribute information;
and determining a release interface corresponding to the anchor based on the live broadcast attribute information, and calling the release interface to release the recorded broadcast service video.
7. The video processing method according to claim 6, wherein the creating the recorded broadcast service video according to the segmented video, the video attribute information and the live broadcast attribute information includes:
creating an initial recorded broadcast service video according to the segmented video, the video attribute information and the live broadcast attribute information;
splitting the initial recorded broadcast service video into at least two intermediate recorded broadcast service videos under the condition that the play time length of the initial recorded broadcast service video is longer than a preset time length threshold value;
and taking the at least two intermediate recorded broadcast service videos as the recorded broadcast service videos.
8. The method according to any one of claims 1 to 7, wherein before the step of creating and publishing a recorded broadcast service video based on the segmented video and the video attribute information in the service storage space is performed, the method further comprises:
acquiring a barrage file and a virtual article file corresponding to the segmented video, and establishing an association relationship between the barrage file and the virtual article file and the segmented video;
and storing the barrage file and the virtual article file into the service storage space based on the association relation.
9. The video processing method according to claim 8, wherein after the step of creating and publishing a recorded broadcast service video based on the segmented video and the video attribute information in the service storage space is performed, the method further comprises:
under the condition that a viewing request is submitted by the audience of the anchor for the recorded broadcast service video, extracting the barrage file and the virtual article file from the service storage space;
and generating a recorded broadcast video packet based on the recorded broadcast service video, the barrage file and the virtual article file, and transmitting the recorded broadcast video packet to the audience.
10. A video processing apparatus, comprising:
the acquisition module is configured to respond to a recording start instruction carried in a live broadcast start request submitted by an anchor for a live broadcast service, and acquire a push file generated by the anchor in a live broadcast process and pushed by a distribution server;
the information storage module is configured to generate live broadcast attribute information according to the anchor information carried in the live broadcast starting request;
the determining module is configured to create a segmented video based on the push file and determine a barrage file and a virtual article file of the segmented video; analyze the segmented video to obtain start time information, video duration information, video space occupation information and video sequence index information of the segmented video; and create video attribute information of the segmented video based on the live broadcast attribute information, the start time information, the video duration information, the video space occupation information and the video sequence index information, wherein the video sequence index information is information corresponding to the front-back sequence of each segmented video;
the storage module is configured to store the segmented video, the video attribute information, the barrage file and the virtual article file into a service storage space, wherein the barrage file and the virtual article file have an association relationship with the segmented video;
the creation module is configured to create a recorded broadcast service video based on the segmented video and the video attribute information in the service storage space and publish the recorded broadcast service video under the condition that the anchor shuts down the live broadcast, wherein the barrage file and the virtual article file are used for displaying the barrage and the virtual article to a user when the recorded broadcast service video is played.
11. A video processing system, comprising:
the system comprises an anchor terminal, a recording end, a service end and a content distribution network;
the anchor terminal is configured to collect a push file generated by an anchor in a live broadcast process and send the push file to the content distribution network;
the content distribution network is configured to push the push file to the recording end under the condition that the record function is determined to be started by the anchor;
the service end is configured to generate live broadcast attribute information according to the anchor information carried in the live broadcast starting request;
the recording end is configured to receive a live broadcast starting request submitted by an anchor for a live broadcast service, create a segmented video based on the push file, and determine a barrage file and a virtual article file of the segmented video; analyzing the segmented video to obtain start time information, video duration information, video space occupation information and video sequence index information of the segmented video; creating video attribute information of the segmented video based on the live broadcast attribute information, the start time information, the video duration information, the video space occupation information and the video sequence index information; storing the segmented video, the video attribute information, the barrage file and the virtual article file into a service storage space; under the condition that the anchor shuts down live broadcast, the segmented video and the video attribute information in the service storage space are sent to the service end, wherein the barrage file and the virtual article file are used for displaying a barrage and a virtual article to a user when the recorded broadcast service video is played, the video sequence index information is information corresponding to the front-back sequence among the segmented videos, and the barrage file and the virtual article file have an association relationship with the segmented video;
The service end is further configured to create a recorded broadcast service video based on the segmented video and the video attribute information and issue the recorded broadcast service video.
12. The video processing system of claim 11, wherein the content distribution network is configured to determine whether a recording instruction submitted by the anchor for live broadcast was received; if yes, the push file is sent to the recording end.
13. A computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor, when executing the computer instructions, performs the steps of the method of any one of claims 1-9.
14. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the method of any one of claims 1-9.