
WO2018120819A1 - Method and device for producing presentations - Google Patents


Info

Publication number
WO2018120819A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio data
presentation
target time
time interval
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2017/094598
Other languages
English (en)
Chinese (zh)
Inventor
吴亮
黄薇
高峰
钟恒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd filed Critical Beijing Qihoo Technology Co Ltd
Publication of WO2018120819A1 publication Critical patent/WO2018120819A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/12Use of codes for handling textual entities
    • G06F40/14Tree-structured documents
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/12Use of codes for handling textual entities
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/166Editing, e.g. inserting or deleting

Definitions

  • The present application relates to the field of web technologies, and in particular, to a method for creating a presentation and a device for creating a presentation.
  • In order to realize distance learning, a user usually records the operation of the presentation while speaking, keeping the user's speech synchronized with the presentation.
  • the video data obtained by recording the operation of the presentation is bulky and takes up a lot of storage space.
  • the video data is often compressed to reduce the resolution of the video data, resulting in blurry content of the presentation.
  • The present application has been made in order to provide a method of creating a presentation, and a corresponding apparatus for creating a presentation, that overcome the above problems or at least partially solve or alleviate them.
  • a method of making a presentation including:
  • the audio data is re-added to the target time interval.
  • a production apparatus for a presentation including:
  • a web page loading module adapted to load a web page generated for the presentation
  • a presentation element configuration module adapted to configure a presentation element in the web page
  • An audio data adding module adapted to add audio data to the presentation element on a time axis to synchronously play the audio data when the presentation element is played according to the time axis;
  • a target time interval selection module adapted to select a target time interval on the time axis
  • An audio data adding module adapted to re-add audio data to the target time interval.
  • a computer program comprising computer readable code which, when run on a terminal device, causes the terminal device to perform any of the aforementioned methods of creating a presentation.
  • a computer readable medium storing a computer program of a method of creating a presentation as described above.
  • The embodiment of the present application loads, in a client, a web page generated for a presentation, configures a presentation element in the web page, and further adds audio data to the presentation element on the timeline, so that the audio data can be played synchronously when the presentation element is played according to the timeline.
  • The web page is used as a carrier to create the presentation, and the audio data keeps the presentation elements and the audio synchronized, allowing the user to view the contents of the presentation while listening to the accompanying speech.
  • On one hand, using web elements as presentation elements, compared to video data, can greatly reduce the volume and the occupation of storage space; moreover, because web elements are drawn and loaded directly on the web page without compression processing, the sharpness of the web elements can be guaranteed.
  • On the other hand, by re-adding audio data to the target time interval, the user is spared from manually deleting the uncovered audio data, and the limitation on the length of the re-added audio data is removed, thereby improving the efficiency of production.
  • FIG. 1 is a flow chart showing the steps of an embodiment of a method for creating a presentation according to an embodiment of the present application
  • FIGS. 2A-2C illustrate example diagrams of configuring a presentation element in accordance with one embodiment of the present application;
  • FIGS. 3A-3D illustrate example diagrams of editing the playback order of presentation elements and audio data in accordance with one embodiment of the present application;
  • FIGS. 4A-4D illustrate example diagrams of playing presentation elements and audio data in accordance with one embodiment of the present application
  • FIGS. 5A-5B illustrate example diagrams of recording audio data in accordance with one embodiment of the present application
  • FIGS. 6A-6C illustrate example diagrams of selective re-recording in accordance with one embodiment of the present application;
  • FIG. 7 is a structural block diagram of a device for fabricating a presentation according to an embodiment of the present application.
  • Figure 8 schematically shows a block diagram of a terminal device for performing the method according to the present application
  • Fig. 9 schematically shows a storage unit for holding or carrying program code implementing the method according to the present application.
  • FIG. 1 is a flow chart showing the steps of an embodiment of a method for creating a presentation according to an embodiment of the present application. Specifically, the method may include the following steps:
  • Step 101: Load a web page generated for the presentation.
  • In specific implementation, the user can log in to the server by using a user account on a client such as a browser, and send a request for generating a presentation to the server.
  • After receiving the request, the server can configure a new presentation and assign it a unique presentation identifier, such as slide_id (slide ID), which is used to generate a unique URL (Uniform Resource Locator) for editing the presentation; the URL for editing is returned to the client.
  • The client accesses the URL for editing to load a web page, which is the carrier of the presentation, i.e., the content of the presentation can be edited in the web page.
  • In addition, the information of the presentation can be displayed in an area such as the user center.
  • When editing again, the client can directly load the web page by using the URL for editing; this is not limited by the embodiment of the present application.
  • The presentation ID is also used to generate a unique URL for playing the presentation, and this URL is returned to the client.
  • The client can access the URL for playing to load the web page, which is the carrier of the presentation, i.e., the presentation can be played in the web page.
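The URL generation described above can be sketched as follows. This is a minimal illustration only: the host name, path scheme, and function name are hypothetical, since the embodiment merely specifies that a unique slide_id yields one unique URL for editing and one for playing.

```javascript
// Hypothetical server-side helper: derive the editing and playing URLs
// for a presentation from its unique slide_id.
function buildPresentationUrls(slideId) {
  const base = 'https://example.com'; // hypothetical host
  return {
    editUrl: `${base}/slide/${encodeURIComponent(slideId)}/edit`,
    playUrl: `${base}/slide/${encodeURIComponent(slideId)}/play`,
  };
}
```

Because both URLs embed the same slide_id, the server can resolve either one back to the stored presentation and its element parameters.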
  • Step 102: Configure a presentation element in the web page.
  • the presentation elements can include one or more of the following:
  • text boxes, images, specified shapes, lines, tables, frames, and code.
  • In specific implementation, the user can switch a presentation element into an editing state by clicking on it or the like.
  • When a presentation element is in the editing state, an editing operation bar for that element pops up in the web page, displaying the element parameters of the presentation element for the user to adjust.
  • For example, for a text box in the editing state, the edit operation bar of the text box can pop up in the web page, and the user can set element parameters such as font alignment, playback speed, font color, line spacing, and character spacing.
  • For a table in the editing state, the edit operation bar of the table may pop up in the web page, and the user may set element parameters such as the number of rows, the number of columns, cell margins, border width, and border color.
  • After editing, the user can save the presentation manually, or the script of the web page executed by the client can save it automatically.
  • During saving, the parameters configured for the presentation elements of the web page can be synchronized with the server, and the server stores the parameters under the presentation (represented by the presentation ID) for subsequent loading.
  • When the client later loads the web page by using the URL for editing, the corresponding presentation elements are loaded according to the previously set element parameters, so that the user can continue editing; this is not limited by the embodiment of the present application.
  • Step 103: Add audio data to the presentation element on a time axis, to synchronously play the audio data when the presentation element is played according to the time axis.
  • In order to control the playing of the presentation, the client can configure a timeline and set the playing time of each presentation element on the timeline.
  • The user can record audio data, such as a speech, and the client adds the audio data to the presentation elements, so that the presentation elements can be played while the audio data is played on the time axis, keeping the two synchronized.
  • For example, the user can set the playing time of each presentation element so that, as the audio data is played and time passes, the presentation elements are switched in order, that is, the text "Quiet Night Thoughts", "Li Bai", "Before the bed, the bright moonlight" is displayed in turn.
  • During playback, the timing control is displayed in the lower left corner and, as time passes, the audio data is played and the presentation elements are switched in order, that is, the text "Quiet Night Thoughts", "Li Bai", "Before the bed, the bright moonlight" is displayed.
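The timeline-driven switching described above can be sketched as a pure function: each element carries a start time on the timeline, and at any playback position the elements whose start time has passed are shown, which keeps the displayed text in step with the audio. The element list and field names below are illustrative, not part of the embodiment.

```javascript
// Given elements with start times (seconds) on the timeline, return the
// text of every element that should be visible at the current playback time.
function visibleElements(elements, currentTime) {
  return elements
    .filter((el) => el.startTime <= currentTime)
    .map((el) => el.text);
}

// Illustrative timeline for the poem example above.
const poem = [
  { text: 'Quiet Night Thoughts', startTime: 0 },
  { text: 'Li Bai', startTime: 2 },
  { text: 'Before the bed, the bright moonlight', startTime: 4 },
];
```

Calling `visibleElements(poem, 3)` while the audio is at second 3 would show the first two lines only, matching the switch-in-order behavior.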
  • step 103 may include the following sub-steps:
  • Sub-step S11: Call the recorder to record audio data for the presentation element.
  • the microphone can be called to collect the original audio data, and the recorder is called to record the audio data.
  • In the web page, a recording control can be loaded; after the user clicks the recording control, recording starts, and a visualized element of the audio data is displayed on the visualized axis element of the time axis.
  • the sub-step S11 may include the following sub-steps:
  • Sub-step S111: Acquire original audio stream data collected by the microphone;
  • Sub-step S112: Transmit the original audio stream data to the recorder;
  • Sub-step S113: Visualize the original audio stream data in the recorder according to the recording parameters, and convert the original audio stream data into audio data of a specified format.
  • the client can obtain the original audio stream data collected by the microphone through the getUserMedia interface provided by WebRTC (Web Real-Time Communication).
  • Then, a script processing node is created by the createScriptProcessor method of the Web Audio API, which is used to process the raw audio stream data using JavaScript.
  • the audio source node is connected to the processing node, and the processing node is connected to the audio output node to form a complete processing flow.
  • The processing node can listen for the AudioProcessingEvent through its onaudioprocess handler; at regular intervals, the event supplies a certain length of data from the original audio stream for processing.
  • In processing, the original audio stream data is visualized by a drawAudioWave method (the visualized elements are generated based on attributes of the original audio stream data such as frequency and waveform), and the audio data is transmitted to a Web Worker for audio processing.
  • When recording stops, the audio processing is paused, and a file in a format such as WAV is requested from the Web Worker; the Web Worker converts the stored original audio stream data into audio data in a format such as WAV and returns it.
  • Since the computing power of the client (such as a browser) is mostly limited, and the temporary storage and processing of the original audio stream data generally require a large amount of computation, a separate thread is opened by introducing a Web Worker to perform the temporary storage and processing of the original audio stream data, ensuring that other processing of the client (such as a browser) can proceed normally.
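The conversion performed in the Web Worker can be sketched as follows: the Float32 sample chunks accumulated from onaudioprocess are clamped, converted to 16-bit PCM, and wrapped in a standard 44-byte WAV header. This is a minimal mono sketch of the standard RIFF/WAVE layout; the worker messaging and the embodiment's actual recorder internals are omitted.

```javascript
// Convert accumulated Float32 PCM chunks into a mono 16-bit WAV buffer.
function encodeWav(chunks, sampleRate) {
  const length = chunks.reduce((n, c) => n + c.length, 0);
  const buffer = new ArrayBuffer(44 + length * 2);
  const view = new DataView(buffer);
  const writeStr = (off, s) => {
    for (let i = 0; i < s.length; i++) view.setUint8(off + i, s.charCodeAt(i));
  };
  writeStr(0, 'RIFF');
  view.setUint32(4, 36 + length * 2, true); // remaining file size
  writeStr(8, 'WAVE');
  writeStr(12, 'fmt ');
  view.setUint32(16, 16, true);             // fmt chunk size
  view.setUint16(20, 1, true);              // PCM format
  view.setUint16(22, 1, true);              // mono
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * 2, true); // byte rate
  view.setUint16(32, 2, true);              // block align
  view.setUint16(34, 16, true);             // bits per sample
  writeStr(36, 'data');
  view.setUint32(40, length * 2, true);     // data size in bytes
  let off = 44;
  for (const chunk of chunks) {
    for (const sample of chunk) {
      const s = Math.max(-1, Math.min(1, sample)); // clamp to [-1, 1]
      view.setInt16(off, s < 0 ? s * 0x8000 : s * 0x7fff, true);
      off += 2;
    }
  }
  return buffer;
}
```

Running this off the main thread (as the Web Worker does here) keeps the per-sample loop from stalling the page while recording ends.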
  • step 103 may include the following sub-steps:
  • Sub-step S21: Input text information for the presentation element;
  • Sub-step S22: Convert the text information into audio data.
  • If the terminal where the client is located is not configured with a microphone, the user may be allowed to input text information for the presentation elements, and the text information can be converted to audio data by speech synthesis (Text To Speech, TTS).
  • Speech synthesis is also known as Text To Speech (TTS) technology.
  • Prosodic characteristics of the segments, such as pitch, duration, and intensity, are processed so that the synthesized speech can correctly express the semantics and sound more natural.
  • Then, the phonetic primitives of the single words or phrases corresponding to the processed text are extracted from the speech synthesis library, the prosodic characteristics of the phonetic primitives are adjusted and modified by using a specific speech synthesis technique, and finally the required voice data is synthesized.
  • It should be noted that the above manners of adding audio data are only examples.
  • When implementing the embodiment of the present application, other manners of adding audio data may be set according to actual conditions, for example, directly importing existing audio data; this is not limited here.
  • In addition, those skilled in the art may also adopt other manners of adding audio data according to actual needs, and the embodiment of the present application does not limit this.
  • the audio data on the time axis can be uploaded to the server.
  • In specific implementation, the audio data can be retrieved from the Web Worker and compressed by the amrnb.js library into a specified format such as AMR, and then uploaded to the server; the server stores it under the presentation (represented by the presentation ID) for subsequent loading.
  • Step 104: Select a target time interval on the time axis.
  • If the user is not satisfied with part of the recorded audio, the unsatisfactory area may be selected on the time axis for re-recording; this area is referred to as the target time interval.
  • The timeline has a visualized axis element on the web page, with a time scale on the visualized axis element, such as 00:00, 00:05, 00:10, and so on.
  • To select the target time interval, a scrolling marker strip is inserted over the visualized axis element, as shown in FIG. 6A, with a solid dot resembling a pin.
  • The interval between the start position and the end position of the scrolling marker strip is taken as the target time interval, as shown in the rectangular area on the visualized axis element in FIG. 6A.
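Mapping the marker strip to a time interval can be sketched as a small conversion from pixel positions to seconds, given the pixels-per-second scale of the visualized axis element. The function and parameter names are illustrative assumptions, not part of the embodiment.

```javascript
// Convert the scrolling marker strip's start/end pixel positions on the
// visualized axis element into a target time interval in seconds.
function targetInterval(startPx, endPx, pixelsPerSecond) {
  const a = startPx / pixelsPerSecond;
  const b = endPx / pixelsPerSecond;
  // Normalize so the interval is well-formed even if the user dragged
  // the end marker before the start marker.
  return { start: Math.min(a, b), end: Math.max(a, b) };
}
```

For an axis drawn at 20 px per second, markers at 100 px and 400 px select the interval from 5 s to 20 s.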
  • Step 105: Re-add audio data to the target time interval.
  • In the embodiment of the present application, when re-recording is selected, the audio data in the uncovered area is automatically deleted to ensure the continuity of the audio data.
  • In this way, the client can replace the target time interval with audio data of any length; that is, the duration of the added audio data can be longer than the length of the target time interval.
  • For example, if the selected target time interval is 10 seconds and the newly recorded audio data is 2 seconds, the client automatically deletes the remaining 8 seconds of the original audio data.
  • step 105 may include the following sub-steps:
  • Sub-step S31: Delete the original audio data located in the target time interval;
  • Sub-step S32: Move the original audio data located after the target time interval to the start time of the target time interval;
  • Sub-step S33: Insert new audio data from the start time of the target time interval, and move the original audio data located after the target time interval to the end time of the new audio data.
  • The original audio data is the audio data present before re-adding; the new audio data is the re-added audio data.
  • the timeline has visual axis elements on the web page, and the audio data has visual audio elements on the visual axis elements.
  • To ensure the continuity of the audio data, the client can delete the original audio data located in the target time interval, and move the original audio data located after the target time interval to the start time of the target time interval.
  • Correspondingly, the visualized audio elements of the original audio data located in the target time interval can be deleted on the visualized axis element, and the visualized audio element of the original audio data located after the target time interval is moved to the start time of the target time interval.
  • new audio data can be inserted from the start time of the target time interval, and the original audio data located after the target time interval is moved to the end time of the new audio data, ensuring continuity of the audio data.
  • Correspondingly, the visualized audio element of the new audio data can be inserted from the start time of the target time interval, and the visualized audio element of the original audio data located after the target time interval is moved to after the visualized audio element of the new audio data.
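Sub-steps S31–S33 can be sketched as an operation on an ordered list of audio segments, each with a start time and duration in seconds. Segments inside the target interval are deleted, the new audio is inserted at the interval's start, and later segments are shifted so they follow the end of the new audio. The segment representation and function name are illustrative assumptions; for simplicity, the sketch assumes segment boundaries align with the interval edges.

```javascript
// Re-add audio to the target interval per sub-steps S31-S33.
function reAddAudio(segments, interval, newSegment) {
  // S31: delete original audio data located in the target interval.
  const kept = segments.filter(
    (s) => s.start + s.duration <= interval.start || s.start >= interval.end
  );
  const before = kept.filter((s) => s.start < interval.start);
  // S32 + S33 combined: later audio ends up right after the new audio,
  // i.e. shifted by (interval.start + newSegment.duration) - interval.end.
  const shift = interval.start + newSegment.duration - interval.end;
  const after = kept
    .filter((s) => s.start >= interval.end)
    .map((s) => ({ ...s, start: s.start + shift }));
  // S33: insert the new audio at the interval's start time.
  const inserted = { ...newSegment, start: interval.start };
  return [...before, inserted, ...after];
}
```

Note that nothing constrains `newSegment.duration` to the interval length: a 2-second re-recording over a 10-second interval simply pulls the later audio forward, which is the behavior the embodiment contrasts with prior recording editors below.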
  • Some recording editing applications may support selective re-recording, but they generally delete only the audio data in the covered area; the audio data in the uncovered area is retained and must be deleted manually by the user.
  • Moreover, the length of the re-recorded audio data generally cannot exceed the covered area.
  • For example, if the selected target time interval is 10 seconds but the newly recorded audio data is 2 seconds, the remaining 8 seconds of audio data are not overwritten and still exist; the user needs to manually select and delete the remaining 8 seconds of audio data, and the length of the recording cannot exceed 10 seconds.
  • The embodiment of the present application loads, in a client, a web page generated for a presentation, configures a presentation element in the web page, and further adds audio data to the presentation element on the timeline, so that the audio data can be played synchronously when the presentation element is played according to the timeline.
  • The web page is used as a carrier to create the presentation, and the audio data keeps the presentation elements and the audio synchronized, allowing the user to view the contents of the presentation while listening to the accompanying speech.
  • On one hand, using web elements as presentation elements, compared to video data, can greatly reduce the volume and the occupation of storage space; moreover, because web elements are drawn and loaded directly on the web page without compression processing, the sharpness of the web elements can be guaranteed.
  • On the other hand, by re-adding audio data to the target time interval, the user is spared from manually deleting the uncovered audio data, and the limitation on the length of the re-added audio data is removed, thereby improving the efficiency of production.
  • FIG. 7 a structural block diagram of a device for creating a presentation according to an embodiment of the present application is shown, which may specifically include the following modules:
  • a web page loading module 701, configured to load a web page generated for the presentation
  • a presentation element configuration module 702 adapted to configure a presentation element in the web page
  • An audio data adding module 703, configured to add audio data to the presentation element on a time axis to synchronously play the audio data when the presentation element is played according to the time axis;
  • a target time interval selection module 704 adapted to select a target time interval on the time axis;
  • the audio data adding module 705 is adapted to re-add audio data to the target time interval.
  • In an optional embodiment, the audio data adding module 703 includes:
  • a recording sub-module adapted to call the recorder to record audio data to the presentation element.
  • the recording submodule includes:
  • the original audio stream data acquiring unit is adapted to acquire original audio stream data collected in the microphone
  • a recorder incoming unit adapted to transmit the raw audio stream data to the recorder
  • a recorder processing unit adapted to visualize the original audio stream data in the recorder according to recording parameters, and convert the original audio stream data into audio data of a specified format.
  • In another optional embodiment, the audio data adding module 703 includes:
  • a text information input submodule adapted to input text information to the presentation element
  • a text information conversion sub-module adapted to convert the text information into audio data.
  • the timeline has a visual axis element on the web page
  • the target time interval selection module 704 includes:
  • a scrolling marker strip insertion sub-module adapted to insert a scrolling marker strip on the visualized axis element
  • the interval selection sub-module is adapted to use a section between the start position and the end position of the scroll marker as the target time interval.
  • the audio data adding module 705 includes:
  • the original audio data deletion submodule is adapted to delete original audio data located in the target time interval
  • An original audio data moving submodule adapted to move original audio data located after the target time interval to a start time of the target time interval;
  • a new audio data insertion sub-module adapted to insert new audio data from a start time of the target time interval and to move original audio data located after the target time interval to an end time of the new audio data.
  • the timeline has a visual axis element on the web page, the audio data having a visualized audio element on the visualized axis element;
  • the audio data adding module 705 further includes:
  • An audio element deletion submodule adapted to delete a visualized audio element of the original audio data located in the target time interval on the visualized axis element
  • An audio element moving submodule adapted to move a visualized audio element of the original audio data located after the target time interval to a start time of the target time interval;
  • An audio element insertion sub-module adapted to insert a visualized audio element of the new audio data from a start time of the target time interval, and to move the visualized audio element of the original audio data located after the target time interval to after the visualized audio element of the new audio data.
  • the method further includes:
  • An audio uploading module adapted to upload audio data on the timeline to a server.
  • the description is relatively simple, and the relevant parts can be referred to the description of the method embodiment.
  • the various component embodiments of the present application can be implemented in hardware, or in a software module running on one or more processors, or in a combination thereof.
  • a microprocessor or digital signal processor may be used in practice to implement some or all of the functionality of some or all of the components of the presentation device in accordance with embodiments of the present application.
  • the application can also be implemented as a device or device program (e.g., a computer program and a computer program product) for performing some or all of the methods described herein.
  • Such a program implementing the present application may be stored on a computer readable medium or may be in the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
  • FIG. 8 illustrates a terminal device that can implement the production of a presentation according to the present application.
  • the terminal device conventionally includes a processor 810 and a computer program product or computer readable medium in the form of a memory 820.
  • the memory 820 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read Only Memory), an EPROM, a hard disk, or a ROM.
  • Memory 820 has a memory space 830 for program code 831 for performing any of the method steps described above.
  • storage space 830 for program code may include various program code 831 for implementing various steps in the above methods, respectively.
  • the program code can be read from or written to one or more computer program products.
  • Such computer program products include program code carriers such as hard disks, compact disks (CDs), memory cards or floppy disks.
  • Such a computer program product is typically a portable or fixed storage unit as described with reference to FIG. 9.
  • The storage unit may have a storage section, a storage space, and the like arranged similarly to the memory 820 in the terminal device of FIG. 8.
  • the program code can be compressed, for example, in an appropriate form.
  • The storage unit includes computer readable code 831', i.e., code readable by a processor such as 810, which, when executed by the terminal device, causes the terminal device to perform each step of the methods described above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

According to embodiments, the present invention relates to a method and a device for producing a presentation. The method comprises: loading a web page generated for a presentation; configuring a presentation element in the web page; adding audio data to the presentation element on a timeline, so that the audio data is played synchronously when the presentation element is played according to the timeline; selecting a target time interval on the timeline; and re-adding audio data for the target time interval. The embodiments of the present invention obtain a web element as a presentation element and, compared with video data, allow a considerably reduced volume and a reduced occupation of storage space; moreover, since the web element is rendered and loaded directly on the web page, the need for compression processing is avoided, and the clarity of the web element can be ensured.
PCT/CN2017/094598 2016-12-26 2017-07-27 Procédé et dispositif pour produire des présentations Ceased WO2018120819A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611220468.7 2016-12-26
CN201611220468.7A CN108241598A (zh) 2016-12-26 2016-12-26 一种演示文稿的制作方法和装置

Publications (1)

Publication Number Publication Date
WO2018120819A1 true WO2018120819A1 (fr) 2018-07-05

Family

ID=62701870

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/094598 Ceased WO2018120819A1 (fr) 2016-12-26 2017-07-27 Procédé et dispositif pour produire des présentations

Country Status (2)

Country Link
CN (1) CN108241598A (fr)
WO (1) WO2018120819A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112533054A (zh) * 2019-09-19 2021-03-19 腾讯科技(深圳)有限公司 在线视频的播放方法、装置及存储介质
CN114398883A (zh) * 2022-01-19 2022-04-26 平安科技(深圳)有限公司 演示文稿生成方法、装置、计算机可读存储介质及服务器
CN114501106A (zh) * 2020-08-04 2022-05-13 腾讯科技(深圳)有限公司 一种文稿显示控制方法、装置、电子设备和存储介质

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108958608B (zh) * 2018-07-10 2022-07-15 广州视源电子科技股份有限公司 电子白板的界面元素操作方法、装置及交互智能设备
CN112115283A (zh) * 2020-08-25 2020-12-22 天津洪恩完美未来教育科技有限公司 绘本数据的处理方法、装置及设备
CN117278802B (zh) * 2023-11-23 2024-02-13 湖南快乐阳光互动娱乐传媒有限公司 一种视频剪辑痕迹的比对方法及装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104376001A (zh) * 2013-08-13 2015-02-25 腾讯科技(深圳)有限公司 一种ppt播放方法及装置
CN104765714A (zh) * 2014-01-08 2015-07-08 中国移动通信集团浙江有限公司 一种电子阅读与听书的切换方法及装置
CN104994434A (zh) * 2015-07-06 2015-10-21 天脉聚源(北京)教育科技有限公司 一种视频播放方法及装置
CN105530440A (zh) * 2014-09-29 2016-04-27 北京金山安全软件有限公司 一种视频的制作方法及装置

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050154679A1 (en) * 2004-01-08 2005-07-14 Stanley Bielak System for inserting interactive media within a presentation
CN101299250A (zh) * 2007-04-30 2008-11-05 深圳华飚科技有限公司 在线协同幻灯片制作服务系统
CN101344883A (zh) * 2007-07-09 2009-01-14 宇瞻科技股份有限公司 记录演示文稿的方法
US8381086B2 (en) * 2007-09-18 2013-02-19 Microsoft Corporation Synchronizing slide show events with audio
CN102156613A (zh) * 2011-03-29 2011-08-17 汉王科技股份有限公司 演示文稿的显示方法及装置
JP2015056880A (ja) * 2013-09-13 2015-03-23 株式会社ネクスウェイ プレゼンテーション提供システム、方法、及びプログラム
CN105472406B (zh) * 2015-12-04 2019-01-29 广东威创视讯科技股份有限公司 演示文稿显示方法和系统

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104376001A (zh) * 2013-08-13 2015-02-25 腾讯科技(深圳)有限公司 一种ppt播放方法及装置
CN104765714A (zh) * 2014-01-08 2015-07-08 中国移动通信集团浙江有限公司 一种电子阅读与听书的切换方法及装置
CN105530440A (zh) * 2014-09-29 2016-04-27 北京金山安全软件有限公司 一种视频的制作方法及装置
CN104994434A (zh) * 2015-07-06 2015-10-21 天脉聚源(北京)教育科技有限公司 一种视频播放方法及装置

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112533054A (zh) * 2019-09-19 2021-03-19 腾讯科技(深圳)有限公司 在线视频的播放方法、装置及存储介质
CN114501106A (zh) * 2020-08-04 2022-05-13 腾讯科技(深圳)有限公司 一种文稿显示控制方法、装置、电子设备和存储介质
CN114398883A (zh) * 2022-01-19 2022-04-26 平安科技(深圳)有限公司 演示文稿生成方法、装置、计算机可读存储介质及服务器
CN114398883B (zh) * 2022-01-19 2023-07-07 平安科技(深圳)有限公司 演示文稿生成方法、装置、计算机可读存储介质及服务器

Also Published As

Publication number Publication date
CN108241598A (zh) 2018-07-03

Similar Documents

Publication Publication Date Title
WO2018120819A1 (fr) Procédé et dispositif pour produire des présentations
WO2018120821A1 (fr) Procédé et dispositif de production d'une présentation
JP5030617B2 (ja) デジタル・オーディオ・プレーヤ上でrssコンテンツをレンダリングするためのrssコンテンツ管理のための方法、システム、およびプログラム(デジタル・オーディオ・プレーヤ上でrssコンテンツをレンダリングするためのrssコンテンツ管理)
US8937620B1 (en) System and methods for generation and control of story animation
US8966360B2 (en) Transcript editor
US8548618B1 (en) Systems and methods for creating narration audio
JP2023548008A (ja) 音声およびビデオ組立てのためのテキスト駆動型エディタ
US20200058288A1 (en) Timbre-selectable human voice playback system, playback method thereof and computer-readable recording medium
CN107517323B (zh) 一种信息分享方法、装置及存储介质
US20080027726A1 (en) Text to audio mapping, and animation of the text
WO2012086356A1 (fr) Format de fichier, serveur, dispositif de visualisation pour bande dessinée numérique, dispositif de génération de bande dessinée numérique
US20180226101A1 (en) Methods and systems for interactive multimedia creation
US20120177345A1 (en) Automated Video Creation Techniques
JPH0778074A (ja) マルチメディアのスクリプト作成方法とその装置
JP2007242012A (ja) デジタル・オーディオ・プレーヤ上で電子メールをレンダリングするための電子メール管理のための方法、システム、およびプログラム(デジタル・オーディオ・プレーヤ上で電子メールをレンダリングするための電子メール管理)
CN110781328A (zh) 基于语音识别的视频生成方法、系统、装置和存储介质
CN111930289B (zh) 一种处理图片和文本的方法和系统
CN108241672A (zh) 一种在线展示演示文稿的方法和装置
WO2018120820A1 (fr) Procédé et appareil de production de présentations
CN114638232A (zh) 一种文本转换成视频的方法、装置、电子设备及存储介质
Chi et al. Synthesis-assisted video prototyping from a document
KR20210050410A (ko) 영상 컨텐츠에 대한 합성음 실시간 생성에 기반한 컨텐츠 편집 지원 방법 및 시스템
CN119110139A (zh) 自动化视频生成方法、装置、设备及存储介质
CN119299800A (zh) 视频生成方法、装置、计算设备、存储介质及程序产品
CN115695680A (zh) 视频编辑方法、装置、电子设备及计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17886381

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17886381

Country of ref document: EP

Kind code of ref document: A1