
WO2018184249A1 - Music generating method and terminal - Google Patents

Music generating method and terminal

Info

Publication number
WO2018184249A1
WO2018184249A1 (PCT/CN2017/079829, CN2017079829W)
Authority
WO
WIPO (PCT)
Prior art keywords
music
picture
information
parameter information
generating
Prior art date
Application number
PCT/CN2017/079829
Other languages
English (en)
Chinese (zh)
Inventor
陈崇震
Original Assignee
格兰比圣(深圳)科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 格兰比圣(深圳)科技有限公司
Priority to PCT/CN2017/079829
Publication of WO2018184249A1


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output

Definitions

  • the present invention relates to the field of mobile terminals, and in particular, to a music generating method and a terminal.
  • the music associated with the picture can also be played at the same time as the picture is displayed on the terminal.
  • In the conventional approach, the terminal first searches the network or a pre-established music library for matching music according to, for example, the location and time at which the picture was captured, and then plays the matched music through the player while the picture is browsed.
  • The technical problem to be solved by the present invention is that, in the existing way of providing music that matches a picture, the match between the retrieved music and the picture is poor, and the retrieved music cannot adequately reflect the content of the picture.
  • The technical solution adopted by the present invention to solve this technical problem is to provide a music generating method comprising the following steps:
  • the step of generating music according to the acquired element information and parameter information of the element includes:
  • the sound of the element is generated using a sound simulation method.
  • the step of generating music according to the acquired element information and the parameter information of the element comprises: determining a note type corresponding to the element information; acquiring a note in the corresponding note category according to the parameter information of the element;
  • the music of the element is generated according to the acquired note.
  • the step of generating music according to the acquired element information and parameter information of the element further comprises:
  • the music of each element of the picture is superimposed and/or arranged in order to generate music that matches the picture.
  • the music generating method of the present invention further includes: associating the generated music with the picture.
  • the music generating method of the present invention further includes: after receiving a play instruction, playing the generated music.
  • a music generating terminal including:
  • an image recognition module configured to identify a picture, obtain element information in the picture, and parameter information of the element
  • the music generating module is configured to generate music according to the acquired element information and parameter information of the element.
  • the music generating terminal of the present invention further includes:
  • an association module configured to associate the generated music with the picture.
  • the music generating terminal of the present invention further includes:
  • a playing module configured to play the generated music after receiving the play instruction.
  • the generated music can directly reflect the elements in the picture, matches the picture well, and realizes a conversion from the visual to the auditory.
  • FIG. 1 is a schematic flow chart of a first embodiment of a music generating method provided by the present invention.
  • FIG. 2 is a schematic flowchart of step S12 in the first embodiment of the music generating method provided by the present invention.
  • FIG. 3 is a schematic flowchart of step S22 in the second embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of a third embodiment of the music generating method provided by the present invention.
  • FIG. 5 is a schematic flow chart of a fourth embodiment of a music generating method provided by the present invention.
  • FIG. 6 is a schematic structural diagram of a first embodiment of a music generating terminal provided by the present invention.
  • FIG. 7 is a structural diagram of a music generation module in another embodiment of a music generation terminal provided by the present invention.
  • FIG. 1 shows a first embodiment of the music generating method according to the present invention. The method of this embodiment includes the following steps:
  • Step S11 Identify a picture, obtain element information in the picture, and parameter information of the element.
  • Step S12 Generate music according to the acquired element information and the parameter information of the element.
  • the picture can be acquired in various ways, such as by network downloading, camera shooting, and the like.
  • the element information in the picture and the parameter information of the element may be automatically identified and acquired, or the element information of the corresponding picture and the parameter information of the element may be identified and obtained after receiving the control instruction.
  • the user can input control commands by interacting with the terminal.
  • An element is content recorded in the picture that can be simulated as sound; for example, in a picture of a frog on a pond on a rainy day, the elements can be both the rain and the frog.
  • The parameter information of an element includes information reflecting the characteristics of the element, such as the number of instances of the element, its curvature (radian), its length, and the like.
  • The elements in the picture and their parameters can be identified by various image recognition technologies; preferably, an image edge detection algorithm (pattern recognition) is used to identify the various elements contained in the picture, as well as the various parameter information of each element.
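  • As a minimal illustrative sketch (not part of the original disclosure) of how such element and parameter extraction could look, the code below assumes OpenCV 4.x and uses Canny edge detection with contour analysis; the thresholds, the noise-area cutoff, and the idea of treating each contour as one candidate element are assumptions, and labelling a contour as "rain" or "frog" would require a separate classifier.

```python
# Illustrative sketch only: edge detection plus contour analysis, assuming
# OpenCV 4.x (pip install opencv-python). Each external contour is treated
# as one candidate element; its area and perimeter stand in for the
# "parameter information" (number, curvature, length) mentioned above.
import cv2

def extract_elements(image_path, canny_low=50, canny_high=150, min_area=10.0):
    """Return a list of candidate elements with simple parameter information."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, canny_low, canny_high)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    elements = []
    for contour in contours:
        area = cv2.contourArea(contour)
        if area < min_area:                 # skip tiny noise contours
            continue
        elements.append({
            "contour": contour,
            "parameters": {
                "area": area,
                "length": cv2.arcLength(contour, True),  # perimeter length
            },
        })
    return elements
```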
  • step S12 includes the following sub-steps:
  • Sub-step S121 generating music of the element by using a sound simulation method according to the acquired element information and the parameter information of the element;
  • Sub-step S122 superimposing and/or arranging the music of each element of the picture to generate music that matches the picture.
  • music can be generated by various sound simulation calculation methods.
  • Specifically, the subject whose sound needs to be simulated is determined according to the element information, and the characteristics of the sound into which that subject is simulated are determined according to the parameter information.
  • For example, if the element is rain and its parameter information indicates a large number of raindrops of considerable length, the sound of a downpour is simulated by the sound simulation calculation method.
  • If the element is a frog and its parameter information indicates a plurality of frogs, the sound of frogs croaking is simulated by the sound simulation calculation method.
  • If the picture contains only one element, the music of that element serves as the music matching the picture. If the picture contains multiple elements, the music of each element can be superimposed and/or arranged in order: for example, the sound of a downpour superimposed on the croaking of frogs; or the sound of a downpour followed by the croaking of frogs; or the sound of a downpour and croaking frogs followed by the sound of trees swaying in the wind. A minimal sketch of such mixing and ordering is given below.
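  • The following is a minimal sketch of the superposition and ordering step, assuming each element's sound has already been synthesized as a numpy waveform at a common sample rate; the helper names in the usage comment (simulate_rain, simulate_frogs) are hypothetical stand-ins for whatever sound simulation calculation method is used.

```python
# Illustrative sketch: superimposing (mixing) and arranging (concatenating)
# per-element waveforms, assuming 1-D numpy float arrays at one sample rate.
import numpy as np

def superimpose(tracks):
    """Mix several waveforms by summing them, padded to the longest one."""
    longest = max(len(t) for t in tracks)
    mix = np.zeros(longest, dtype=np.float64)
    for t in tracks:
        mix[:len(t)] += t
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 0 else mix      # normalise to avoid clipping

def arrange_in_order(tracks):
    """Play element sounds one after another."""
    return np.concatenate(tracks)

# Hypothetical usage, with assumed synthesis helpers:
#   rain = simulate_rain(drop_count=2000, duration=10.0)
#   frogs = simulate_frogs(count=5, duration=10.0)
#   picture_music = superimpose([rain, frogs])        # downpour over croaking
#   picture_music = arrange_in_order([rain, frogs])   # downpour, then croaking
```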
  • The music generated by performing sound simulation using the element information in the picture and the parameter information of the elements can directly reflect the content of the picture, matches the picture to a high degree, and realizes a conversion from the visual to the auditory.
  • In a second embodiment, the music generating method includes the following steps:
  • Step S20 Identify a picture, obtain element information in the picture, and parameter information of the element.
  • Step S21 Generate music according to the acquired element information and the parameter information of the element.
  • step S21 includes the following sub-steps:
  • Sub-step S211 determining a note category corresponding to the element information
  • Sub-step S212 Acquire notes in corresponding note categories according to parameter information of the element
  • Sub-step S213 generating music of the element according to the acquired note;
  • Sub-step S214 superimposing and/or arranging the music of each element of the picture to generate music that matches the picture.
  • The notes used to simulate the various elements are classified in advance: each element corresponds to one note category, and each note category includes a plurality of notes for expressing the features of that element.
  • In sub-step S211, the corresponding note category is obtained according to the acquired element information.
  • In sub-step S212, all the notes that embody the features of the element are found in that note category according to the parameter information of the element.
  • In sub-step S213, the music of the element is generated using all of the acquired notes. If there is only one element in the picture, the music of that element is used as the music matching the picture. If the picture contains multiple elements, then in sub-step S214 the music of each element is superimposed and/or arranged in order to produce music that matches the picture. A toy sketch of the note-category mapping follows below.
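  • The following toy sketch illustrates the note-category idea under stated assumptions: a dictionary maps element names to candidate MIDI pitches, and a simple count-based rule picks how many notes to use. The categories, pitch numbers, and selection rule are invented for illustration and are not taken from the disclosure.

```python
# Illustrative sketch: pre-classified note categories per element, with notes
# selected according to the element's parameter information. All categories,
# pitches and the count-based rule below are assumptions for illustration.
NOTE_CATEGORIES = {
    "rain": [60, 62, 64, 67, 69],   # assumed MIDI pitches for a "rain" category
    "frog": [48, 50, 53],           # assumed lower pitches for a "frog" category
}

def notes_for_element(element, parameters):
    """Pick notes from the element's category based on its parameters."""
    category = NOTE_CATEGORIES[element]
    count = min(parameters.get("count", 1), len(category))
    return category[:count]         # assumed rule: more instances, more notes

def element_music(element, parameters, beat=0.25):
    """Return a toy (pitch, duration) sequence for one element."""
    return [(pitch, beat) for pitch in notes_for_element(element, parameters)]

if __name__ == "__main__":
    print(element_music("rain", {"count": 4}))   # [(60, 0.25), (62, 0.25), ...]
    print(element_music("frog", {"count": 2}))   # [(48, 0.25), (50, 0.25)]
```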
  • The dynamic rise and fall of the music generated using the element information in the picture and the notes corresponding to the parameter information of the elements can directly reflect the content of the picture, match the picture to a high degree, and achieve a conversion from the visual to the auditory.
  • In a third embodiment, the music generating method includes the following steps:
  • Step S40 Identify a picture, obtain element information in the picture, and parameter information of the element.
  • Step S41 generating music according to the acquired element information and parameter information of the element
  • Step S42 Associating the generated music with the picture.
  • In step S42, the music generated in step S41 is associated with the corresponding picture, and the music and the association are saved.
  • There are several ways to associate music with a picture: for example, setting up a relationship table and recording the generated music and pictures in the table (a minimal sketch of such a table is given below).
  • In this way, the user can find or view the corresponding picture when playing the music, or obtain and play the corresponding music when browsing the picture.
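  • Below is a minimal sketch of one way such a relationship table could be kept, assuming SQLite and file paths for the picture and the generated music; the schema and path convention are assumptions, not part of the original disclosure.

```python
# Illustrative sketch: a relationship table linking pictures to generated
# music, using the standard-library sqlite3 module. The schema is assumed.
import sqlite3

def create_table(conn):
    conn.execute("""
        CREATE TABLE IF NOT EXISTS picture_music (
            picture_path TEXT PRIMARY KEY,
            music_path   TEXT NOT NULL
        )
    """)

def associate(conn, picture_path, music_path):
    """Record that this music was generated for this picture."""
    conn.execute("INSERT OR REPLACE INTO picture_music VALUES (?, ?)",
                 (picture_path, music_path))
    conn.commit()

def music_for(conn, picture_path):
    """Look up the music associated with a picture, if any."""
    row = conn.execute(
        "SELECT music_path FROM picture_music WHERE picture_path = ?",
        (picture_path,)).fetchone()
    return row[0] if row else None

# Usage sketch:
#   conn = sqlite3.connect("associations.db")
#   create_table(conn)
#   associate(conn, "pond_frog.jpg", "pond_frog_generated.wav")
#   print(music_for(conn, "pond_frog.jpg"))
```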
  • In a fourth embodiment, the music generating method includes the following steps:
  • Step S50 identifying a picture, acquiring element information in the picture and parameter information of the element;
  • Step S51 generating music according to the acquired element information and parameter information of the element
  • Step S52 associating the generated music with the picture;
  • Step S53 When the play command is received, the generated music is played.
  • The difference between this embodiment and the third embodiment is that, when the terminal receives a play instruction, the generated music is played. Specifically, when the user browses a picture that has matching music, the terminal receives the play instruction and automatically plays the music that matches the picture.
  • A play instruction can also be input through other interactions with the terminal device to play the music that matches the picture. For example, after browsing to the picture, the user double-clicks it; the terminal receives the play instruction and then plays the music that matches the picture (see the handler sketch below).
  • In this way, when the user browses to a picture for which matching music exists, the terminal can automatically play the matched music, thereby improving the user experience.
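  • As a small sketch of how a play instruction might be handled (for example on a double-click), the handler below takes the association lookup and the audio playback as injected callables, since the disclosure does not tie playback to any particular player; the function and parameter names are assumptions.

```python
# Illustrative sketch: handling a play instruction for a picture. Both the
# association lookup and the actual playback are injected, so no particular
# player library is assumed.
from typing import Callable, Optional

def on_play_instruction(picture_path: str,
                        lookup: Callable[[str], Optional[str]],
                        play: Callable[[str], None]) -> bool:
    """Play the music associated with the picture, if an association exists."""
    music_path = lookup(picture_path)   # e.g. the music_for() sketch above
    if music_path is None:
        return False                    # no music was generated for this picture
    play(music_path)                    # hand the file to the platform's player
    return True
```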
  • a music generating terminal 600 includes:
  • an image recognition module 610 configured to identify a picture, obtain element information in the picture, and parameter information of the element
  • a music generating module 620 configured to generate music according to the acquired element information and parameter information of the element
  • an association module 630 configured to associate the generated music with a picture
  • the playing module 640 is configured to play the generated music after receiving the play instruction.
  • the music generating module 620 includes:
  • a first element music sub-module 621 configured to generate music of the element by using a sound simulation method according to the acquired element information and parameter information of the element;
  • the music synthesis sub-module 622 is configured to superimpose and/or sequentially arrange the music of each element of the picture to generate music that matches the picture.
  • the music generating module may further include:
  • a category determining sub-module 710, configured to determine a note category corresponding to the element information;
  • a note acquisition sub-module 720 configured to obtain a note in a corresponding note category according to the parameter information of the element
  • a second element music sub-module 730 configured to generate music of the element according to the acquired note
  • the music synthesis sub-module 740 is configured to superimpose and/or arrange the music of each element of the picture to generate music that matches the picture.
  • The music generated by performing sound simulation using the element information in the picture and the parameter information of the elements can directly reflect the content of the picture, matches the picture to a high degree, and realizes a conversion from the visual to the auditory.
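  • The following is an illustrative sketch of how the four modules (610, 620, 630, 640) described above might be wired together in code; the class and method names are assumptions for illustration only, and each injected dependency stands in for the corresponding module.

```python
# Illustrative sketch: wiring the image recognition (610), music generating
# (620), association (630) and playing (640) modules. Each dependency is
# injected; names and signatures are assumptions, not the patented design.
class MusicGeneratingTerminal:
    def __init__(self, image_recognition, music_generation, association, playing):
        self.image_recognition = image_recognition   # picture -> elements
        self.music_generation = music_generation     # elements -> music file
        self.association = association               # object with associate()/lookup()
        self.playing = playing                       # music file -> audio output

    def process_picture(self, picture_path):
        """Recognize the picture, generate matching music, and associate it."""
        elements = self.image_recognition(picture_path)
        music_path = self.music_generation(picture_path, elements)
        self.association.associate(picture_path, music_path)
        return music_path

    def on_play_instruction(self, picture_path):
        """Play the music previously associated with the picture, if any."""
        music_path = self.association.lookup(picture_path)
        if music_path is not None:
            self.playing(music_path)
```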
  • Modules or units or subunits in the apparatus of the embodiments of the present invention may be combined, divided, and deleted according to actual needs.
  • A person skilled in the art will understand that all or part of the steps of the foregoing embodiments can be completed by a program instructing the relevant hardware of the terminal device. The program can be stored in a computer-readable storage medium, and the storage medium can include a flash drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a music generating method and terminal. The method comprises: recognizing a picture to acquire element information in the picture and parameter information of the elements; and generating music according to the acquired element information and the parameter information of the elements. The terminal comprises: an image recognition module for recognizing a picture to acquire element information in the picture and parameter information of the elements; and a music generating module for generating music according to the acquired element information and the parameter information of the elements. The music generated with this technical solution can directly reflect the elements in the picture, matches the picture well, and realizes a visual-to-auditory conversion.

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/079829 WO2018184249A1 (fr) 2017-04-08 2017-04-08 Music generating method and terminal


Publications (1)

Publication Number Publication Date
WO2018184249A1 (fr)

Family

ID=63712776

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/079829 WO2018184249A1 (fr) 2017-04-08 2017-04-08 Music generating method and terminal

Country Status (1)

Country Link
WO (1) WO2018184249A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100325581A1 (en) * 2006-11-10 2010-12-23 Microsoft Corporation Data object linking and browsing tool
CN103475789A (zh) * 2013-08-26 2013-12-25 宇龙计算机通信科技(深圳)有限公司 Mobile terminal and control method therefor
CN104065869A (zh) * 2013-03-18 2014-09-24 三星电子株式会社 Method for displaying an image in combination with playing audio in an electronic device
CN105045840A (zh) * 2015-06-30 2015-11-11 广东欧珀移动通信有限公司 Picture display method and mobile terminal
CN105912722A (zh) * 2016-05-04 2016-08-31 广州酷狗计算机科技有限公司 Song sending method and apparatus


Similar Documents

Publication Publication Date Title
US12278859B2 (en) Creating a cinematic storytelling experience using network-addressable devices
US8347213B2 (en) Automatically generating audiovisual works
CN105159639B (zh) Audio cover display method and apparatus
JP2019212308A (ja) Video service providing method and service server using same
US12176008B2 (en) Method and apparatus for matching music with video, computer device, and storage medium
US11511200B2 (en) Game playing method and system based on a multimedia file
CN110675886A (zh) Audio signal processing method and apparatus, electronic device, and storage medium
US20230368461A1 (en) Method and apparatus for processing action of virtual object, and storage medium
WO2024188242A1 (fr) Question answering method and apparatus, device, and storage medium
CN109474843A (zh) Method for controlling a terminal by voice, client, and server
US11568871B1 (en) Interactive media system using audio inputs
US20170171471A1 (en) Method and device for generating multimedia picture and an electronic device
CN113450804B (zh) Speech visualization method and apparatus, projection device, and computer-readable storage medium
CN112380871A (zh) Semantic recognition method, device, and medium
WO2018184249A1 (fr) Music generating method and terminal
CN106060394A (zh) Photographing method, apparatus, and terminal device
CN115119041B (zh) Cross-screen playback control method, apparatus, device, and computer storage medium
CN115148184B (zh) Speech synthesis and broadcasting method, teaching method, live streaming method, and apparatus
WO2018187890A1 (fr) Method and device for generating music according to an image
CN119207346A (zh) Karaoke method, computer device, readable storage medium, and computer program product
WO2022209648A1 (fr) Information processing device, information processing method, and non-transitory computer-readable medium
CN113192533A (zh) Audio processing method and apparatus, electronic device, and storage medium
TWI590077B (zh) Multimedia file playback method and electronic device
CN113840152A (zh) Live streaming key point processing method and apparatus
CN119052393A (zh) Data processing method, medium, product, and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17904822

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17904822

Country of ref document: EP

Kind code of ref document: A1