WO2018198382A1 - Lyrics display apparatus and method
- Publication number
- WO2018198382A1 (PCT/JP2017/017436)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- display
- display unit
- lyrics
- unit
- state
- Prior art date
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
Definitions
- the present invention relates to a lyrics display apparatus and method for displaying lyrics.
- Conventionally, an apparatus that displays lyrics to be sung according to a performance by a performer is known (Patent Document 1).
- This device updates the singing position in the lyrics indicated by the lyric data, and displays the character at the singing position in a mode (color) different from the other characters.
- Some electronic musical instruments and the like accept commands by displaying a setting screen for setting various functions, sound generation parameters, and the like.
- An object of the present invention is to provide a lyric display apparatus and method capable of visually recognizing lyrics and state without complicated operations.
- A lyrics display device comprising: a display unit (33) composed of a plurality of display parts (45, 46) arranged continuously; a data acquisition unit for acquiring lyric data (14a) including character information for displaying lyrics; a state acquisition unit for acquiring a predetermined state; and a display control unit that displays the character information included in the lyric data acquired by the data acquisition unit on a first display part group (45), which is a partial group of continuous display parts in the display unit, while displaying the state acquired by the state acquisition unit on a second display part group (46) that does not belong to the first display part group.
- In the corresponding method, the character information included in the lyric data is displayed on the first display part group, which is a group of continuous display parts in the display unit, and the state acquired by the state acquisition step is displayed on the second display part group in the display unit.
- The reference symbols in parentheses are illustrative.
- the lyrics and the state can be visually recognized without a complicated operation.
- FIG. 1 is a schematic diagram of a lyrics display device.
- FIG. 2 is a schematic diagram of a lyrics display device.
- FIG. 3 is a block diagram of an electronic musical instrument.
- FIG. 4 is a diagram showing a main part of the display unit.
- FIG. 5 is a flowchart showing an example of the flow of processing when a performance is performed.
- FIG. 6 is a diagram showing an example of lyric text data.
- FIG. 7 is a diagram showing an example of types of speech element data.
- FIG. 8 is a flowchart of display processing.
- FIG. 9 is a diagram showing a display example in the display unit.
- FIG. 10 is a diagram showing a display example in the display unit.
- FIGS. 11A to 11C are diagrams showing display examples of the sub-areas.
- FIG. 12 is a diagram showing a display example in the display unit.
- FIG. 13 is a diagram showing a display example in the display unit.
- FIG. 14 is a diagram showing a display example in the display unit.
- FIG. 1 and FIG. 2 are schematic diagrams of a lyrics display device according to an embodiment of the present invention.
- This lyric display device is configured as an electronic musical instrument 100 that is a keyboard musical instrument as an example, and has a main body 30 and a neck 31.
- the main body 30 has a first surface 30a, a second surface 30b, a third surface 30c, and a fourth surface 30d.
- the first surface 30a is a keyboard arrangement surface on which a keyboard unit KB composed of a plurality of keys is arranged.
- The second surface 30b is the back surface; hooks 36 and 37 are provided on it.
- A strap (not shown) can be stretched between the hooks 36 and 37, and the performer usually plays, for example by operating the keyboard unit KB, with the strap over the shoulder. Accordingly, during shoulder-mounted use, with the scale direction (key arrangement direction) of the keyboard unit KB being the left-right direction, the first surface 30a and the keyboard unit KB face the listener, while the third surface 30c and the fourth surface 30d generally face downward and upward, respectively.
- The electronic musical instrument 100 is designed so that the keyboard unit KB is mainly played with the right hand during shoulder-mounted use.
- the neck portion 31 extends from the side portion of the main body portion 30.
- the neck portion 31 is provided with various operators including a forward operator 34 and a return operator 35.
- A display unit 33 made of liquid crystal or the like is disposed on the fourth surface 30d of the main body 30.
- the main body portion 30 and the neck portion 31 have a substantially rectangular shape in a side view, but the four surfaces constituting the rectangle may not be flat surfaces but may be curved surfaces such as convex surfaces.
- The electronic musical instrument 100 is a musical instrument that can perform singing simulation in response to an operation on a performance operator.
- Singing simulation means outputting a sound that simulates a human voice by singing-voice synthesis.
- a white key and a black key are arranged in pitch order, and each key is associated with a different pitch.
- the user presses a desired key on the keyboard KB.
- the electronic musical instrument 100 detects a key operated by the user and generates a singing sound having a pitch corresponding to the operated key. Note that the order of syllables of the singing sounds to be generated is predetermined.
- FIG. 3 is a block diagram of the electronic musical instrument 100.
- The electronic musical instrument 100 includes a CPU (Central Processing Unit) 10, a timer 11, a ROM (Read Only Memory) 12, a RAM (Random Access Memory) 13, a data storage unit 14, a performance operator 15, another operator 16, a parameter value setting operator 17, a display unit 33, a sound source 19, an effect circuit 20, a sound system 21, a communication I/F (Interface) 22, and a bus 23.
- the CPU 10 is a central processing unit that controls the entire electronic musical instrument 100.
- the timer 11 is a module that measures time.
- the ROM 12 is a non-volatile memory that stores control programs and various data.
- the RAM 13 is a volatile memory used as a work area for the CPU 10 and various buffers.
- the display unit 33 is a display module such as a liquid crystal display panel or an organic EL (Electro-Luminescence) panel. The display unit 33 displays an operation state of the electronic musical instrument 100, various setting screens, a message for the user, and the like.
- the performance operator 15 is a module that mainly accepts a performance operation that specifies a pitch.
- the keyboard unit KB, the advance operation unit 34, and the return operation unit 35 are included in the performance operation unit 15.
- When the performance operator 15 is a keyboard, it outputs performance information such as note-on/note-off, based on the on/off state of a sensor corresponding to each key, together with the key-pressing strength (speed, velocity).
- The output performance information may be in the form of a MIDI (Musical Instrument Digital Interface) message.
- the other operation element 16 is an operation module such as an operation button or an operation knob for performing settings other than performance, such as settings related to the electronic musical instrument 100, for example.
- the parameter value setting operator 17 is an operation module such as operation buttons and operation knobs for setting sound parameters.
- Parameters for the attributes of the singing sound include, for example, harmonics, brightness, resonance, and gender factor.
- Harmonics is a parameter that sets the balance of the overtone components contained in the voice.
- Brightness is a parameter for setting the brightness of the voice, giving a tonal change.
- Resonance is a parameter for setting the tone color and strength of the resonance.
- The gender factor is a parameter for setting the formant; it changes the thickness and texture of the voice in a feminine or masculine direction.
- the external storage device 3 is an external device connected to the electronic musical instrument 100, for example, and is a device that stores audio data, for example.
- the communication I / F 22 is a communication module that communicates with an external device.
- the bus 23 performs data transfer between each unit in the electronic musical instrument 100.
- the data storage unit 14 stores singing data 14a (lyric data).
- the singing data 14a includes lyric text data, phonological information database, and the like.
- Lyric text data is data describing the lyrics.
- In the lyric text data, the lyrics for each song are described in syllable units. That is, the lyric text data has character information obtained by dividing the lyrics into syllables, and this character information is also the display information for displaying the lyrics corresponding to each syllable.
- the syllable is a group of sounds output in response to one performance operation.
- the phoneme information database is a database that stores speech segment data.
- The speech segment data is data indicating a speech waveform; for example, it includes, as waveform data, spectrum data of a sample sequence of the speech segment. The speech segment data further includes segment pitch data indicating the pitch of the waveform of the speech segment.
- the lyrics text data and the speech segment data may be managed by a database.
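The singing data described above (syllable-divided character information plus speech-segment references) can be modeled with a small data structure. This is an illustrative sketch, not the patent's actual format; the field names and segment identifiers are assumptions:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Syllable:
    text: str            # character(s) displayed for this syllable
    segments: List[str]  # identifiers of the speech segments to play

@dataclass
class SingingData:
    """Illustrative stand-in for the singing data 14a."""
    syllables: List[Syllable] = field(default_factory=list)

# The five-syllable lyric c1..c5 of FIG. 6, romanized:
song = SingingData(syllables=[
    Syllable("ha", ["#-h", "h-a", "a"]),
    Syllable("ru", ["a-r", "r-u", "u"]),
    Syllable("yo", ["u-y", "y-o", "o"]),
    Syllable("ko", ["o-k", "k-o", "o"]),
    Syllable("i",  ["o-i", "i"]),
])
```

Each `Syllable` pairs the display information with the pronunciation information, mirroring how the lyric text data doubles as display data.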
- The sound source 19 is a module having a plurality of tone generation channels. Under the control of the CPU 10, one tone generation channel in the sound source 19 is assigned according to the user's performance.
- the sound source 19 reads out the speech segment data corresponding to the performance from the data storage unit 14 and generates singing sound data in the assigned sounding channel.
- the effect circuit 20 applies the acoustic effect specified by the parameter value setting operator 17 to the singing sound data generated by the sound source 19.
- the sound system 21 converts the singing sound data processed by the effect circuit 20 into an analog signal by a digital / analog converter. And the sound system 21 amplifies the singing sound converted into the analog signal, and outputs it from a speaker.
- FIG. 4 is a diagram showing the main part of the display unit 33.
- the display unit 33 includes a first main area 41, a second main area 42, a first sub area 43, and a second sub area 44 as display areas.
- The entire display area has two rows (two tiers): the first main area 41 and the first sub area 43 form the first row (upper row), and the second main area 42 and the second sub area 44 form the second row (lower row).
- In each of the main areas 41 and 42, display frames 45 (45-1, 45-2, 45-3 ... 45-13), which are a plurality of display parts, are arranged continuously in series in the longitudinal direction of the display unit 33.
- Each of the sub areas 43 and 44 also has a display frame 46 (46-1, 46-2, 46-3) which is a plurality of display units.
- The plurality of display frames 45 constitute a first display part group, which is a partial group of continuous display parts, and the plurality of display frames 46 constitute a second display part group that does not belong to the first display part group.
- The display frames 45 need only be able to display characters, and the display frames 46 need only be able to display visual information such as icons; neither configuration is otherwise limited, and it is not essential that each display frame 45 be surrounded by a frame. Characters corresponding to syllables are displayed in the order of pronunciation, starting from the leftmost display frame 45-1 in FIG. 4.
- the main areas 41 and 42 are mainly used for displaying lyrics.
- The sub-areas 43 and 44 are used for display other than lyrics, mainly for status display.
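The two-row layout of FIG. 4 (a 13-frame main area plus a 3-frame sub-area per row) can be sketched as a small model; the class and method names here are illustrative assumptions:

```python
class DisplayUnit:
    """Sketch of display unit 33: two rows, each with a 13-frame main
    area (display frames 45, for lyrics) and a 3-frame sub-area
    (display frames 46, for state icons), per FIG. 4."""
    MAIN_FRAMES = 13
    SUB_FRAMES = 3

    def __init__(self):
        self.rows = [
            {"main": [" "] * self.MAIN_FRAMES, "sub": [" "] * self.SUB_FRAMES}
            for _ in range(2)
        ]

    def show_lyrics(self, row, text):
        """Fill a main area left to right; overflow stays undisplayed."""
        chars = list(text)[: self.MAIN_FRAMES]
        area = [" "] * self.MAIN_FRAMES
        area[: len(chars)] = chars
        self.rows[row]["main"] = area

unit = DisplayUnit()
unit.show_lyrics(0, "haruyokoi")  # first phrase, romanized
```

Characters beyond the thirteenth frame are simply not shown at first, matching the description that only as many characters as fit in the main area are displayed.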
- FIG. 5 is a flowchart showing an example of a processing flow when a performance by the electronic musical instrument 100 is performed.
- processing when the user performs selection of a performance song and performance of the selected song will be described.
- a case will be described in which only a single sound is output even when a plurality of keys are operated simultaneously. In this case, only the highest pitch among the pitches of keys operated simultaneously may be processed, or only the lowest pitch may be processed.
- the processing described below is realized, for example, when the CPU 10 executes a program stored in the ROM 12 or the RAM 13 and functions as a control unit that controls various components included in the electronic musical instrument 100.
- the CPU 10 waits until an operation for selecting a song to be performed is received from the user (step S101). If there is no music selection operation even after a predetermined time has elapsed, the CPU 10 may determine that the music set by default has been selected.
- When the CPU 10 accepts the selection of a song, it reads the lyric text data of the singing data 14a for the selected song. Then, the CPU 10 sets the cursor position at the first syllable described in the lyric text data (step S102). Here, the cursor is a virtual index indicating the position of the next syllable to be pronounced.
- the CPU 10 determines whether or not note-on based on the operation of the keyboard unit KB is detected (step S103).
- If note-on is not detected, the CPU 10 determines whether note-off is detected (step S107). On the other hand, when note-on is detected, that is, when a new key press is detected, the CPU 10 stops outputting the current sound if one is being output (step S104). Next, the CPU 10 performs output sound generation processing (step S105).
- In the output sound generation processing, the CPU 10 reads out the speech segment data (waveform data) of the syllable corresponding to the cursor position, and outputs a sound having the waveform indicated by the read speech segment data at the pitch corresponding to the note-on. Specifically, the CPU 10 obtains the difference between the pitch indicated by the segment pitch data included in the speech segment data and the pitch corresponding to the operated key, and moves the spectrum distribution indicated by the waveform data in the frequency axis direction by a frequency corresponding to the difference. Thereby, the electronic musical instrument 100 can output a singing sound at the pitch corresponding to the operated key.
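The pitch adjustment in step S105 amounts to scaling the segment's spectrum by the equal-temperament ratio between the operated key's pitch and the segment's recorded pitch. A minimal sketch, assuming pitches are expressed as MIDI note numbers:

```python
def pitch_shift_ratio(segment_midi_note: int, key_midi_note: int) -> float:
    """Frequency ratio needed to move a segment's spectrum from its
    recorded pitch to the pitch of the operated key (equal temperament:
    one semitone = a factor of 2**(1/12))."""
    return 2.0 ** ((key_midi_note - segment_midi_note) / 12.0)

# A segment recorded at A4 (MIDI 69) replayed on the A5 key (MIDI 81)
# must have its spectrum moved up by a factor of 2 (one octave).
ratio = pitch_shift_ratio(69, 81)
```

Multiplying every frequency component of the stored spectrum by this ratio corresponds to the "move in the frequency axis direction" described above.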
- the CPU 10 updates the cursor position (reading position) (step S106), and advances the process to step S107.
- FIG. 6 is a diagram illustrating an example of lyrics text data.
- In this example, a lyric consisting of five syllables c1 to c5 is described in the lyric text data.
- Each character “ha”, “ru”, “yo”, “ko”, “i” indicates one character of Japanese hiragana, and each character corresponds to one syllable.
- The CPU 10 updates the cursor position in syllable units.
- For example, after the syllable c3 is pronounced, the CPU 10 moves the cursor position to the next syllable c4. In this way, the CPU 10 sequentially moves the cursor position to the next syllable in response to each note-on.
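The note-on-driven cursor update (step S106) can be sketched as below. Clamping at the last syllable is an assumption; the patent does not specify behavior past the end of the lyrics:

```python
def advance_cursor(cursor: int, syllables: list) -> int:
    """Move the virtual cursor to the next syllable on note-on
    (step S106); clamps at the last syllable (an assumption)."""
    return min(cursor + 1, len(syllables) - 1)

syllables = ["ha", "ru", "yo", "ko", "i"]  # c1..c5 of FIG. 6, romanized
cursor = 0                                  # cursor starts at c1 (step S102)
for _ in range(3):                          # three note-ons sing c1..c3
    cursor = advance_cursor(cursor, syllables)
# cursor now indicates c4 ("ko"), the next syllable to pronounce
```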
- FIG. 7 is a diagram illustrating an example of the types of speech segment data.
- the CPU 10 extracts speech segment data corresponding to the syllable from the phonological information database in order to pronounce the syllable corresponding to the cursor position.
- There are two types of speech segment data: phoneme chain data and stationary partial data.
- the phoneme chain data is data indicating a speech segment when the pronunciation changes, such as “silence (#) to consonant”, “consonant to vowel”, “vowel to consonant (vowel of the next syllable)”.
- the stationary partial data is data indicating a speech unit when the vowel sound continues.
- For example, when the syllable "ha" is to be pronounced following silence, the sound source 19 selects the phoneme chain data "#-h" corresponding to "silence → consonant h", the phoneme chain data "h-a" corresponding to "consonant h → vowel a", and the stationary partial data "a" corresponding to "vowel a". Then, when the CPU 10 detects a key depression after the performance is started, the singing sound based on the phoneme chain data "#-h", the phoneme chain data "h-a", and the stationary partial data "a" is output at the pitch corresponding to the operation and the velocity corresponding to the operation. In this way, the determination of the cursor position and the pronunciation of the singing sound are executed.
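The segment selection for FIG. 7 can be sketched as follows. The hyphenated identifiers (e.g. "h-a" for the "consonant h → vowel a" chain data) are an illustrative naming convention:

```python
def select_segments(prev_phoneme: str, syllable_phonemes: list) -> list:
    """Pick the phoneme chain data (transitions) and the stationary
    partial data (held final vowel) for one syllable, as in FIG. 7."""
    segments = []
    current = prev_phoneme
    for ph in syllable_phonemes:
        segments.append(f"{current}-{ph}")  # chain data: transition
        current = ph
    segments.append(current)                # stationary data: held sound
    return segments

# Syllable "ha" sung from silence ("#"):
ha_segments = select_segments("#", ["h", "a"])
```

Applied to "ha" after silence, this yields exactly the "#-h", "h-a", "a" sequence described above.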
- If note-off is detected in step S107 in FIG. 5, the CPU 10 stops outputting the sound if one is being output (step S108) and advances the process to step S110. On the other hand, when note-off is not detected, the CPU 10 also advances the process to step S110. In step S110, the CPU 10 determines whether the performance has ended; if it has not, the CPU 10 returns the process to step S103, and if it has, the CPU 10 ends the process.
- the lyric text data included in the singing data 14a includes at least character information associated with a plurality of syllables corresponding to the selected song.
- the lyric text data is data for singing by the singing section (the sound source 19, the effect circuit 20, and the sound system 21).
- the lyric text data is divided into a plurality of sections in advance, and each divided section is referred to as a “phrase”.
- A phrase is a unit of a certain length, divided at boundaries of meaning that are easy for the user to recognize; however, the definition of the section is not limited to this.
- the CPU 10 acquires the song in a state of being divided into a plurality of phrases.
- the phrase includes one or more syllables and character information corresponding to the syllable.
- the CPU 10 causes the first main area 41 (FIG. 4) of the display unit 33 to display character information corresponding to the first phrase among a plurality of phrases corresponding to the selected song.
- the first character of the first phrase is displayed in the leftmost display frame 45-1, and as many characters as can be displayed in the first main area 41 are displayed.
- the second phrase as many characters as can be displayed in the second main area 42 are displayed.
- the keyboard unit KB plays a role as a progress instruction acquisition unit that acquires a singing instruction and, in turn, an instruction to display character information.
- the CPU 10 causes the singing section to sing the syllable to be sung next, and advances the display of the characters displayed in the first main area 41 according to the progress of the syllable.
- The characters are advanced leftward in FIG. 4, and characters that could not be displayed at first appear from the rightmost display frame 45 as the singing progresses.
- the cursor position indicates the syllable to be sung next, and indicates the syllable corresponding to the character displayed in the display frame 45-1 of the first main area 41.
- Note that one character does not necessarily correspond to one syllable.
- For example, in "da" (a kana with a dakuten mark), the two characters "ta" and the dakuten "゛" correspond to one syllable.
- The lyrics may also be in English.
- For example, if the lyric is "september", it consists of the three syllables "sep", "tem", and "ber".
- "sep" is one syllable, but the three letters "s", "e", and "p" correspond to that one syllable. Since the character display advances in units of syllables, in the case of "da" the display advances by two characters when the syllable is sung.
- the lyrics are not limited to Japanese and may be in other languages.
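Because the display advances in syllable units while a syllable may span several characters, the per-syllable advance width is simply the character count of each syllable. A minimal sketch:

```python
def display_advance(syllables: list) -> list:
    """Number of display frames consumed by each sung syllable."""
    return [len(s) for s in syllables]

# A kana plus a separate dakuten mark ("ta" + "\u309b") occupies two
# display frames but is sung as one syllable:
japanese = display_advance(["た\u309b"])
# The English lyric "september" split into its three syllables:
english = display_advance(["sep", "tem", "ber"])
```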
- FIG. 8 is a flowchart of the display process. This process is realized, for example, when the CPU 10 executes a program stored in the ROM 12 or the RAM 13 and functions as a control unit that controls various components of the electronic musical instrument 100. The process in FIG. 8 is executed in parallel with the process shown in FIG. 5 after the power is turned on. In the process illustrated in FIG. 8, the CPU 10 serves as a state acquisition unit, a display control unit, a data acquisition unit, and a setting instruction acquisition unit. FIGS. 9, 10, 12, 13, and 14 are diagrams showing display examples on the display unit 33, and FIGS. 11A to 11C are diagrams showing display examples of the sub-areas.
- the CPU 10 displays the startup screen shown in FIG. 9 on the display unit 33 (step S201).
- On the startup screen, for example, the manufacturer name is first displayed in the first main area 41 and the first sub area 43, and the product name is displayed in the second main area 42 and the second sub area 44.
- the CPU 10 acquires a “predetermined state” regarding the electronic musical instrument 100 (step S202).
- the CPU 10 may switch the display unit 33 to a display indicating that it is being activated.
- the predetermined state includes a state relating to power supply, and includes, for example, whether the type of the power source is a commercial power source or a battery, and further the state of the remaining battery level.
- the predetermined state includes a key transpose setting state and its value, the number of the currently selected song, a state such as the presence / absence of network connection, and the like.
- the predetermined state is not limited to these examples.
- When acquiring the predetermined state, the CPU 10 also acquires the singing data 14a for the selected song, and further extracts a plurality of phrases corresponding to the selected song from the lyric text data of the acquired singing data 14a. The plurality of phrases are ordered in advance.
- Next, the CPU 10 displays a normal screen as shown in FIG. 10 on the display unit 33 (step S203). Specifically, the CPU 10 displays lyrics in the main areas 41 and 42 and displays information on the acquired state in the sub-areas 43 and 44. Regarding the lyrics display in particular, the CPU 10 displays the first of the extracted phrases in the first main area 41, as much of it as can be displayed from the top, and displays the second phrase in the second main area 42, likewise as much as can be displayed from the top. For example, as shown in FIG. 10, the CPU 10 displays a character string "Dandant ..." in the first main area 41 and a character string "Aiweo ..." in the second main area 42. Note that information related to the selected song and the set tone color may be temporarily displayed during the transition from the display indicating activation to the normal screen display.
- the CPU 10 may use at least one of the main areas 41 and 42 for displaying lyrics, or may use at least one of the sub-areas 43 and 44 for displaying information on the state (hereinafter, state information).
- As an example of the state information display, FIG. 10 shows an icon indicating that the power source is a battery and that the charge state (remaining battery level) is full.
- the electronic musical instrument 100 may include a singing synthesis mode and a musical instrument sounding mode.
- the display example in FIG. 10 assumes a singing synthesis mode.
- the CPU 10 may display information indicating the instrument tone color in the main areas 41 and 42 instead of the lyrics.
- Next, the CPU 10 determines whether the power is turned off (step S204). If the power is turned off, the CPU 10 stores the current information (states, set values, and the like) in a nonvolatile memory such as the data storage unit 14 (step S205) and ends the process shown in FIG. 8. On the other hand, if the power is not turned off, the CPU 10 determines whether there has been a change in the predetermined state (step S206). If there is no change in the predetermined state, the CPU 10 advances the process to step S208. If there is a change, the CPU 10 updates the display of the state information (step S207) and then advances the process to step S208.
- In step S207, the CPU 10 updates the display of the sub-areas 43 and 44 based on the newly acquired state. For example, when the CPU 10 acquires state information indicating that the battery level is low or empty, it switches the remaining-battery display in the sub-area 44 to the display corresponding to that level, as shown in FIGS. 11A and 11B. Alternatively, when the power source is switched from the battery to the commercial power source, the CPU 10 switches the display of the sub-area 44 to an icon representing an outlet, as shown in FIG. 11C. In these cases, the lyrics display in the main areas 41 and 42 is maintained as it is.
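The state-to-icon switching of step S207 (FIGS. 11A to 11C) can be sketched as a simple mapping. The icon names and the battery-level thresholds below are assumptions for illustration:

```python
def battery_icon(power_source: str, battery_level: float) -> str:
    """Map the acquired power state to a sub-area icon name.
    Icon names and thresholds are illustrative, per FIGS. 11A-11C."""
    if power_source == "ac":
        return "outlet"          # commercial power: outlet icon (FIG. 11C)
    if battery_level <= 0.1:
        return "battery_empty"   # no remaining charge (FIG. 11B)
    if battery_level <= 0.4:
        return "battery_low"     # low remaining charge (FIG. 11A)
    return "battery_full"        # full charge (FIG. 10)

icon = battery_icon("battery", 1.0)
```

Only the sub-area icon changes; the lyrics in the main areas are left untouched, as the passage above notes.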
- In step S208, the CPU 10 determines whether a change related to the singing data 14a has been accepted.
- the change related to the singing data 14a corresponds to, for example, a change in the singing position in the currently selected song or a change in the selected song itself. If the CPU 10 has not received a change relating to the singing data 14a, the CPU 10 determines whether or not a predetermined setting instruction has been received (step S209).
- the predetermined settings include, for example, settings related to sound generation (parameter setting change, for example, effect, volume, etc.) and settings related to various functions (octave shift, mute, etc.).
- the predetermined setting also includes firmware update and the like. The predetermined setting is not limited to these examples.
- In step S210, the CPU 10 displays a setting screen using both the main areas and the sub-areas. That is, the CPU 10 switches the display mode from the normal screen display to a setting screen display that shows information for the predetermined setting using both the first display part group and the second display part group of the display unit 33. For example, when receiving a firmware update instruction, as shown in FIG. 12, the CPU 10 displays character information indicating the update mode in the main area 41 and the sub-area 43, and displays character information indicating the current version and the updated version in the main area 42 and the sub-area 44.
- Similarly, when a setting change for an effect is received, the CPU 10 displays the effect types and values before and after the change using the main areas 41 and 42 and the sub-areas 43 and 44.
- Regarding the first and second rows of the display unit 33, there is no limitation on how each row is used. For example, a mode is conceivable in which setting-change items are displayed on the first row and setting values on the second row.
- In step S211, the CPU 10 reflects the contents of the received setting instruction, and then determines whether no operation related to the setting instruction has been performed for a predetermined time while the setting screen is displayed (step S212). If a new operation related to the setting instruction occurs before the predetermined time has elapsed, it is accepted and the process returns to step S210. Note that the setting screen display changes moment by moment as steps S210 to S212 are repeated; the example shown in FIG. 12 is one state along the way.
- If the predetermined time elapses without such an operation, the CPU 10 returns the display mode of the display unit 33 to the state immediately before switching to the setting screen display (step S213). Therefore, when there is no operation related to the setting instruction for the predetermined time while the information for the predetermined setting is displayed, the display returns to the previous normal screen display. Thereafter, the process returns to step S204.
- If a change related to the singing data 14a has been accepted in step S208, the CPU 10 determines whether it is a change of the selected song itself (step S214). If a change of the selected song has not been accepted, the accepted change is a change of the singing position in the currently selected song, so the CPU 10 updates the lyrics display in the main areas 41 and 42 (step S218). That is, the CPU 10 advances the display of the phrase in the first main area 41 by one syllable. Specifically, the CPU 10 erases the characters corresponding to one syllable at the left end of the first main area 41 and shifts the remaining character string to the left by the number of erased characters.
- Step S218 corresponds to steps S105 and S106 in FIG. 5.
- The display may also be advanced in units of phrases by operating the advance operator 34, and moved back in units of phrases by operating the return operator 35. When phrase-unit advancing and returning are adopted, a process of updating the cursor position may be added immediately before step S110 in FIG. 5. Thereafter, the process proceeds to step S209.
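The one-syllable display update of step S218 (erase the leftmost syllable's characters, shift the rest left, and fill the freed frames from the not-yet-displayed remainder of the phrase) could look like this; romanized characters are used for illustration:

```python
def scroll_lyrics(window: list, pending: list, syllable_len: int):
    """Advance the first-row lyric display by one syllable (step S218):
    drop the leftmost syllable's characters, shift left, and pull
    queued characters into the freed frames on the right."""
    window = window[syllable_len:]                          # erase + shift
    refill, pending = pending[:syllable_len], pending[syllable_len:]
    return window + refill, pending

# Displayed characters, characters not yet on screen, and a sung
# syllable two characters wide:
window, pending = scroll_lyrics(list("haruyokoi"), list("xyz"), 2)
```

If fewer queued characters remain than frames freed, the window simply shrinks, which matches characters appearing from the right end only while the phrase still has undisplayed characters.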
- If a change of the selected song is accepted in step S214, the CPU 10 displays a screen for changing the song using either or both of the main areas 41 and 42 of the display unit 33 (step S215), and reflects the changed contents according to the change instruction (step S216).
- During this time, the state display is maintained in the sub-areas 43 and 44. For example, as shown in FIG. 14, the titles of the songs before and after the change are displayed in the first main area 41 while the data is exchanged.
- In step S217, the CPU 10 switches the display mode back to the normal screen display, in which the lyrics of the newly selected song are displayed in the main areas 41 and 42. Accordingly, the content updated to the lyrics corresponding to the selected song is displayed on the display unit 33 in the normal screen display. Thereafter, the process proceeds to step S209.
- As described above, the CPU 10 displays the character information contained in the acquired singing data 14a on the first display part group (the display frames 45 of the main areas 41 and 42) of the display unit 33, and at the same time displays the acquired state on the second display part group (the display frames 46 of the sub-areas 43 and 44).
- In addition, since the display mode of the display unit 33 returns to the normal screen display state that existed immediately before switching to the setting screen, the lyrics and state display can be restored without any operation when the setting is completed.
- In the above description, the entire display area of the display unit 33 has a two-row (two-tier) structure, but it may have a structure of three or more rows. Further, although the main areas and the sub-areas are arranged side by side, the arrangement relationship is not limited to this example.
- the acquisition source of the singing data 14a is not limited to the storage unit; an external device connected through the communication I/F 22 may also serve as the acquisition source. Further, the singing data 14a may be acquired by the CPU 10 when the user edits or creates it on the electronic musical instrument 100.
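The display-mode handling described above (phrase-unit raising and lowering via the advance and return operators, opening a tune-change screen, and restoring the normal screen once the change is reflected) can be sketched as a small state machine. This is an illustrative sketch only; the class and method names are assumptions and the patent does not define a programming API.

```python
# Hypothetical sketch of the display-mode flow described in the patent text.
# All names (LyricsDisplay, advance_phrase, etc.) are illustrative assumptions.

class LyricsDisplay:
    NORMAL = "normal"            # normal screen: lyrics in main areas, state in sub-areas
    CHANGE_TUNE = "change_tune"  # tune-change screen (steps S215/S216)

    def __init__(self, phrases):
        self.phrases = phrases       # lyrics split into phrase units
        self.phrase_index = 0        # cursor position, in phrase units
        self.mode = self.NORMAL
        self.saved_mode = None

    def advance_phrase(self):
        """Raise the display by one phrase (advance operator 34)."""
        if self.phrase_index < len(self.phrases) - 1:
            self.phrase_index += 1

    def return_phrase(self):
        """Lower the display by one phrase (return operator 35)."""
        if self.phrase_index > 0:
            self.phrase_index -= 1

    def open_change_screen(self):
        """Show the tune-change screen, remembering the current mode."""
        self.saved_mode = self.mode
        self.mode = self.CHANGE_TUNE

    def apply_change(self, new_phrases):
        """Reflect the change contents and return to the normal screen."""
        self.phrases = new_phrases
        self.phrase_index = 0
        self.mode = self.saved_mode or self.NORMAL

    def current_phrase(self):
        return self.phrases[self.phrase_index]
```

Restoring `saved_mode` in `apply_change` mirrors the behavior described above: once the setting is completed, the lyrics and status display returns without any further user operation.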
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Electrophonic Musical Instruments (AREA)
- Auxiliary Devices For Music (AREA)
Abstract
The invention concerns an apparatus for displaying lyrics that makes it possible to view both lyrics and a state without requiring complicated operations. In each of the main areas (41, 42) of a display unit (33), display frames (45), a plurality of display units, are arranged continuously in series. A CPU (10) acquires singing data (14a) of a selected song and acquires a predetermined state relating to an electronic musical instrument (100). The CPU (10) then causes a first display unit group (the group of display frames (45) in the main areas 41, 42) of the display unit (33) to display character information contained in the singing data (14a), and causes a second display unit group (a group of display frames (46) in sub-areas 43, 44) to display the acquired state.
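The abstract's split between a first display unit group (lyric characters in the main-area frames) and a second display unit group (state in the sub-area frames) can be illustrated with a minimal sketch. The functions, frame counts, and status items below are assumptions for illustration, not the patented implementation.

```python
# Illustrative mapping of lyric characters to main-area display frames (45)
# and state items to sub-area display frames (46). Names are hypothetical.

def fill_display_frames(lyric_chars, frames_per_area, num_main_areas=2):
    """Distribute lyric characters over the display frames of the main
    areas, one character per frame, filling area by area."""
    areas = []
    for a in range(num_main_areas):
        start = a * frames_per_area
        areas.append(list(lyric_chars[start:start + frames_per_area]))
    return areas

def fill_status_frames(status_items, num_sub_areas=2):
    """Place acquired state items (e.g. tempo, tone) into the sub-area
    frames, one item per sub-area."""
    return list(status_items[:num_sub_areas])
```

In this sketch, a ten-character lyric line with five frames per main area fills the two main areas in order, while the first two state items occupy the two sub-areas.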
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201780089625.1A CN110546705B (zh) | 2017-04-27 | 2017-04-27 | 歌词显示装置及方法 |
JP2019515080A JP6732216B2 (ja) | 2017-04-27 | 2017-04-27 | 歌詞表示装置及び歌詞表示装置における歌詞表示方法、電子楽器 |
PCT/JP2017/017436 WO2018198382A1 (fr) | 2017-04-27 | 2017-04-27 | Appareil et procédé d'affichage de paroles |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2017/017436 WO2018198382A1 (fr) | 2017-04-27 | 2017-04-27 | Appareil et procédé d'affichage de paroles |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018198382A1 (fr) | 2018-11-01 |
Family
ID=63918855
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2017/017436 WO2018198382A1 (fr) | 2017-04-27 | 2017-04-27 | Appareil et procédé d'affichage de paroles |
Country Status (3)
Country | Link |
---|---|
JP (1) | JP6732216B2 (fr) |
CN (1) | CN110546705B (fr) |
WO (1) | WO2018198382A1 (fr) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09146573A (ja) * | 1995-11-21 | 1997-06-06 | Ekushingu:Kk | カラオケ装置 |
JPH10161655A (ja) * | 1996-11-29 | 1998-06-19 | Casio Comput Co Ltd | 楽器のナビゲート装置 |
JP2002000734A (ja) * | 2000-06-22 | 2002-01-08 | Daiichikosho Co Ltd | 音楽療法支援装置 |
JP2006259236A (ja) * | 2005-03-17 | 2006-09-28 | Daiichikosho Co Ltd | 歌詞表示器付き携帯音楽プレーヤ |
JP2009295012A (ja) * | 2008-06-06 | 2009-12-17 | Sharp Corp | 情報表示の制御方法、表示制御プログラムおよび情報表示装置 |
JP2012083563A (ja) * | 2010-10-12 | 2012-04-26 | Yamaha Corp | 音声合成装置およびプログラム |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09146566A (ja) * | 1995-11-20 | 1997-06-06 | Fujitsu General Ltd | カラオケ装置 |
JPH11282464A (ja) * | 1998-03-26 | 1999-10-15 | Roland Corp | 自動演奏装置の表示装置 |
JP4049014B2 (ja) * | 2003-05-09 | 2008-02-20 | ヤマハ株式会社 | 楽譜表示装置および楽譜表示コンピュータプログラム |
JP4735544B2 (ja) * | 2007-01-10 | 2011-07-27 | ヤマハ株式会社 | 歌唱合成のための装置およびプログラム |
JP2012159575A (ja) * | 2011-01-31 | 2012-08-23 | Daiichikosho Co Ltd | 複数歌唱者による歌唱誘導システム |
JP6589356B2 (ja) * | 2015-04-24 | 2019-10-16 | ヤマハ株式会社 | 表示制御装置、電子楽器およびプログラム |
2017
- 2017-04-27 CN CN201780089625.1A patent/CN110546705B/zh active Active
- 2017-04-27 WO PCT/JP2017/017436 patent/WO2018198382A1/fr active Application Filing
- 2017-04-27 JP JP2019515080A patent/JP6732216B2/ja active Active
Also Published As
Publication number | Publication date |
---|---|
CN110546705A (zh) | 2019-12-06 |
JPWO2018198382A1 (ja) | 2019-11-21 |
CN110546705B (zh) | 2023-05-09 |
JP6732216B2 (ja) | 2020-07-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10354625B2 (en) | Digital sight-singing piano with a fixed-solfège keyboard, continuous keys and adjustable tones by kneading piano keys | |
US12183319B2 (en) | Electronic musical instrument, method, and storage medium | |
US20220076651A1 (en) | Electronic musical instrument, method, and storage medium | |
JP6728754B2 (ja) | 発音装置、発音方法および発音プログラム | |
CN111667554B (zh) | 信息处理装置的控制方法、电子设备、演奏数据显示系统 | |
US5399800A (en) | Electronic musical instrument including an apparatus for aurally and visually displaying specification explanations and states of the electronic musical instrument | |
WO2018198382A1 (fr) | Appareil et procédé d'affichage de paroles | |
JP6809608B2 (ja) | 歌唱音生成装置及び方法、プログラム | |
WO2018198380A1 (fr) | Dispositif et procédé d'affichage de paroles de chanson | |
JP6944366B2 (ja) | カラオケ装置 | |
JP6787491B2 (ja) | 音発生装置及び方法 | |
JP7377415B2 (ja) | 情報処理装置、電子楽器、方法及びプログラム | |
JP7338669B2 (ja) | 情報処理装置、情報処理方法、演奏データ表示システム、およびプログラム | |
WO2018198381A1 (fr) | Dispositif de génération de son, procédé et instrument de musique | |
JP2007163710A (ja) | 演奏支援装置及びプログラム | |
WO2019026233A1 (fr) | Dispositif de commande d'effet | |
JP2024089976A (ja) | 電子機器、電子楽器、アドリブ演奏方法及びプログラム | |
JP2017161721A (ja) | 歌詞生成装置および歌詞生成方法 | |
WO2023120121A1 (fr) | Dispositif de modification de longueur de consonne, instrument de musique électronique, système d'instrument de musique, procédé et programme | |
KR20100095226A (ko) | 화성학 연주조건 입력기 및 이를 채택한 화성학 연주 악기 | |
JP2024040846A (ja) | 作曲支援方法、プログラム、および電子機器 | |
WO2019003348A1 (fr) | Dispositif, procédé et programme de génération d'effet sonore de chant | |
CN117877459A (zh) | 记录介质、音响处理方法以及音响处理系统 | |
JP2020144345A (ja) | 情報処理装置、情報処理方法、およびプログラム | |
JP2007240556A (ja) | 楽譜表示装置及びプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17907183 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2019515080 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17907183 Country of ref document: EP Kind code of ref document: A1 |