I know that many English producers primarily look up to the famous Japanese producers in Vocaloid, which isn’t necessarily a bad thing. Japanese producers often seem to be the only ones making Vocaloid music at a professional or classically trained level, so there’s a lot to learn from them. However, one thing often gets overlooked when learning this way.
Japanese doesn’t have the same sort of stress pattern that English does. For example, in English, if you say the word “turkey”:
“TURkey”
Sounds normal and natural, while
“turKEY”
Sounds strange. We naturally put more emphasis on certain syllables, or hold them longer than the others, and that carries over into English songs.
Additionally, putting the stress on a different syllable can change the meaning of a word. For example, compare the noun “REbel” with the verb “reBEL”:
“Darth Vader decided to crush the rebel soldier.”
Vs
“Luke will rebel against his father’s wishes.”
Japanese doesn’t have the same sort of system (it uses pitch accent rather than stress), so in songs, words can be arranged much more freely.
You can see this in pretty much any English-lyric version of a Japanese song: the arrangement of syllables sounds slightly awkward or unnatural, and it often sounds like too many words are stuffed into one part.
(Not to pick on any translyricist, though; it’s not due to a lack of skill but to the limitations of the language. Translyricism is hard.)
I feel that original English Voca/UTAU songs often have the same effect. This also contributes to the difficulty of understanding a song’s lyrics (on top of the limitations of the vocal synth), and it’s why a song can sound like another language even though it’s in English.
Aside from being a little grating, this oversight also makes us miss many opportunities for wordplay and other clever things you can do with stress and meter.
I’m a little blank on examples at the moment, but I think the best examples of these kinds of tricks often come from rap songs.
This type of wordplay is also used very often in musicals (though again, I’m a little blank on examples at the moment).
It’s often a little hard to think about, since it’s a slightly complicated concept in general, and on top of that we’re working with singing robots. Because of that, it can be hard to remember how we would actually say words naturally in real life.
But I think if we paid a little more attention to this, it could really help improve our music.
Just wanted to point that out, since it’s something I noticed.