# Download the dataset
songs = mdl.lab1.load_training_data()

# Print one of the songs to inspect it in greater detail!
example_song = songs[1]
Example song:

X:2
T:An Buachaill Dreoite
Z: id:dc-hornpipe-2
M:C|
L:1/8
K:G Major
GF|DGGBd2GB|d2GFGc (3AGF|DGGBd2GB|dBcAF2GF|!
DGGBd2GF|DGGFG2Ge|fgafgbag|fdcAG2:|!
GA|B2BGc2cA|d2GFG2GA|B2BGc2cA|d2DEF2GA|!
B2BGc2cA|d^cdef2 (3def|g2gfgbag|fdcAG2:|!
Hi, MIT Deep Learning!
Your Music Generation lab is so interesting. Thanks very much for building this and sharing it with the world!
I have one question: given that the LSTM is trained on 817 songs, is it possible that it simply memorizes each song verbatim and can reproduce a training song exactly from end to end? Or is it learning a generalizable pattern for predicting the next characters? I assume it is the latter.
I ask because when I played the 2nd example song (see above), it sounded identical to the one a former student shared via Twitter (https://twitter.com/AnaWhatever16/status/1263092914680410112?s=20). Is that even possible?
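One way to check this directly is to search the training corpus for the generated tune. Below is a minimal sketch: `find_verbatim_match` is my own helper (not part of the lab), `songs` stands for the list of ABC strings returned by mdl.lab1.load_training_data(), and `generated` is a hypothetical generated ABC string.

```python
# Check whether a generated ABC string is an exact copy of (part of) any
# training song. The helper and variable names here are illustrative, not
# from the lab itself.

def find_verbatim_match(generated, songs):
    """Return the index of a training song that contains the generated
    tune verbatim, or None if no song does."""
    # Strip all whitespace so formatting differences don't hide a copy.
    needle = "".join(generated.split())
    for i, song in enumerate(songs):
        if needle and needle in "".join(song.split()):
            return i
    return None

# Tiny toy corpus for demonstration:
songs = ["X:1\nT:Tune A\nK:D\nDEFG ABcd|", "X:2\nT:Tune B\nK:G\nGABc defg|"]
generated = "GABc defg|"
print(find_verbatim_match(generated, songs))  # 1 -> exact copy of song 2
```

If this returns an index for a real generated song, the model reproduced a training tune; if it returns None for every sample, the outputs are at least not verbatim copies.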
That said, I don't see where the original data is split into training/validation/test sets to evaluate overfitting and generalization. The only sampling I can find is the line "idx = np.random.choice(n-seq_length, batch_size)" in get_batch(), which draws random subsequences from the full corpus. Since training only ever sees these short random windows, I would guess it is unlikely that the LSTM learns any of the 817 songs verbatim.
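For reference, get_batch() samples random fixed-length windows rather than iterating over whole songs, and a song-level hold-out split could be added before vectorization. A minimal sketch follows; the get_batch logic mirrors the lab's sampling line quoted above, while split_songs is my own addition, not part of the lab:

```python
import numpy as np

def get_batch(vectorized_songs, seq_length, batch_size):
    """Sample batch_size random windows of length seq_length: inputs are
    characters [i, i+seq_length), targets are the same window shifted by one."""
    n = vectorized_songs.shape[0] - 1
    idx = np.random.choice(n - seq_length, batch_size)  # random start positions
    x_batch = np.array([vectorized_songs[i : i + seq_length] for i in idx])
    y_batch = np.array([vectorized_songs[i + 1 : i + seq_length + 1] for i in idx])
    return x_batch, y_batch

def split_songs(songs, val_fraction=0.1, seed=0):
    """Hold out whole songs for validation (hypothetical; not in the lab)."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(songs))
    n_val = max(1, int(len(songs) * val_fraction))
    val_idx = set(order[:n_val].tolist())
    train = [s for i, s in enumerate(songs) if i not in val_idx]
    val = [s for i, s in enumerate(songs) if i in val_idx]
    return train, val
```

Holding out whole songs (rather than random windows) is what would let one measure whether the model generalizes to tunes it has never seen.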