ref: a06e9a96ad85d545facb38fc6d3c77c28d58526d
parent: ba46bb93daf823fa9fb373a415c85562461b9ee6
author: Jean-Marc Valin <jmvalin@jmvalin.ca>
date: Tue Jan 1 09:40:00 EST 2019
doc update
--- a/dnn/README.md
+++ b/dnn/README.md
@@ -19,13 +19,13 @@
1. Generate training data:
```
make dump_data
- ./dump_data -train input.s16 features.f32 pcm.s16
+ ./dump_data -train input.s16 features.f32 data.u8
```
where the first file contains 16 kHz 16-bit raw PCM audio (no header) and the other files are output files. This program makes several passes over the data with different filters to generate a large amount of training data.
1. Now that you have your files, train with:
```
- ./train_lpcnet.py features.f32 pcm.s16
+ ./train_lpcnet.py features.f32 data.u8
```
and it will generate a wavenet*.h5 file for each iteration. If it stops with a
"Failed to allocate RNN reserve space" message, try reducing the *batch\_size* variable in train_wavenet_audio.py.
--