
Collect samples from generator and store in rds or pickle file.
Source: R/generator_utils.R, dataset_from_gen.Rd

Repeatedly generate samples with a data generator and store the output. Creates a separate rds or pickle file in output_path for each batch.
Usage
dataset_from_gen(
  output_path,
  iterations = 10,
  train_type = "lm",
  output_format = "target_right",
  path_corpus,
  batch_size = 32,
  maxlen = 250,
  step = NULL,
  vocabulary = c("a", "c", "g", "t"),
  shuffle = FALSE,
  set_learning = NULL,
  seed = NULL,
  random_sampling = FALSE,
  store_format = "rds",
  file_name_start = "batch_",
  masked_lm = NULL,
  ...
)Arguments
- output_path
- Output directory. Output files will be named output_path + file_name_start + x + ".rds" or ".pickle", where x is an index (from 1 to iterations) and the file ending depends on the store_format argument.
- iterations
- Number of batches (output files) to create. 
- train_type
- Either "lm", "lm_rds", "masked_lm" for a language model; "label_header", "label_folder", "label_csv", "label_rds" for classification; or "dummy_gen". A language model is trained to predict character(s) in a sequence.
- "label_header"/"label_folder"/"label_csv" are trained to predict a corresponding class given a sequence as input.
- If "label_header", the class will be read from fasta headers.
- If "label_folder", the class will be read from the folder, i.e. all files in one folder must belong to the same class.
- If "label_csv", targets are read from a csv file. This file should have one column named "file". The targets then correspond to the entries in that row (except the "file" column). Example: if we are currently working with a file called "a.fasta" and the corresponding label is "label_1", there should be a row in our csv file:
  file       label_1  label_2
  "a.fasta"  1        0
- If "label_rds", the generator will iterate over a set of .rds files, each containing a list of input and target tensors. Not implemented for models with multiple inputs.
- If "lm_rds", the generator will iterate over a set of .rds files and will split the tensor according to the target_len argument (targets are the last target_len nucleotides of each sequence).
- If "dummy_gen", the generator creates random data once and repeatedly feeds these to the model.
- If "masked_lm", the generator masks some parts of the input. See the masked_lm argument for details.
 
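For train_type = "label_csv", the expected csv layout can be sketched as below. The file and column names are illustrative only; your csv must match your own fasta file names and class labels.

```r
# Hypothetical label table for train_type = "label_csv": one row per
# fasta file, one-hot columns for each class (names are examples).
label_df <- data.frame(
  file    = c("a.fasta", "b.fasta"),
  label_1 = c(1, 0),
  label_2 = c(0, 1)
)
csv_path <- file.path(tempdir(), "labels.csv")
write.csv(label_df, csv_path, row.names = FALSE)
# Sanity check: the "file" column plus one column per class
stopifnot(identical(names(read.csv(csv_path)),
                    c("file", "label_1", "label_2")))
```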
- output_format
- Determines the shape of the output tensor for a language model. Either "target_right", "target_middle_lstm", "target_middle_cnn" or "wavenet". Assume a sequence "AACCGTA". The outputs correspond as follows:
- "target_right": X = "AACCGT", Y = "A"
- "target_middle_lstm": X = (X_1 = "AAC", X_2 = "ATG"), Y = "C" (note the reversed order of X_2)
- "target_middle_cnn": X = "AACGTA", Y = "C"
- "wavenet": X = "AACCGT", Y = "ACCGTA"
 
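The splits above can be reproduced in base R for the example sequence "AACCGTA"; this is only an illustration of the documented formats, not the package's internal encoding.

```r
# Illustration of the output_format splits for the sequence "AACCGTA"
seq <- "AACCGTA"

# "target_right": predict the last character from the preceding ones
x_right <- substr(seq, 1, 6)   # "AACCGT"
y_right <- substr(seq, 7, 7)   # "A"

# "target_middle_cnn": predict the middle character from the rest
x_cnn <- paste0(substr(seq, 1, 3), substr(seq, 5, 7))  # "AACGTA"
y_cnn <- substr(seq, 4, 4)                             # "C"

# "target_middle_lstm": second input is the right part, reversed
x1_lstm <- substr(seq, 1, 3)   # "AAC"
x2_lstm <- paste(rev(strsplit(substr(seq, 5, 7), "")[[1]]), collapse = "")

stopifnot(x_right == "AACCGT", y_right == "A",
          x_cnn == "AACGTA", y_cnn == "C",
          x1_lstm == "AAC", x2_lstm == "ATG")
```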
- path_corpus
- Input directory where fasta files are located, or path to a single file ending with fasta or fastq (as specified in the format argument). Can also be a list of directories and/or files.
- batch_size
- Number of samples in one batch. 
- maxlen
- Length of predictor sequence. 
- step
- How often to take a sample. 
- vocabulary
- Vector of allowed characters. Characters outside the vocabulary get encoded as specified in ambiguous_nuc.
- shuffle
- Whether to shuffle samples within each batch. 
- set_learning
- When you want to assign one label to a set of samples. Only implemented for train_type = "label_folder". Input is a list with the following parameters:
- samples_per_target: how many samples to use for one target.
- maxlen: length of one sample.
- reshape_mode: "time_dist", "multi_input" or "concat".
- If reshape_mode is "multi_input", the generator will produce samples_per_target separate inputs, each of length maxlen (the model should have samples_per_target input layers).
- If reshape_mode is "time_dist", the generator will produce a 4D input array. The dimensions correspond to (batch_size, samples_per_target, maxlen, length(vocabulary)).
- If reshape_mode is "concat", the generator will concatenate samples_per_target sequences of length maxlen into one long sequence.

- If reshape_mode is "concat", there is an additional buffer_len argument. If buffer_len is an integer, the subsequences are interspaced with buffer_len rows. The input length is (maxlen * samples_per_target) + buffer_len * (samples_per_target - 1).
 
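The "concat" input-length formula can be checked with a short sketch; the parameter values here are illustrative, not defaults.

```r
# Input length for reshape_mode = "concat" (values are illustrative):
# samples_per_target sequences of length maxlen, interspaced with
# buffer_len rows between consecutive sequences.
samples_per_target <- 4
maxlen <- 100
buffer_len <- 2

input_len <- maxlen * samples_per_target +
  buffer_len * (samples_per_target - 1)
stopifnot(input_len == 406)  # 4*100 + 2*3

# Corresponding set_learning list as described above
set_learning <- list(samples_per_target = samples_per_target,
                     maxlen = maxlen,
                     reshape_mode = "concat",
                     buffer_len = buffer_len)
```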
- seed
- Sets seed for the set.seed function for reproducible results.
- random_sampling
- Whether samples should be taken from random positions when using the max_samples argument. If FALSE, samples are taken from a consecutive subsequence.
- store_format
- Either "rds" or "pickle". 
- file_name_start
- Start of output file names. 
- masked_lm
- If not NULL, input and target are equal except some parts of the input are masked or random. Must be a list with the following arguments:
- mask_rate: rate of input to mask (rate of input to replace with mask token).
- random_rate: rate of input to set to a random token.
- identity_rate: rate of input where sample weights are applied but input and output are identical.
- include_sw: whether to include sample weights.
- block_len (optional): masked/random/identity regions appear in blocks of size block_len.
 
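A masked_lm configuration might be assembled as below; the rates are illustrative choices, not package defaults.

```r
# Example masked_lm list (rates are illustrative): 15% of positions
# masked, 5% replaced by a random token, 5% kept identical but
# included in the sample weights, with perturbed regions in blocks of 3.
masked_lm <- list(mask_rate     = 0.15,
                  random_rate   = 0.05,
                  identity_rate = 0.05,
                  include_sw    = TRUE,
                  block_len     = 3)
stopifnot(masked_lm$mask_rate + masked_lm$random_rate +
            masked_lm$identity_rate <= 1)
```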
- ...
- Further generator options. See get_generator.
Examples
if (FALSE) { # reticulate::py_module_available("tensorflow")
# create dummy fasta files
temp_dir <- tempfile()
dir.create(temp_dir)
create_dummy_data(file_path = temp_dir,
                  num_files = 3,
                  seq_length = 8, 
                  num_seq = 2)
# extract samples
out_dir <- tempfile()
dir.create(out_dir)
dataset_from_gen(output_path = out_dir,
                 iterations = 10,
                 train_type = "lm",
                 output_format = "target_right",
                 path_corpus = temp_dir, 
                 batch_size = 32,
                 maxlen = 5,
                 step = 1,
                 file_name_start = "batch_")
list.files(out_dir)
}
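Assuming the example above has run, a stored batch can be read back with readRDS; the exact element layout of the stored list follows the generator output.

```r
# Load the first stored batch back into R (store_format = "rds");
# requires out_dir from the example above to exist and be populated.
batch_file <- list.files(out_dir, full.names = TRUE)[1]
batch <- readRDS(batch_file)
str(batch, max.level = 1)  # inspect input/target structure
```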