Random Vector Accumulator

  • The accumulator runs out of memory when it is run on the whole corpus.
  • Generating the index vectors first and saving them to disk with dbm, then building the embeddings in a second pass, might take a bit longer but should use less memory.
  • Since the embeddings are weighted sums of index vectors, generating the lexical memory vectors in batches and summing the batch embeddings afterwards should also fix the issue; a sketch of both ideas follows this list.
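A minimal sketch of both remedies, assuming a random-indexing style setup: sparse ternary index vectors are generated once and stored on disk with dbm, and the lexical memory (embedding) vectors are then accumulated as weighted sums of those index vectors, one batch of sentences at a time. The dimensionality, sparsity, window size, 1/distance weighting, and function names here are illustrative assumptions, not the actual implementation.

```python
import dbm
import pickle
from collections import defaultdict

import numpy as np

DIM = 2000     # dimensionality of the index vectors (assumed)
NONZERO = 8    # number of non-zero (+1/-1) components per index vector (assumed)


def build_index_vectors(vocabulary, path="index_vectors.dbm", seed=0):
    """First pass: generate one sparse index vector per word and store it on disk."""
    rng = np.random.default_rng(seed)
    with dbm.open(path, "c") as db:
        for word in vocabulary:
            key = word.encode("utf-8")
            if key not in db:
                vec = np.zeros(DIM, dtype=np.int8)
                positions = rng.choice(DIM, size=NONZERO, replace=False)
                vec[positions] = rng.choice([-1, 1], size=NONZERO)
                db[key] = pickle.dumps(vec)


def accumulate_batch(sentences, db, window=2):
    """Accumulate weighted sums of on-disk index vectors for one batch of sentences."""
    memory = defaultdict(lambda: np.zeros(DIM, dtype=np.float32))
    for tokens in sentences:
        for i, word in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j == i:
                    continue
                key = tokens[j].encode("utf-8")
                if key not in db:          # skip words without an index vector
                    continue
                weight = 1.0 / abs(i - j)  # weight context words by distance (assumed)
                memory[word] += weight * pickle.loads(db[key])
    return memory


def build_embeddings(corpus_batches, path="index_vectors.dbm"):
    """Second pass: sum the per-batch accumulators into the final embeddings."""
    embeddings = defaultdict(lambda: np.zeros(DIM, dtype=np.float32))
    with dbm.open(path, "r") as db:
        for batch in corpus_batches:
            for word, vec in accumulate_batch(batch, db).items():
                embeddings[word] += vec
    return embeddings
```

The point of the two passes is that neither the full set of index vectors nor the full set of dense accumulators has to sit in memory at once: the index vectors live in the dbm file, and each batch accumulator only holds entries for the words seen in that batch before it is folded into the running total.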