Saturday 20 May 2017

Elasticsearch: Understanding how to optimize write-heavy operations so reads aren't impacted

We've got a NodeJS application with an Elasticsearch back-end that is very lightly used 90% of the time and occasionally absolutely slammed. In a typical hour it might receive 50-100 read requests and 1-2 write requests. At peak times it might receive 50,000 read requests and 30,000 write requests.

During these peak times we're running into a situation where there are so many write requests that the re-indexing, etc., slows even the read requests to a crawl, which makes the website unresponsive. To handle this type of load we clearly need to either somehow optimize Elasticsearch or re-architect the application, and I'm trying to figure out how best to do that.

What I'd like to understand better is:

1) What is happening on a write operation that seems to kill everything, and what options are available to optimize or speed that up?
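
For example, is adjusting index settings such as refresh_interval around the burst the sort of thing that would help? A rough sketch of what I mean, using the legacy 'elasticsearch' Node client (the index name 'items' and the host are just placeholders):

const elasticsearch = require('elasticsearch');
const client = new elasticsearch.Client({ host: 'localhost:9200' });

// Before a heavy write burst: refresh less often, so new segments are
// created (and merged) less frequently, which is where a lot of the
// indexing cost seems to go.
function relaxRefresh() {
  return client.indices.putSettings({
    index: 'items',
    body: { index: { refresh_interval: '30s' } } // the default is 1s
  });
}

// After the burst: restore the near-real-time behaviour for searches.
function restoreRefresh() {
  return client.indices.putSettings({
    index: 'items',
    body: { index: { refresh_interval: '1s' } }
  });
}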

2) I can tell that, from a code standpoint, I can insert records faster by using bulk operations, but I'm wondering whether the way Elasticsearch handles indexing for those bulk requests is actually less efficient for the system as a whole. Should I see significantly better performance (specifically on the read side of things) if we get rid of bulk inserts, or at least make each insert smaller? Anything that helps me understand how this change might impact things would be helpful.
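
To make question #2 concrete, here is roughly the kind of change I'm considering: splitting one huge bulk request into smaller, capped batches. The index name 'items', the type 'doc', and the batch size of 500 are just illustrative:

async function bulkIndexInBatches(client, docs, batchSize = 500) {
  for (let i = 0; i < docs.length; i += batchSize) {
    const slice = docs.slice(i, i + batchSize);
    const body = [];
    slice.forEach(doc => {
      // each document needs an action line followed by the document itself
      body.push({ index: { _index: 'items', _type: 'doc', _id: doc.id } });
      body.push(doc);
    });
    const resp = await client.bulk({ body });
    if (resp.errors) {
      // resp.items lists per-document failures, so one bad record
      // doesn't have to fail the whole batch
      console.warn('some documents in this batch failed to index');
    }
  }
}

Would many small batches like this leave more room for reads, or does the per-request overhead make things worse overall?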

3) Is there any way to divide up the read/write operations so that even if the write operations are backed up, the read operations still continue to work?

e.g. I was thinking of using a message queue rather than direct Elasticsearch inserts, but again, back to question #2, I'm not sure how to tune this so the read operations continue to work.
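
To sketch what I mean by a queue (purely hypothetical: RabbitMQ via the 'amqplib' package, a queue called 'es-writes', and a batch size of 200 are all made up for illustration), the web tier would enqueue instead of writing to Elasticsearch directly, and a separate worker would drain the queue into bulk requests at a controlled pace:

const amqp = require('amqplib');

// Web tier: enqueue the document instead of indexing it in the request path.
function enqueueWrite(channel, doc) {
  channel.sendToQueue('es-writes', Buffer.from(JSON.stringify(doc)), { persistent: true });
}

// Worker tier: consume messages and index them in bulk, off the request path.
async function startWorker(esClient) {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('es-writes', { durable: true });
  ch.prefetch(200); // cap how many unacknowledged messages the worker holds

  let pending = [];
  await ch.consume('es-writes', async (msg) => {
    pending.push(msg);
    if (pending.length >= 200) { // a real worker would also flush on a timer
      const batch = pending;
      pending = [];
      const body = [];
      batch.forEach(m => {
        body.push({ index: { _index: 'items', _type: 'doc' } });
        body.push(JSON.parse(m.content.toString()));
      });
      await esClient.bulk({ body });
      batch.forEach(m => ch.ack(m));
    }
  });
}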

e.g. Is there a way to do the inserts into a different cluster than the reads, and then merge the data? Would this be more or less efficient?
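
What I'm picturing for that is something like the following (hypothetical host names; keeping the two clusters in sync, e.g. via snapshot/restore or reindex-from-remote, is the part I don't know how to do well):

const elasticsearch = require('elasticsearch');

// All indexing goes to one cluster and all searches to another, so read
// traffic never queues behind a flood of writes on the same nodes.
const writeClient = new elasticsearch.Client({ host: 'http://ingest-cluster:9200' });
const readClient  = new elasticsearch.Client({ host: 'http://search-cluster:9200' });

function indexDoc(doc) {
  return writeClient.index({ index: 'items', type: 'doc', id: doc.id, body: doc });
}

function search(text) {
  return readClient.search({
    index: 'items',
    body: { query: { match: { title: text } } }
  });
}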

Thank you for your help.



via Doug
