it blocked, apparently waiting for the current file to finish
uploading on node 2. I thought there wasn't any locking? Was it
waiting for a quorum?
When I came back from lunch I found that node 2 had died not long after, with this:
binary_alloc: Cannot allocate 438689176 bytes of memory (of type "binary").
(But I see in the FAQ that 50MB is the recommended maximum document size.)
Are there any tracing or debugging facilities that I can use to
diagnose latencies or execution plans?
> On the macbook I looped across 400meg files using bash and curl to
> upload them as documents into a bucket:
There are other details in your post that I might comment on, but I
will focus on the main point.
What you describe here simply will not work: single documents of that
size are going to cause problems in Riak. If large file storage is your
main use case, there is an interface atop Riak ("Luwak") that can
handle such things just fine.
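Since you were already driving the uploads with bash and curl, a sketch of
what the Luwak-style upload loop might look like is below. It only prints
the curl command rather than executing it (no Riak node is assumed to be
running), and the host, port, and file name are assumptions, not values
from your setup:

```shell
# Hypothetical file to upload; in your loop this would come from a glob.
FILE="big-file.bin"
KEY="$(basename "$FILE")"
# Default Riak HTTP port is assumed here; adjust for your cluster.
URL="http://127.0.0.1:8098/luwak/$KEY"
# Dry run: print the command instead of running it. --data-binary streams
# the file body verbatim, which is what you want for opaque blobs.
echo curl -X PUT -H "Content-Type: application/octet-stream" \
     --data-binary @"\$FILE" "$URL"
```

The key difference from your original loop is the /luwak path rather than
a bucket/key path, so the file is chunked by Luwak instead of being held
in memory as one giant binary.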