<div dir="ltr">-WD<div><br></div><div>I believe it's either ext3 or ext4; I'd have to SSH in and check when I get back on Monday. </div><div><br></div><div>-David</div><div><br></div><div>I'll check into the Beowulf idea and see what that would entail. I'll try to talk with the developer and see what their thoughts are on the feasibility of running it on a cluster. They may have already gone down this path and rejected it, but I'll check anyway. </div>
</div><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, Dec 12, 2013 at 6:16 PM, David <span dir="ltr"><<a href="mailto:ainut@knology.net" target="_blank">ainut@knology.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Sounds like a perfect candidate for a Beowulf cluster to me. There are possibly some gotchas, but you'll have the same problems with just a single computer.<br>
<br>
Velly intewesting.<br>
<br>
Stephan Henning wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im">
-WD<br>
<br>
The GPUs are sent data in chunks that they then process and return. The time it takes a GPU to process a chunk can vary, so I assume the bottlenecks we were seeing occurred when several of the GPU cores finished at about the same time and requested a new chunk that wasn't already in RAM, so the drive array would take a heavy hit.<br>
<br>
Beyond that, I can't really give you a numerical value for the amount of data they are dumping onto the PCIe bus.<br>
<br>
<br>
-David<br>
<br>
Yeah, I'm not sure an FPGA exists that's large enough for this; it would be interesting, though.<br>
<br>
While the process isn't entirely sequential, previously processed data is reused in the processing of other data, so that has kept us away from trying a cluster approach.<br>
<br>
Depending on the problem, anywhere from minutes per iteration to weeks per iteration. The weeks-long problems are sitting at about 3TB, I believe. We've only run benchmark problems on the SSDs up until now, so we haven't had the experience of seeing how they react once they start really getting full.<br>
<br></div>
Sadly, 2TB of RAM would not be enough. I looked into this HP box (<a href="http://www8.hp.com/us/en/products/proliant-servers/product-detail.html?oid=4231377#!tab=features" target="_blank">http://www8.hp.com/us/en/products/proliant-servers/product-detail.html?oid=4231377#!tab=features</a>) that would take 4TB, but the costs were insane and it can't support enough GPUs to actually do anything with the RAM...<br>
<br>
<br>
<br>
</blockquote>
<<<snip>>><div class="HOEnZb"><div class="h5"><br>
<br>
_______________________________________________<br>
General mailing list<br>
<a href="mailto:General@lists.makerslocal.org" target="_blank">General@lists.makerslocal.org</a><br>
<a href="http://lists.makerslocal.org/mailman/listinfo/general" target="_blank">http://lists.makerslocal.org/mailman/listinfo/general</a><br>
</div></div></blockquote></div><br></div>