<html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"></head><body dir="auto"><div>I'm just curious: what simulation program are you running? I've used a number in the past that also utilize GPUs for processing. <br><br>James F.</div><div><br>On Dec 12, 2013, at 11:28 PM, David <<a href="mailto:ainut@knology.net">ainut@knology.net</a>> wrote:<br><br></div><blockquote type="cite"><div>
IIRC, the good thing about this cluster is the automagic load
leveling. Your existing binary may not run at maximum optimization, but
if the task can be spread among processors, Beowulf does a nice job
of it. If each computer has its own GPU(s), then all the better.<br>
<br>
You can test it right there without changing anything on the existing
machines' disks. Just boot all the cluster members off a live CD.<br>
<br>
Then, to test, pick the fastest one of them (maybe even your existing
Xeon box), run your benchmark, and record the execution time; then boot
all the other machines in the cluster and run it again. There are only
about two dozen steps to set it up. One professor even packaged most of
those, along with automatic cluster setup(!), into a downloadable image
you can boot from. That leaves half a dozen steps to tie the
cluster together, and then you're good to go. I have one of those CDs
around here somewhere and can get details if you're interested.
Something to play with. I did it with only four PCs around the house
and some code, and even though the code was never designed for a
cluster (just multiprocessing), I got about a 40% decrease in
execution time. The code was almost completely linear, so
I'm surprised it got any improvement at all, but it did.<br>
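<br>
The timing part is nothing fancy; a little sketch like this is all it
takes (purely illustrative; "./your_benchmark" is a stand-in for whatever
command you actually run, and nothing here is Beowulf-specific since the
cluster does its thing underneath):<br>
<pre>
import subprocess, time

def timed(cmd):
    # Run a command to completion and return its wall-clock time in seconds.
    start = time.time()
    subprocess.run(cmd, check=True)
    return time.time() - start

# Run once with only the fastest box up, then again after booting the other
# machines off the live CD, and compare the two printed times.
elapsed = timed(["./your_benchmark"])   # placeholder for your real benchmark
print(f"wall-clock time: {elapsed:.1f} s")
</pre>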
<br>
David<br>
<br>
<br>
<div class="moz-cite-prefix">Stephan Henning wrote:<br>
</div>
<blockquote cite="mid:CACu1UD6OtetCAmXAtpz9qLWDNt==pUEP8ba3vYDcVw3WcsGcpQ@mail.gmail.com" type="cite">
<div dir="ltr">-WD
<div><br>
</div>
<div>I believe it's either ext3 or ext4, I'd have to ssh in and
check when I get back on Monday. </div>
<div><br>
</div>
<div>-David</div>
<div><br>
</div>
<div>I'll check into the Beowulf and see what that would entail.
I'll try and talk with the developer and see what their
thoughts are on the feasibility of running it on a cluster.
They may have already gone down this path and rejected it, but
I'll check anyway. </div>
</div>
<div class="gmail_extra"><br>
<br>
<div class="gmail_quote">On Thu, Dec 12, 2013 at 6:16 PM, David
<span dir="ltr"><<a moz-do-not-send="true" href="mailto:ainut@knology.net" target="_blank">ainut@knology.net</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
Sounds like a perfect candidate for a Beowulf cluster to me.
There are possibly some gotchas, but you'll have the same
problems with just a single computer.<br>
<br>
Velly intewesting.<br>
<br>
Stephan Henning wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="im">
-WD<br>
<br>
The GPUs are sent data in chunks that they then process
and return. The time it takes a GPU to process a chunk
can vary, so I assume the bottlenecks we were seeing
came when several of the GPU cores finished at about
the same time and requested new chunks that weren't
already in RAM, so the drive array would
take a heavy hit.<br>
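<br>
(Just to sketch the pattern, not our actual code: one way to soften that
would be a reader thread that stages chunks from the drive array into a
bounded RAM buffer ahead of the GPUs, so simultaneous requests hit memory
instead of the disks. The chunk filenames and the GPU call below are
made-up placeholders.)<br>
<pre>
import queue, threading

CHUNK_FILES = [f"chunk_{i:04d}.bin" for i in range(1000)]  # hypothetical chunk files
staged = queue.Queue(maxsize=8)                            # bounded in-RAM staging area

def prefetch():
    # Reader thread: keeps the staging queue topped up so GPUs rarely wait on disk.
    for path in CHUNK_FILES:
        with open(path, "rb") as f:
            staged.put(f.read())        # blocks while the RAM buffer is full
    staged.put(None)                    # end-of-work sentinel

def process_on_gpu(chunk):
    # Stand-in for the real GPU dispatch; here we only report the chunk size.
    print(f"dispatched {len(chunk)} bytes")

threading.Thread(target=prefetch, daemon=True).start()
while (chunk := staged.get()) is not None:
    process_on_gpu(chunk)
</pre>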
<br>
Beyond that, I can't really give you a numerical value
for the amount of data they are dumping onto the PCIe
bus.<br>
<br>
<br>
-David<br>
<br>
Yeah, I'm not sure an FPGA exists that's large enough for this;
it would be interesting, though.<br>
<br>
While the process isn't entirely sequential, data
previously processed is reused in the processing of
other data, so that has kept us away from trying a
cluster approach.<br>
<br>
Depending on the problem, anywhere from minutes per
iteration to weeks per iteration. The weeks-long
problems are sitting at about 3TB, I believe. We've only
run benchmark problems on the SSDs until now, so we
haven't yet seen how they behave once
they start really getting full.<br>
<br>
</div>
Sadly, 2TB of RAM would not be enough. I looked into this
HP box (<a moz-do-not-send="true" href="http://www8.hp.com/us/en/products/proliant-servers/product-detail.html?oid=4231377#%21tab=features" target="_blank">http://www8.hp.com/us/en/products/proliant-servers/product-detail.html?oid=4231377#!tab=features</a>)
that would take 4TB, but the costs were insane and it
can't support enough GPUs to actually do anything with the
RAM...<br>
<br>
<br>
<br>
</blockquote>
<<<snip>>>
<div class="HOEnZb">
<div class="h5"><br>
<br>
</div>
</div>
</blockquote>
</div>
<br>
</div>
<br>
</blockquote>
<br>
</div></blockquote><blockquote type="cite"><div><span>_______________________________________________</span><br><span>General mailing list</span><br><span><a href="mailto:General@lists.makerslocal.org">General@lists.makerslocal.org</a></span><br><span><a href="http://lists.makerslocal.org/mailman/listinfo/general">http://lists.makerslocal.org/mailman/listinfo/general</a></span></div></blockquote></body></html>