<p dir="ltr">Woohoo!</p>
<div class="gmail_quote">On Dec 15, 2013 6:48 PM, "Kyle Centers" <<a href="mailto:kylecenters@gmail.com">kylecenters@gmail.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<p dir="ltr">Jeff Cotten says if this thread gets to 50 messages, he'll throw a party. So. This is my contribution?</p>
<div class="gmail_quote">On Dec 15, 2013 5:37 PM, "James Fluhler" <<a href="mailto:j.fluhler@gmail.com" target="_blank">j.fluhler@gmail.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">

<div dir="auto"><div>Thanks for the link I will check it out!<br><br>James F.</div><div><br>On Dec 15, 2013, at 4:04 PM, Stephan Henning <<a href="mailto:shenning@gmail.com" target="_blank">shenning@gmail.com</a>> wrote:<br>

<br></div><blockquote type="cite"><div><div dir="ltr">-WD<div><br></div><div>I'll check the arrays and see what they are currently formatted as, it's not a big deal to reformat one of these arrays, so that something that can be changed quick and easy.</div>
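<div><br></div><div>(Illustrative only: a minimal sketch, assuming a Linux box and a made-up /data mount point, of reading what an array is currently formatted as straight out of /proc/mounts.)</div>
<pre>
#!/usr/bin/env python3
# Rough sketch only: report what a mount point is formatted as by reading
# /proc/mounts (Linux). "/data" is a made-up stand-in for the real array.
MOUNT_POINT = "/data"

def fs_type(mount_point):
    with open("/proc/mounts") as mounts:
        for line in mounts:
            device, mnt, fstype = line.split()[:3]
            if mnt == mount_point:
                return fstype
    return None

print(MOUNT_POINT, "is formatted as", fs_type(MOUNT_POINT))
</pre>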



<div><br></div><div>Eh, I'm not involved in the development, but I'll bring it up, and if it's something that hasn't been considered I'll put some pressure on them to look into it. </div><div><br></div><div>



<br></div><div><div>-James</div></div><div><a href="http://www.ierustech.com/product/v-lox/" target="_blank">http://www.ierustech.com/product/v-lox/</a><br></div><div><br></div><div>It's internally built; it just got rolled out to market. </div>



<div><br></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Sun, Dec 15, 2013 at 2:04 PM, James Fluhler <span dir="ltr"><<a href="mailto:j.fluhler@gmail.com" target="_blank">j.fluhler@gmail.com</a>></span> wrote:<br>



<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="auto"><div>I have not heard of VLOX before and a quick google search turned up nothing? Is it commercially available or internally built? I've typically used NEC, GEMS, EMDS, and Genesys, for eMag simulation work. </div>



<div><br></div><div>Just curious but where do you work haha</div><div><br>James F.</div><div><div><div><br>On Dec 13, 2013, at 11:13 AM, Stephan Henning <<a href="mailto:shenning@gmail.com" target="_blank">shenning@gmail.com</a>> wrote:<br>



<br></div><blockquote type="cite"><div><div dir="ltr">Method of Moment, Computational ElectroMagnertics. <div><br></div><div>Program is called Vlox</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Fri, Dec 13, 2013 at 10:47 AM, David <span dir="ltr"><<a href="mailto:ainut@knology.net" target="_blank">ainut@knology.net</a>></span> wrote:<br>





<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
  
    
  
  <div bgcolor="#FFFFFF" text="#000000">
    MoM CEM vlox -- could you expand those acronyms, please?  Is this a
    logistics planning tool?<div><div><br>
    <br>
    <br>
    <div>Stephan Henning wrote:<br>
    </div>
    <blockquote type="cite">
      <div dir="ltr">-David
        <div><br>
        </div>
        <div>Hmm, sounds interesting. The problem is already distributed a
          little: you can think of what is being done as a form of Monte
          Carlo, so the same run gets repeated many times with light
          parameter adjustments. Each of these can be distributed out to
          the compute nodes very easily; currently this is being done
          with Condor.</div>
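        <div><br></div>
        <div>(Illustrative only: a minimal sketch of that kind of sweep, with one Condor job per perturbed repeat of the same run. run_case.sh and the --seed argument are made-up placeholders, not the real solver interface.)</div>
<pre>
#!/usr/bin/env python3
# Rough sketch: farm a Monte-Carlo-style parameter sweep out to Condor,
# one job per perturbation of the same run. run_case.sh and --seed are
# placeholders for the real executable and its parameter handling.
import subprocess

N_CASES = 100  # number of perturbed repeats of the same run

submit_description = """\
universe   = vanilla
executable = run_case.sh
arguments  = --seed $(Process)
output     = case_$(Process).out
error      = case_$(Process).err
log        = sweep.log
queue {n}
""".format(n=N_CASES)

with open("sweep.sub", "w") as f:
    f.write(submit_description)

# condor_submit spreads the queued jobs across the available compute nodes.
subprocess.run(["condor_submit", "sweep.sub"], check=True)
</pre>
        <div>condor_q then shows the cases fanning out to the nodes.</div>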
        <div><br>
        </div>
        <div><br>
        </div>
        <div>-James</div>
        <div><br>
        </div>
        <div>It's a MoM CEM tool called vlox.</div>
        <div><br>
        </div>
      </div>
      <div class="gmail_extra"><br>
        <br>
        <div class="gmail_quote">On Fri, Dec 13, 2013 at 5:43 AM, James
          Fluhler <span dir="ltr"><<a href="mailto:j.fluhler@gmail.com" target="_blank">j.fluhler@gmail.com</a>></span>
          wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div dir="auto">
              <div>I'm just curious what simulation program are you
                running? I've used a number in the past that also
                utilize the GPU's for processing. <br>
                <br>
                James F.</div>
              <div>
                <div>
                  <div><br>
                    On Dec 12, 2013, at 11:28 PM, David <<a href="mailto:ainut@knology.net" target="_blank">ainut@knology.net</a>>
                    wrote:<br>
                    <br>
                  </div>
                  <blockquote type="cite">
                    <div> IIRC, the good thing about this cluster is the
                      automagic load leveling.  Your existing binary may
                      not run at max optimization, but if the task can
                      be spread among processors, Beowulf does a nice
                      job of it.  If each computer has its own GPU(s),
                      then all the better.<br>
                      <br>
                      You can test it right there without changing
                      anything on the system's disks.  Just create and
                      run all the cluster members off a CD.<br>
                      <br>
                      Then to test, pick the fastest one of them (maybe
                      even your existing Xeon box), run your benchmark,
                      record execution time, then boot all the other
                      machines in the cluster and run it again.  There
                      are only about two dozen steps to set it up.  One
                      professor even put most of those, along with
                      automatic cluster setup(!), as a downloadable you
                      can boot off of.  That leaves half a dozen steps
                      to tweak the cluster together, and then you're
                      good to go.  I have one of those CDs around here
                      somewhere and can get details if you're
                      interested.  Something to play with.  I did it
                      with only 4 PCs around the house and some code,
                      and even though the code was never designed for a
                      cluster (just multiprocessing), I got about a 40%
                      decrease in execution time.  The code was almost
                      completely linear execution, so I'm surprised it
                      got any improvement, but it did.<br>
                      <br>
                      David<br>
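<div><br></div>
<div>(Illustrative only: a minimal sketch of that timing comparison; "./benchmark" is a placeholder for whatever benchmark actually gets run.)</div>
<pre>
#!/usr/bin/env python3
# Rough sketch: time the same benchmark once on the fastest single box and
# once with the cluster members booted, then report the speedup.
import subprocess
import time

def timed_run(cmd):
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

single_node = timed_run(["./benchmark"])  # cluster members powered off
clustered = timed_run(["./benchmark"])    # same benchmark, cluster booted
print("single: %.1fs  cluster: %.1fs  speedup: %.2fx"
      % (single_node, clustered, single_node / clustered))
</pre>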
                      <br>
                      <br>
                      <div>Stephan Henning wrote:<br>
                      </div>
                      <blockquote type="cite">
                        <div dir="ltr">-WD
                          <div><br>
                          </div>
                          <div>I believe it's either ext3 or ext4; I'd
                            have to ssh in and check when I get back on
                            Monday. </div>
                          <div><br>
                          </div>
                          <div>-David</div>
                          <div><br>
                          </div>
                          <div>I'll check into the Beowulf and see what
                            that would entail. I'll try and talk with
                            the developer and see what their thoughts
                            are on the feasibility of running it on a
                            cluster. They may have already gone down
                            this path and rejected it, but I'll check
                            anyway. </div>
                        </div>
                        <div class="gmail_extra"><br>
                          <br>
                          <div class="gmail_quote">On Thu, Dec 12, 2013
                            at 6:16 PM, David <span dir="ltr"><<a href="mailto:ainut@knology.net" target="_blank">ainut@knology.net</a>></span>
                            wrote:<br>
                            <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"> Sounds like
                              a perfect candidate for a Beowulf cluster
                              to me.  There are possibly some gotchas,
                              but you'll have the same problems with
                              just a single computer.<br>
                              <br>
                              Velly intewesting.<br>
                              <br>
                              Stephan Henning wrote:<br>
                              <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
                                <div> -WD<br>
                                  <br>
                                  The GPUs are sent data in chunks that
                                  they then process and return. The time
                                  it takes a GPU to process a chunk can
                                  vary, so I assume the bottlenecks we
                                  were seeing were when several of the
                                  GPU cores would finish at about the
                                  same time and request a new chunk,
                                  and the chunk they needed wasn't
                                  already in RAM, so the drive array
                                  would take a heavy hit.<br>
                                  <br>
                                  Beyond that, I can't really give you a
                                  numerical value as to the amount of
                                  data they are dumping into the PCIe
                                  bus.<br>
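<div><br></div>
<div>(Illustrative only: a minimal sketch of that staging pattern, with a prefetch thread keeping a few chunks in RAM so workers that finish at about the same time don't all hit the drive array at once. The chunk reader and the "GPU work" below are placeholders.)</div>
<pre>
#!/usr/bin/env python3
# Rough sketch: workers stand in for GPU cores pulling chunks, and a
# prefetch thread keeps a small RAM buffer of chunks staged ahead of
# demand so simultaneous requests don't all land on the drive array.
import queue
import threading

N_CHUNKS = 32
N_WORKERS = 4
staged = queue.Queue(maxsize=8)  # small RAM buffer of pre-read chunks

def read_chunk(i):
    # Stand-in for pulling chunk i off the drive array.
    return bytes(1024 * 1024)

def prefetcher():
    for i in range(N_CHUNKS):
        staged.put((i, read_chunk(i)))  # the disk hit happens here, ahead of demand
    for _ in range(N_WORKERS):
        staged.put(None)                # one "done" sentinel per worker

def worker():
    while True:
        item = staged.get()
        if item is None:
            break
        i, data = item
        _ = sum(data[:16])              # placeholder for handing the chunk to a GPU

threading.Thread(target=prefetcher, daemon=True).start()
workers = [threading.Thread(target=worker) for _ in range(N_WORKERS)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print("processed", N_CHUNKS, "chunks")
</pre>
<div>The queue's maxsize trades RAM for how well bursts of simultaneous requests get absorbed.</div>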
                                  <br>
                                  <br>
                                  -David<br>
                                  <br>
                                  Ya, not sure an FPGA exists large
                                  enough for this; it would be
                                  interesting, though.<br>
                                  <br>
                                  While the process isn't entirely
                                  sequential, data previously processed
                                  is reused in the processing of other
                                  data, so that has kept us away from
                                  trying a cluster approach.<br>
                                  <br>
                                  Depending on the problem, anywhere
                                  from minutes per iteration to weeks
                                  per iteration. The weeks-long problems
                                  are sitting at about 3TB, I believe.
                                  We've only run benchmark problems on
                                  the SSDs up till now, so we haven't
                                  had the experience of seeing how they
                                  react once they start really getting
                                  full.<br>
                                  <br>
                                </div>
                                Sadly, 2TB of RAM would not be enough. I
                                looked into this HP box (<a href="http://www8.hp.com/us/en/products/proliant-servers/product-detail.html?oid=4231377#%21tab=features" target="_blank">http://www8.hp.com/us/en/products/proliant-servers/product-detail.html?oid=4231377#!tab=features</a>)
                                that would take 4TB, but the costs were
                                insane, and it can't support enough GPUs
                                to actually do anything with the RAM...<br>
                                <br>
                                <br>
                                <br>
                              </blockquote>
                              <<<snip>>>
                              <div>
                                <div><br>
                                  <br>
                                </div>
                              </div>
                            </blockquote>
                          </div>
                          <br>
                        </div>
                        <br>
                      </blockquote>
                      <br>
                    </div>
                  </blockquote>
                  <blockquote type="cite">
                    <div><span>_______________________________________________</span><br>
                      <span>General mailing list</span><br>
                      <span><a href="mailto:General@lists.makerslocal.org" target="_blank">General@lists.makerslocal.org</a></span><br>
                      <span><a href="http://lists.makerslocal.org/mailman/listinfo/general" target="_blank">http://lists.makerslocal.org/mailman/listinfo/general</a></span></div>
                  </blockquote>
                </div>
              </div>
            </div>
            <br>
          </blockquote>
        </div>
        <br>
      </div>
      <br>
    </blockquote>
    <br>
  </div></div></div>


</blockquote></div><br></div>
</div></blockquote></div></div></div></blockquote></div><br></div>
</div></blockquote></div></blockquote></div>
<br>_______________________________________________<br>
General mailing list<br>
<a href="mailto:General@lists.makerslocal.org">General@lists.makerslocal.org</a><br>
<a href="http://lists.makerslocal.org/mailman/listinfo/general" target="_blank">http://lists.makerslocal.org/mailman/listinfo/general</a><br></blockquote></div>