[General] Any linux gurus?

Erik Arendall earendall at gmail.com
Sun Dec 15 19:15:37 CST 2013


99 Luftballons?
On Dec 15, 2013 6:55 PM, "Barbara Attilio" <barbara.attili at gmail.com> wrote:

> I can bring balloons!
> On Dec 15, 2013 6:53 PM, "enabrintain at yahoo.com" <enabrintain at yahoo.com>
> wrote:
>
>> I want a cookie cake!
>>
>> *Sent from my Verizon Wireless 4G LTE DROID*
>>
>>
>> Justin Richards <ratmandu at gmail.com> wrote:
>>
>> Woohoo!
>> On Dec 15, 2013 6:48 PM, "Kyle Centers" <kylecenters at gmail.com> wrote:
>>
>>> Jeff Cotten says if this thread gets to 50 messages, he'll throw a
>>> party. So. This is my contribution?
>>> On Dec 15, 2013 5:37 PM, "James Fluhler" <j.fluhler at gmail.com> wrote:
>>>
>>>> Thanks for the link I will check it out!
>>>>
>>>> James F.
>>>>
>>>> On Dec 15, 2013, at 4:04 PM, Stephan Henning <shenning at gmail.com>
>>>> wrote:
>>>>
>>>> -WD
>>>>
>>>> I'll check the arrays and see what they are currently formatted as;
>>>> it's not a big deal to reformat one of these arrays, so that's something
>>>> that can be changed quickly and easily.
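>>>>
>>>> Just as an illustration (the node name, mount point, and device below are
>>>> placeholders for whatever the arrays actually are), the check and the
>>>> reformat are each a one-liner:
>>>>
>>>>   ssh node01 findmnt -n -o FSTYPE /mnt/array0   # prints ext3, ext4, xfs, ...
>>>>   mkfs.ext4 /dev/sdXN                           # reformat; wipes the array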
>>>>
>>>> Eh, I'm not involved in the development, but I'll bring it up, and if it
>>>> is something that hasn't been considered I'll put some pressure on them to
>>>> look into it.
>>>>
>>>>
>>>> -James
>>>> http://www.ierustech.com/product/v-lox/
>>>>
>>>> It's internally built, just got rolled out to market.
>>>>
>>>>
>>>>
>>>> On Sun, Dec 15, 2013 at 2:04 PM, James Fluhler <j.fluhler at gmail.com> wrote:
>>>>
>>>>> I have not heard of VLOX before, and a quick Google search turned up
>>>>> nothing. Is it commercially available or internally built? I've typically
>>>>> used NEC, GEMS, EMDS, and Genesys for eMag simulation work.
>>>>>
>>>>> Just curious but where do you work haha
>>>>>
>>>>> James F.
>>>>>
>>>>> On Dec 13, 2013, at 11:13 AM, Stephan Henning <shenning at gmail.com>
>>>>> wrote:
>>>>>
>>>>> Method of Moments, Computational ElectroMagnetics.
>>>>>
>>>>> Program is called Vlox
>>>>>
>>>>>
>>>>> On Fri, Dec 13, 2013 at 10:47 AM, David <ainut at knology.net> wrote:
>>>>>
>>>>>>  MoM CEM vlox -- could you expand those acronyms, please?  Is this a
>>>>>> logistics planning tool?
>>>>>>
>>>>>>
>>>>>>
>>>>>> Stephan Henning wrote:
>>>>>>
>>>>>> -David
>>>>>>
>>>>>>  Hmm, sounds interesting. The problem is already distributed a little:
>>>>>> you can think of what is being done as a form of Monte Carlo, so the same
>>>>>> run gets repeated many times with slight parameter adjustments. Each of
>>>>>> these runs can be distributed out to the compute nodes very easily;
>>>>>> currently this is being done with Condor.
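>>>>>>
>>>>>>  As a rough sketch of how one of these sweeps gets farmed out (the
>>>>>> executable name, argument, and job count are made up, not the real tool),
>>>>>> a minimal Condor submit description looks something like:
>>>>>>
>>>>>>   # one queued job per Monte Carlo case; run_case is a placeholder wrapper
>>>>>>   universe    = vanilla
>>>>>>   executable  = run_case
>>>>>>   arguments   = --case $(Process)
>>>>>>   output      = case_$(Process).out
>>>>>>   error       = case_$(Process).err
>>>>>>   log         = sweep.log
>>>>>>   queue 500
>>>>>>
>>>>>> condor_submit then scatters the 500 cases across whatever nodes are idle.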
>>>>>>
>>>>>>
>>>>>>  -James
>>>>>>
>>>>>>  It's a MoM CEM tool called vlox.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Dec 13, 2013 at 5:43 AM, James Fluhler <j.fluhler at gmail.com> wrote:
>>>>>>
>>>>>>>  I'm just curious: what simulation program are you running? I've
>>>>>>> used a number in the past that also utilize the GPUs for processing.
>>>>>>>
>>>>>>> James F.
>>>>>>>
>>>>>>> On Dec 12, 2013, at 11:28 PM, David <ainut at knology.net> wrote:
>>>>>>>
>>>>>>>   IIRC, the good thing about this cluster is the automagic load
>>>>>>> leveling.  Your existing binary may not run at max optimization, but if the
>>>>>>> task can be spread among processors, Beowulf does a nice job of it.  If
>>>>>>> each computer has its own GPU(s), then all the better.
>>>>>>>
>>>>>>> You can test it right there without changing anything on the
>>>>>>> system's disks.  Just create and run all the cluster members off a CD.
>>>>>>>
>>>>>>> Then to test, pick the fastest one of them (maybe even your existing
>>>>>>> Xeon box), run your benchmark, and record the execution time; then boot all
>>>>>>> the other machines in the cluster and run it again.  There are only about
>>>>>>> two dozen steps to set it up.  One professor even packaged most of those,
>>>>>>> along with automatic cluster setup(!), as a downloadable image you can boot
>>>>>>> from.  That leaves half a dozen steps to tweak the cluster together, then
>>>>>>> you're good to go.  I have one of those CDs around here somewhere and can
>>>>>>> get the details if you're interested.  Something to play with.  I tried it
>>>>>>> with only 4 PCs around the house and some code, and even though the code
>>>>>>> was never designed for a cluster (just multiprocessing), I got about a 40%
>>>>>>> decrease in execution time.  The code was almost completely linear in
>>>>>>> execution, so I'm surprised it got any improvement, but it did.
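>>>>>>>
>>>>>>> The comparison itself is just timing the same run twice; something like the
>>>>>>> following, assuming the benchmark can be launched under MPI (the binary
>>>>>>> name, host file, and process count are placeholders):
>>>>>>>
>>>>>>>   time ./run_benchmark                                 # baseline on the single Xeon box
>>>>>>>   time mpirun -np 16 --hostfile nodes ./run_benchmark  # same run spread over the cluster
>>>>>>>
>>>>>>> On a load-leveling cluster you would skip mpirun and simply rerun the same
>>>>>>> binary once the other nodes have booted and joined.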
>>>>>>>
>>>>>>> David
>>>>>>>
>>>>>>>
>>>>>>> Stephan Henning wrote:
>>>>>>>
>>>>>>> -WD
>>>>>>>
>>>>>>>  I believe it's either ext3 or ext4; I'd have to ssh in and check
>>>>>>> when I get back on Monday.
>>>>>>>  -David
>>>>>>>
>>>>>>>  I'll check into Beowulf and see what that would entail. I'll
>>>>>>> try and talk with the developer and see what their thoughts are on the
>>>>>>> feasibility of running it on a cluster. They may have already gone down
>>>>>>> this path and rejected it, but I'll check anyway.
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Dec 12, 2013 at 6:16 PM, David <ainut at knology.net> wrote:
>>>>>>>
>>>>>>>> Sounds like a perfect candidate for a Beowulf cluster to me.  There
>>>>>>>> are possibly some gotchas, but you'll have the same problems with just a
>>>>>>>> single computer.
>>>>>>>>
>>>>>>>> Velly intewesting.
>>>>>>>>
>>>>>>>> Stephan Henning wrote:
>>>>>>>>
>>>>>>>>>  -WD
>>>>>>>>>
>>>>>>>>> The GPUs are sent data in chunks that they then process and
>>>>>>>>> return. The time it takes a GPU to process a chunk can vary, so I assume
>>>>>>>>> the bottlenecks we were seeing were when several of the GPU cores would
>>>>>>>>> finish at about the same time and request new chunks that weren't already
>>>>>>>>> in RAM, so the drive array would take a heavy hit.
>>>>>>>>>
>>>>>>>>> Beyond that, I can't really give you a numerical value for the
>>>>>>>>> amount of data they are dumping onto the PCIe bus.
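>>>>>>>>>
>>>>>>>>> Purely to illustrate the idea (not how the actual tool is written), the
>>>>>>>>> usual way to soften that kind of hit is to stage the next chunks into RAM
>>>>>>>>> while the GPUs chew on the current ones. A toy Python sketch with made-up
>>>>>>>>> chunk files:
>>>>>>>>>
>>>>>>>>>   import threading, queue
>>>>>>>>>
>>>>>>>>>   def reader(paths, q):
>>>>>>>>>       # stage chunks into RAM ahead of the workers
>>>>>>>>>       for p in paths:
>>>>>>>>>           with open(p, "rb") as f:
>>>>>>>>>               q.put(f.read())
>>>>>>>>>       q.put(None)  # sentinel: no more chunks
>>>>>>>>>
>>>>>>>>>   def process(chunk):
>>>>>>>>>       pass  # stand-in for handing the chunk to a GPU core
>>>>>>>>>
>>>>>>>>>   chunk_paths = ["chunk0.bin", "chunk1.bin"]  # hypothetical chunk files
>>>>>>>>>   q = queue.Queue(maxsize=4)  # keep a few chunks staged ahead of demand
>>>>>>>>>   threading.Thread(target=reader, args=(chunk_paths, q), daemon=True).start()
>>>>>>>>>
>>>>>>>>>   while True:
>>>>>>>>>       chunk = q.get()
>>>>>>>>>       if chunk is None:
>>>>>>>>>           break
>>>>>>>>>       process(chunk)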
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> -David
>>>>>>>>>
>>>>>>>>> Ya, not sure an FPGA exists that's large enough for this; it would be
>>>>>>>>> interesting though.
>>>>>>>>>
>>>>>>>>> While the process isn't entirely sequential, data previously
>>>>>>>>> processed is reused in the processing of other data, so that has kept us
>>>>>>>>> away from trying a cluster approach.
>>>>>>>>>
>>>>>>>>> Depending on the problem, anywhere from minutes per iteration to
>>>>>>>>> weeks per iteration. The weeks-long problems are sitting at about 3 TB, I
>>>>>>>>> believe. We've only run benchmark problems on the SSDs up until now, so we
>>>>>>>>> haven't had the experience of seeing how they react once they start really
>>>>>>>>> getting full.
>>>>>>>>>
>>>>>>>>>  Sadly, 2TB of RAM would not be enough. I looked into this HP box
>>>>>>>>> (http://www8.hp.com/us/en/products/proliant-servers/product-detail.html?oid=4231377#!tab=features)
>>>>>>>>> that would take 4TB, but the costs were insane, and it can't support enough
>>>>>>>>> GPUs to actually do anything with the RAM...
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>  <<<snip>>>