The PS3 has a number of downsides, which can mostly be summarized as the lack of support since Sony removed the OtherOS option.<div><br><div>Second, it's difficult to write a compiler smart enough to take good advantage of the SPUs on the Cell Broadband Engine.</div><div><br><div>If you decide you want to learn about doing highly parallel computing on the Cell, I can bring my BladeCenter to the shop and fire up one (or many) of my QS20 blades for experiments.</div><div>But honestly, the Cell is a lot of trouble without much benefit.<br><br><div class="gmail_quote">On Tue Feb 03 2015 at 4:14:46 PM david <<a href="mailto:ainut@knology.net" target="_blank">ainut@knology.net</a>> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
IF, and it's a big IF, your problem lends itself to a pure distributed processing paradigm (think Cray...), a very low-cost setup with phenomenal compute speeds is the Sony PlayStation 3, believe it or not. You can find them really cheap nowadays. Network a few of them together, install Linux/UNIX on them (might be available out there), set up the Cray-type compiler (from SGI), and you'll have a honking system. In the right problem domain, 5 of those would outperform hundreds of the pico-computers.</div><div bgcolor="#FFFFFF" text="#000000"><br>
<br>
<br>
<br>
<br>
<div>On 02/03/2015 03:57 PM, Stephan Henning
wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">There was a group that did it a while back. I want
to say they did it with Atom processors. Ended up with 400+
nodes in a 10U rack I think. </div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Tue, Feb 3, 2015 at 3:55 PM, Erik
Arendall <span dir="ltr"><<a href="mailto:earendall@gmail.com" target="_blank">earendall@gmail.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">This would be a cool project: develop a module board that contains the CPU/GPU of choice and the required RAM. Then the modules could plug into a supervisory control node. </div>
<div>
<div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Tue, Feb 3, 2015 at 3:50
PM, Stephan Henning <span dir="ltr"><<a href="mailto:shenning@gmail.com" target="_blank">shenning@gmail.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">Hey Hunter,
<div><br>
</div>
<div>Well, with the Edison, it wouldn't be 27
devices, it would be closer to 400 :)</div>
<div><br>
</div>
<div>I <i>think</i> I can fit 27 mini-ITX motherboards in a 4U chassis (maybe only 21-24, depending on heatsink height). For the raspis or the Edisons to be viable, they would need to beat that baseline on a flops/watt vs. $$ comparison. Even in that case, the low amount of RAM limits their usefulness. </div>
</div>
<div>
<div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Tue, Feb 3, 2015
at 3:44 PM, Hunter Fuller <span dir="ltr"><<a href="mailto:hfuller@pixilic.com" target="_blank">hfuller@pixilic.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid">
<p dir="ltr">27 devices in a metal box will work fine, provided there is also a fairly robust AP in that box. I would personally still lean toward USB Ethernet, though. But that increases your device's size and complexity... Hm.</p>
<p dir="ltr">As far as PXE boot, since there is no wired Ethernet available, I doubt that is a thing. However, you can mount the internal storage as /boot, and have a script run that rsyncs the /boot fs between the boxes and a server. The rest can be achieved by using an NFS volume as your root partition. This setup is commonly done on armies of Raspberry Pis.</p>
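Hunter's /boot-plus-NFS-root scheme can be sketched roughly like this; the server address, export path, and device names are illustrative assumptions, not from the thread:

```
# Kernel arguments (bootloader config) pointing root at an NFS export:
#   root=/dev/nfs nfsroot=192.168.1.1:/exports/nodes/node01,vers=3 ip=dhcp rw

# /etc/fstab on each node: local flash stays mounted only as /boot
/dev/mmcblk0p1  /boot  vfat  defaults  0  2

# Periodic job (cron or systemd timer) keeping /boot in sync with the server:
rsync -a --delete rsync://boot-server.local/boot/ /boot/
```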
<p dir="ltr">There wouldn't be much difference between original prep on this and originally preparing several SD cards. In one case, you have to connect each device to a provisioning station. In the other case, you connect each SD card to the same station. Not much different, and once you boot one time, you can do the maintenance in an automated fashion across all nodes. </p>
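The automated-maintenance step can be as simple as fanning one command out over ssh; a minimal sketch, assuming key-based ssh access and hypothetical node hostnames:

```python
import subprocess

NODES = [f"node{i:02d}" for i in range(1, 28)]  # hypothetical hostnames, one per board

def ssh_cmd(host, command):
    """Build the ssh invocation for one node (BatchMode avoids password prompts)."""
    return ["ssh", "-o", "BatchMode=yes", host, command]

def run_all(command):
    """Run the same maintenance command on every node, collecting exit codes."""
    results = {}
    for host in NODES:
        proc = subprocess.run(ssh_cmd(host, command), capture_output=True, text=True)
        results[host] = proc.returncode
    return results

# e.g. run_all("sudo rsync -a --delete server::boot/ /boot/")
```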
<div>
<div>
<div class="gmail_quote">On Jan 23,
2015 9:36 AM, "Michael Carroll"
<<a href="mailto:carroll.michael@gmail.com" target="_blank">carroll.michael@gmail.com</a>>
wrote:<br type="attribution">
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid">
<div dir="ltr">Stephan,
<div><br>
</div>
<div>I didn't realize that the Edison was wifi-only. I'm interested to hear how 27 wifi devices in a metal box will work.</div>
<div><br>
</div>
<div>Also, do you know if the Edison can PXE-boot? I think that's the best approach for booting a whole bunch of homogeneous computers; it would certainly be more maintenance overhead without that capability.</div>
<div><br>
</div>
<div>~mc</div>
<div><br>
</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On
Thu, Jan 22, 2015 at 11:04
PM, Stephan Henning <span dir="ltr"><<a href="mailto:shenning@gmail.com" target="_blank">shenning@gmail.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid">
<div dir="ltr">@Erik
<div>Well, the raspi and beaglebone have less RAM than the Edison. I'll have to take a look at the Rock; the Pro version offers 2 GB, but since the Edison is an x86 platform it is advantageous in many ways.
<div><br>
</div>
<div>@Tim</div>
</div>
<div>Ya, that looks very similar. I'll give it a read-through in the morning. I'll make sure to keep you updated. </div>
</div>
<div>
<div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On
Thu, Jan 22, 2015
at 10:11 PM, Erik
Arendall <span dir="ltr"><<a href="mailto:earendall@gmail.com" target="_blank">earendall@gmail.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid">
<p dir="ltr">Not sure of your RAM requirements, but there are options in the RasPi and BeagleBone Black; also check out the Radxa Rock. </p>
<p dir="ltr"><a href="http://radxa.com/Rock" target="_blank">http://radxa.com/Rock</a></p>
<span><font color="#888888">
<p dir="ltr">Erik</p>
</font></span>
<div>
<div>
<div class="gmail_quote">On
Jan 22, 2015
10:07 PM, "Tim
H" <<a href="mailto:crashcartpro@gmail.com" target="_blank">crashcartpro@gmail.com</a>>
wrote:<br type="attribution">
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid">
<div dir="ltr">
<div>
<div>
<div>This sounds like a fun project! <br>
Reminds me of this guy:<br>
<a href="http://www.pcworld.idg.com.au/article/349862/seamicro_cloud_server_sports_512_atom_processors/" target="_blank">http://www.pcworld.idg.com.au/article/349862/seamicro_cloud_server_sports_512_atom_processors/</a><br>
</div>
(a cluster of low-power processors in a single box)<br>
<br>
</div>
I'd also been kicking a similar idea around for the last year, but with no real ability to do it, so I'd love to see your progress!<br>
</div>
<div>-Tim<br>
</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On
Thu, Jan 22,
2015 at 9:10
PM, Stephan
Henning <span dir="ltr"><<a href="mailto:shenning@gmail.com" target="_blank">shenning@gmail.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid">
<div dir="ltr">In some ways, yes. The biggest limitation with the Edison for me is the RAM. While there is a lot that we could run on it, it restricts them enough that I don't think it would be as useful, which alters the true 'cost' of the setup.
<div><br>
</div>
<div>Granted, you could probably fit a few hundred of them in a 4U chassis. It would be an interesting experiment in integration, though, since they have no Ethernet interface, only wireless. </div>
</div>
<div>
<div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On
Thu, Jan 22,
2015 at 9:02
PM, Erik
Arendall <span dir="ltr"><<a href="mailto:earendall@gmail.com" target="_blank">earendall@gmail.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid">
<p dir="ltr">I've often kicked around the idea of doing this with Arduinos and FPGAs. I guess you could also do it with Intel Edison modules. Cost-wise, the Edison modules would be better than a PC. </p>
<span><font color="#888888">
<p dir="ltr">Erik </p>
</font></span>
<div>
<div>
<div class="gmail_quote">On
Jan 22, 2015
6:44 PM,
"Stephan
Henning" <<a href="mailto:shenning@gmail.com" target="_blank">shenning@gmail.com</a>>
wrote:<br type="attribution">
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid">
<div dir="auto">
<div>@mc</div>
<div>Both. If I start to scale this to a large number of nodes, I can foresee many headaches if I can't easily push modifications and updates. From the job distribution side, it would be great to maintain compatibility with Condor; I'm just unsure how well it will operate if it has to hand jobs off to the head node that then get distributed out further. </div>
<div><br>
</div>
<div>@Brian</div>
<div>Our current cluster is made up of discrete machines, only about 20 nodes. Many of the nodes are actual user workstations that are brought in when inactive. There is no uniform provisioning method. Every box has a slightly different hardware configuration. Thankfully we do a pretty good job keeping all required software aligned to the same version. </div>
<div><br>
</div>
<div>The VM idea is interesting. I hadn't considered that. I will need to think on that and how I might be able to implement it. </div>
<div><br>
</div>
<div>@david</div>
<div>Yup, I'm fully aware this level of distributed computing is only good for specific cases. I understand your position, thanks. <br>
<br>
<div>-stephan</div>
<div><br>
</div>
<div>---———---•---———---•---———---</div>
Sent from a mobile device, please excuse the spelling and brevity. </div>
<div><br>
On Jan 22, 2015, at 5:54 PM, Brian Oborn <<a href="mailto:linuxpunk@gmail.com" target="_blank">linuxpunk@gmail.com</a>> wrote:<br>
<br>
</div>
<blockquote type="cite">
<div>
<div dir="ltr">I would be tempted to just copy what the in-house cluster uses for provisioning. That will save you a lot of time and make it easier to integrate with the larger cluster if you choose to do so. Although it can be tempting to get hardware in your hands, I've done a lot of work with building all of the fiddly Linux bits (DHCP+TFTP+root on NFS+NFS home) in several VMs before moving to real hardware. You can set up a private VM-only network between your head node and the slave nodes and work from there.</div>
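For the VM test network Brian describes, a single dnsmasq instance on the head node can cover the DHCP and TFTP pieces; a minimal sketch, where the interface name, address range, and paths are assumptions rather than anything from the thread:

```
# /etc/dnsmasq.conf on the head node / head VM
interface=eth1                       # private cluster-only network
dhcp-range=10.0.0.50,10.0.0.150,12h  # leases for the slave nodes
enable-tftp
tftp-root=/srv/tftp                  # holds pxelinux.0 and kernel images
dhcp-boot=pxelinux.0                 # filename handed to PXE clients
```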
<div class="gmail_extra"><br>
<div class="gmail_quote">On
Thu, Jan 22,
2015 at 5:31
PM, Michael
Carroll <span dir="ltr"><<a href="mailto:carroll.michael@gmail.com" target="_blank">carroll.michael@gmail.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid">
<div dir="auto">
<div>So is your concern with provisioning and setup, or with actual job distribution?<br>
<br>
~mc mobile</div>
<div>
<div>
<div><br>
On Jan 22,
2015, at
17:15, Stephan
Henning <<a href="mailto:shenning@gmail.com" target="_blank">shenning@gmail.com</a>>
wrote:<br>
<br>
</div>
<blockquote type="cite">
<div>
<div dir="ltr">This is a side project for the office. Sadly, most of this type of work can't be farmed out to external clusters, otherwise we would use it for that. We do currently utilize AWS for some of this type of work, but only for internal R&D.
<div><br>
</div>
<div>This all started when the Intel Edison got released. Some of us were talking about it one day and realized that it <i>might</i> have <i>just enough</i> processing power and RAM to handle some of our smaller problems. We've talked about it some more, and the discussion has evolved to the point where I've been handed some hours and a small amount of funding to try and implement a 'cluster-in-a-box'. </div>
<div><br>
</div>
<div>The main idea is to rack a whole bunch of mini-ITX boards on edge into a 4U chassis (yes, they will fit). Assuming a 2" board-to-board clearance across the width of the chassis and 1" spacing back-to-front down the depth of the box, I think I could fit 27 boards into a 36"-deep chassis, with enough room for the power supplies and interconnects. </div>
<div><br>
</div>
<div>Utilizing embedded motherboards with Atom C2750 8-core CPUs and 16 GB of RAM per board, that should give me a pretty substantial cluster to play with. Obviously I am starting small, probably with two or three boards running Q2900 4-core CPUs, until I can get the software side worked out.</div>
<div><br>
</div>
<div>The software-infrastructure side is the part I'm having a hard time with. While there are options out there for how to do this, they are all relatively involved, and there isn't an obvious 'best' choice to me right now. Currently our in-house HPC cluster utilizes HTCondor for its backbone, so I would like to maintain some sort of connection to it. Otherwise, I'm seeing options in the Beowulf and Rocks areas that could be useful; I'm just not sure where to start, in all honesty. </div>
<div><br>
</div>
<div>At the end of the day this needs to be relatively easy for us to manage (time spent working on the cluster is time spent not billing the customer) while being easy enough to add nodes to, assuming this is a success and I get the OK to expand it to a full 42U rack's worth. </div>
<div><br>
</div>
<div>Our current cluster is almost always fully utilized. Currently we've got about a 2-month backlog of jobs on it. </div>
<div><br>
</div>
</div>
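If the box does end up talking to the existing HTCondor pool, flocking is the usual mechanism for handing jobs between pools; a minimal sketch of the condor_config additions, with placeholder hostnames that are not from the thread:

```
# On the cluster-in-a-box head node: send overflow jobs to the in-house pool
FLOCK_TO = condor-cm.office.example

# On the in-house central manager: accept flocked jobs from the box
FLOCK_FROM = box-head.office.example
ALLOW_WRITE = $(ALLOW_WRITE), box-head.office.example
```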
<div class="gmail_extra"><br>
<div class="gmail_quote">On
Thu, Jan 22,
2015 at 4:55
PM, Brian
Oborn <span dir="ltr"><<a href="mailto:linuxpunk@gmail.com" target="_blank">linuxpunk@gmail.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid">
<div dir="ltr">If you can keep your utilization high, then your own hardware can be much more cost-effective. However, if you end up paying depreciation and maintenance on a cluster that's doing nothing most of the time, you'd be better off in the cloud.</div>
<div>
<div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On
Thu, Jan 22,
2015 at 4:50
PM, Michael
Carroll <span dir="ltr"><<a href="mailto:carroll.michael@gmail.com" target="_blank">carroll.michael@gmail.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid">
<div dir="ltr">Depending on what you are going to do, it seems like it would make more sense to use AWS or Digital Ocean these days, rather than standing up your own hardware. Maintaining your own hardware sucks.
<div><br>
</div>
<div>That being said, if you are doing something that requires InfiniBand, then hardware is your only choice :)</div>
<span><font color="#888888">
<div><br>
</div>
<div>~mc</div>
</font></span></div>
<div>
<div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On
Thu, Jan 22,
2015 at 4:43
PM, Joshua
Pritt <span dir="ltr"><<a href="mailto:ramgarden@gmail.com" target="_blank">ramgarden@gmail.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid">
<div dir="ltr">My friends and I installed a Beowulf cluster on a closet full of Pentium 75 MHz machines we were donated, just for fun, many years ago back when Beowulf was just getting popular. We never figured out anything to do with it though...</div>
<div>
<div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On
Thu, Jan 22,
2015 at 5:31
PM, Brian
Oborn <span dir="ltr"><<a href="mailto:linuxpunk@gmail.com" target="_blank">linuxpunk@gmail.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid">
<div dir="ltr">In my previous job I set up several production Beowulf clusters, mainly for particle physics simulations, and this has been an area of intense interest for me. I would be excited to help you out and I think I could provide some good assistance.
<div><br>
</div>
<div>Brian Oborn (aka bobbytables)<br>
<div><br>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote"><span>On
Thu, Jan 22,
2015 at 4:25
PM, Stephan
Henning <span dir="ltr"><<a href="mailto:shenning@gmail.com" target="_blank">shenning@gmail.com</a>></span>
wrote:<br>
</span>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid"><span>
<div dir="ltr">Does anyone on the mailing list have any experience with setting up a cluster computation system? If so, and you are willing to humor my questions, I'd greatly appreciate a few minutes of your time. <span><font color="#888888">
<div><br>
</div>
<div>-stephan</div>
</font></span></div>
<br>
</span><span>______________________________<u></u>_________________<br>
General
mailing list<br>
<a href="mailto:General@lists.makerslocal.org" target="_blank">General@lists.makerslocal.org</a><br>
<a href="http://lists.makerslocal.org/mailman/listinfo/general" target="_blank">http://lists.makerslocal.org/<u></u>mailman/listinfo/general</a><br>
</span></blockquote>
</div>
<br>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</blockquote>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</blockquote>
</div>
</blockquote>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</blockquote>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</blockquote>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
<br>
</blockquote>
<br>
</div><div bgcolor="#FFFFFF" text="#000000"><pre cols="72">--
This headspace for rent</pre>
</div><div bgcolor="#FFFFFF" text="#000000"></div>
</blockquote></div></div></div></div>