[ML-General] hardware RAID
david
ainut at knology.net
Fri Jun 12 09:21:08 CDT 2015
replies embedded (pun intended):
On 06/12/15 07:53, Kirk D Mccann wrote:
> So I've noticed two things that no one has mentioned yet: three
> drives running in RAID 0, and rebuild times of large drives.
>
> RAID 0:
> You realize that if any one of the drives running in RAID 0 fails,
> you lose all your data, right? The only time you want to use RAID 0
> is when you don't care about the data and are looking for speed. I
> use RAID 0 for our build server's build drive because we always have
> the source code needed to rebuild the builds.
>
The two sets of RAIDs are on different machines right now. The striped
RAID 0 arrays are used for video capture; that video is then compressed,
written to other media (usually write-once Blu-ray), and deleted from
the stripe array.
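Kirk's warning about striping is easy to quantify: a RAID 0 array survives
only if every member drive survives, so the loss probability compounds with
drive count. A minimal sketch (the 3%/year per-drive failure rate is just an
assumed illustrative figure, not a measured one):

```python
# Probability that an N-drive RAID 0 array loses data within a year.
# Assumes drives fail independently at the same annual rate.
def raid0_loss_probability(n_drives, annual_failure_rate=0.03):
    # The array survives only if *every* drive survives the year.
    p_all_survive = (1 - annual_failure_rate) ** n_drives
    return 1 - p_all_survive

for n in (1, 2, 3):
    print(f"{n} drive(s): {raid0_loss_probability(n):.1%} chance of loss per year")
```

With three drives the yearly loss probability nearly triples relative to a
single disk, which is why the stripe here only holds scratch capture data.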
> Large Drives:
> So if you have more than two drives, mirroring isn't really what you
> want, because true mirroring only works with two sets. (That could be
> two drives or two sets of RAIDed disks.)
> Since you have more than two drives, you are going to want RAID 5, 6,
> or 7. The RAID level you choose should be based on the size and class
> of your drives.
The mirrored drives, qty 2, are each 2 TB and are identical models.
Again, the intent is easy failover without losing existing data. I'm
horrible about doing proper backups at home. (Like the mechanic's
personal car that always needs a lot of work.)
That machine is not a data- or compute-intensive one. Email, games, and
such.
>
> If the drives you have are more than 1 TB in size and they are
> consumer-grade drives, then you shouldn't be using RAID 5.
I was incorrect; they are RAID 1.
> This is a problem because the likelihood of a read failure while
> rebuilding a disk grows with drive size. So then you have to be able
> to handle a read failure, which requires a higher RAID level. Check
> out the calculator:
> http://www.servethehome.com/raid-calculator/raid-reliability-calculator-simple-mttdl-model/
>
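The math behind that calculator's warning is roughly this: consumer drives
are commonly specced at one unrecoverable read error (URE) per 1e14 bits,
and a RAID 5 rebuild has to read every surviving sector. A back-of-the-envelope
sketch (the URE rate and the 4x2TB array size are assumptions for illustration):

```python
# Chance of hitting at least one unrecoverable read error (URE)
# while reading an entire array's worth of data during a rebuild.
def rebuild_ure_probability(data_read_tb, ure_rate_bits=1e14):
    bits_read = data_read_tb * 1e12 * 8     # decimal TB -> bits
    p_bit_fails = 1 / ure_rate_bits         # per-bit error probability
    return 1 - (1 - p_bit_fails) ** bits_read

# Rebuilding a hypothetical 4x2TB RAID 5 means reading ~6 TB
# from the three surviving drives.
print(f"{rebuild_ure_probability(6):.0%} chance of a URE during rebuild")
```

Even at single-digit terabytes the probability is uncomfortably large, which
is the usual argument for RAID 6 (or ZFS raidz2/raidz3) over RAID 5 with big
consumer drives.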
> Also, if you are using RAID, don't use Western Digital Green drives
> unless you plan to reflash the firmware to make them function like
> Red drives.
>
Will look further into ZFS.
> My recommendation:
> All that being said, I am a big fan of FreeNAS, because it uses ZFS.
> Btrfs is great, but it's not quite where ZFS is yet (or at least that
> was the case when I last looked at it). If you go the ZFS route you
> REALLY REALLY need to read up on how ZFS, uDevs, and vDevs work,
> because what many people don't realize is that you can't just add a
> single disk to the array; when you do, the entire array fails if that
> single added disk fails. When you add drives, you have to add them in
> sets.
>
> Personally, I'm paranoid about my data: I use ZFS RAID-Z3 on two
> vDevs, each with five 2+ TB drives.
>
> Oh, and with ZFS you can use different-size drives, but you waste a
> good amount of space when you do that.
> And lastly, be sure to schedule scrubs of your drives, and do it in
> such a way that a scrub will not run while a long SMART test is
> running. That can cause problems.
>
> -Kirk
>
Thanks!
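One footnote on Kirk's mixed-size point: a raidz vdev can only use each
member up to the capacity of its smallest drive, so the "wasted" space is
easy to estimate. A simplified sketch (it ignores ZFS metadata and padding
overhead, and the drive mix is hypothetical):

```python
# Rough usable capacity of a raidz vdev with mixed drive sizes.
# ZFS treats every member as if it were as small as the smallest one.
def raidz_usable_tb(drive_sizes_tb, parity=1):
    per_drive = min(drive_sizes_tb)             # smallest member limits all
    data_drives = len(drive_sizes_tb) - parity  # parity drives hold no data
    return per_drive * data_drives

def wasted_tb(drive_sizes_tb):
    smallest = min(drive_sizes_tb)
    return sum(size - smallest for size in drive_sizes_tb)

drives = [2, 2, 3, 4]                     # TB, hypothetical mix
print(raidz_usable_tb(drives, parity=1))  # raidz1 usable space
print(wasted_tb(drives))                  # capacity simply ignored
```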
> On Thu, Jun 11, 2015 at 5:33 PM, WebDawg <webdawg at gmail.com> wrote:
>
>
>
>     On Thu, Jun 11, 2015 at 3:30 PM, WebDawg <webdawg at gmail.com> wrote:
>
>         On Thu, Jun 11, 2015 at 3:11 PM, david <ainut at knology.net> wrote:
>
> I'm thinking about putting all the computers on the
> network disk array, including the SOC's: beaglebone
> blacks, arduino mega256, and the RPi2. I would not mind
> doing the compiles (and maybe even the booting!) on the
> hard drives instead of the limited-life SD cards and
> 'flash' that are on the SOC's. Any of you guys done
> that? Everything in the house is 1Gb Ethernet. If only I
> could get that to the outside world <heavy sigh.> :-)
>             Already have the BBBs booting off the SD card, which you
>             have to do with the rev B's and their 2 GB size.
>
> David
>
>
> I have thought about it. At that point you need to consider
> the transport mechanism between the systems and such. NFS,
> CIFS, iSCSI?
>
> You need to back up. Live. (ZFS Snapshot)
>
> I do not know what you do with them, so I cannot help there.
>
> PXE boot? Other ways to boot? I do not know a lot about that
> stuff except the RPi.
>
>     I would consider bonding more than one port together on the
>     network server if you are doing anything major.
>
>     I run some VMs over NFS right now; I do not like it.
>
>     I was using CIFS, but I stopped after learning that it is really
>     bad to do over and over again. None of this was mission critical.
>
>     I want speed, so I am leaning towards physical disks for the
>     virtual systems. In the future, if I wanted network storage, I
>     would use a dedicated fiber target or bonded target.
>
> Fun Fun.
>
>
>     I forgot to mention: you are going to need a few UPS units. I
>     have destroyed virtual systems when the server hosting their
>     filesystem went down. It is not fun bringing them back to life.
>
>     If you are talking about creating an image and having the
>     devices pull new images when they exist, while still running
>     from the SD cards when they are on, I think you have a different
>     situation altogether.
>
> _______________________________________________
> General mailing list
> General at lists.makerslocal.org <mailto:General at lists.makerslocal.org>
> http://lists.makerslocal.org/mailman/listinfo/general
>
>
>
>