[ML-General] hardware RAID
Hunter Fuller
hfuller at pixilic.com
Fri Jun 12 10:57:53 CDT 2015
All,
There are a couple of things still to be addressed here...
- David, why did your md RAID fail? This should never, ever happen. Maybe
it didn't actually fail and the differences you see are just in the
metadata and boot loader...? I hope that is the case. There's no reason
for an md RAID 1 not to mirror. If it really did fail, you may have much
bigger problems, such as a failing disk or controller.
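
A quick way to rule out a silently degraded or diverging mirror is to read
the counters the kernel already keeps in sysfs. A minimal sketch in Python
(the device name md0 is an assumption -- substitute your own array):

#!/usr/bin/env python3
# Quick md RAID health check -- a sketch, assuming the array is /dev/md0.
# Reads the kernel's sysfs counters; run it as root on the affected box.

from pathlib import Path

MD = Path("/sys/block/md0/md")   # adjust "md0" for your array

def read(name):
    return (MD / name).read_text().strip()

print("array state :", read("array_state"))   # expect "clean" or "active"
print("degraded    :", read("degraded"))      # 0 means both halves are in
print("mismatch_cnt:", read("mismatch_cnt"))  # nonzero after a "check" pass
                                              # means the halves diverged

# To force a compare of the two halves, run (as root):
#   echo check > /sys/block/md0/md/sync_action
# then re-read mismatch_cnt once /proc/mdstat shows the check has finished.

If mismatch_cnt stays at zero after a check, the mirror halves agree and
the differences were most likely just metadata or boot loader bits.
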
- I'm as big of a ZFS fanboy as anyone. But remember, you NEED
error-correcting (ECC) RAM to run ZFS! The chance of an 8-16 GB set of
memory developing an error is far higher than back when we ran 128 MB. If
ZFS is doing a patrol read ("scrub") and a bit gets flipped in your RAM,
it will treat that data as authoritative and push it out to all your
disks and caches! This can and will corrupt your data in subtle but
important ways. I have seen it happen. Be careful out there!
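
To put rough numbers on the RAM-size point, here is a back-of-the-envelope
sketch in Python. The soft-error rate is an assumed ballpark (published
figures vary wildly); the point is only that exposure scales with capacity:

# Back-of-the-envelope only: assume a fixed soft-error rate per megabit.
# The rate below is an assumption, not a measured figure.

FIT_PER_MBIT = 1.0          # assumed failures per 10^9 device-hours per Mbit
HOURS_PER_YEAR = 24 * 365

def expected_errors_per_year(capacity_gb):
    megabits = capacity_gb * 1024 * 8
    return megabits * FIT_PER_MBIT * HOURS_PER_YEAR / 1e9

for gb in (0.125, 8, 16):   # 128 MB vs. today's 8-16 GB
    print(f"{gb:>6} GB -> {expected_errors_per_year(gb):.4f} expected bit errors/yr")

# Whatever the true per-bit rate is, 16 GB has 128x the exposure of 128 MB,
# which is the whole argument for ECC under ZFS.
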
On Jun 12, 2015 10:11 AM, "Kirk D Mccann" <kirk.mccann at gmail.com> wrote:
>
>
> On Fri, Jun 12, 2015 at 9:37 AM, WebDawg <webdawg at gmail.com> wrote:
>
>>
>>
>> On Fri, Jun 12, 2015 at 5:53 AM, Kirk D Mccann <kirk.mccann at gmail.com>
>> wrote:
>>
>>> So I've noticed two things that no one has mentioned yet: three drives
>>> running in RAID 0, and rebuild times for large drives.
>>>
>>> RAID 0:
>>> You realize that if any one of the drives running in RAID 0 fails, you
>>> lose all your data, right? The only time you want to use RAID 0 is when
>>> you don't care about the data and are looking for speed. I use RAID 0
>>> for our build server's build drive because we always have the source
>>> code that can be used to rebuild the builds.
>>>
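
To illustrate the RAID 0 point: the array survives only if every member
survives, so the odds of losing everything climb quickly with the number
of drives. A small Python sketch (the 3% annual failure rate is just an
assumed figure):

# RAID 0 survives only if *every* member survives, so the array's annual
# survival probability is p_drive ** n_drives. The 3% AFR is an assumption.

AFR = 0.03                   # assumed annual failure rate of a single drive
p_drive = 1 - AFR

for n in (1, 2, 3, 4):
    p_loss = 1 - p_drive ** n
    print(f"{n} drive(s) in RAID 0: {p_loss:.1%} chance of losing everything per year")
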
>>> Large Drives:
>>> If you have more than two drives, mirroring isn't really what you want,
>>> because true mirroring only works with two sets (that could be two
>>> drives or two sets of RAIDed disks).
>>> Since you have more than two drives, you are going to want RAID 5, 6, or
>>> 7. The RAID level you choose should be based on the size of your drives
>>> and the class of the drives.
>>>
>>> If your drives are more than 1 TB in size and they are consumer-grade
>>> drives, then you shouldn't be using RAID 5.
>>> The problem is that the likelihood of a read failure while rebuilding
>>> onto a replacement disk grows with drive size, so you have to be able to
>>> survive a read error during the rebuild, which means a higher RAID level
>>> with more parity. Check out this calculator:
>>> http://www.servethehome.com/raid-calculator/raid-reliability-calculator-simple-mttdl-model/
>>>
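
The calculator linked above models this in detail, but the core of the
RAID 5 concern boils down to one expression. A sketch in Python, using the
10^-14 unrecoverable-read-error rate that consumer drive datasheets
typically quote (an assumption, and a simplified per-bit model):

# Probability of hitting at least one unrecoverable read error (URE) while
# re-reading the surviving drives during a RAID 5 rebuild.

URE_PER_BIT = 1e-14          # typical consumer-drive spec; assumed here

def p_rebuild_read_error(surviving_drives, drive_tb):
    bits_read = surviving_drives * drive_tb * 1e12 * 8   # bits that must read cleanly
    return 1 - (1 - URE_PER_BIT) ** bits_read

for tb in (1, 2, 4, 6):
    print(f"RAID 5, 4+1 x {tb} TB: "
          f"{p_rebuild_read_error(4, tb):.0%} chance of a URE during rebuild")

Which is why, once you're past 1 TB consumer drives, RAID 6 (or raid-z2/z3)
starts to look a lot more attractive than RAID 5.
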
>>> Also, if you are using RAID, don't use Western Digital Green drives
>>> unless you plan to reflash the firmware to make them function like Red
>>> drives.
>>>
>>
>> Do you know if the WD Greens can still be reflashed? I have read: "The WD
>> Green drives did allow you to disable TLER up to a point, then WD caught on
>> that people were using these drives for RAID instead of their more
>> expensive enterprise-level drives and put an end to it. Now the popular
>> choice seems to be Hitachis, as they work in RAID arrays without any TLER
>> adjustments required."
>>
>>
>> Do you know if this is true or is it just some forum BS?
>>
>
> Check out this forum post:
> https://forums.freenas.org/index.php?threads/hacking-wd-greens-and-reds-with-wdidle3-exe.18171/
>
>>
>>
>>>
>>> My recommendation:
>>> All that being said, I am a big fan of FreeNAS, because it uses ZFS.
>>> Btrfs is great, but it's not quite where ZFS is yet (or at least that
>>> was the case when I last looked at it). If you go the ZFS route you
>>> REALLY need to read up on how ZFS pools and vdevs work, because what
>>> many people don't realize is that you can't just add a single disk to
>>> the pool: if you do, the whole pool fails if that single added disk
>>> fails. When you add drives, you have to add them in sets (whole vdevs).
>>>
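
To make the vdev rule concrete: a pool is a stripe across its top-level
vdevs, and losing any one top-level vdev loses the whole pool, so a lone
disk added as its own vdev has zero redundancy. A toy model of that rule
in Python (the layouts are hypothetical):

# The pool dies if ANY top-level vdev dies. A raidz2 vdev dies only when it
# loses more than 2 disks; a single-disk vdev dies with its one disk.

def vdev_ok(kind, failed_disks):
    parity = {"single": 0, "mirror2": 1, "raidz1": 1, "raidz2": 2, "raidz3": 3}[kind]
    return failed_disks <= parity

def pool_ok(vdevs):
    """vdevs: list of (kind, failed_disks) tuples, one per top-level vdev."""
    return all(vdev_ok(kind, failed) for kind, failed in vdevs)

print(pool_ok([("raidz2", 2)]))                  # True: 6-disk raidz2, 2 failures
print(pool_ok([("raidz2", 0), ("single", 1)]))   # False: the lone added disk died,
                                                 # and it took the whole pool with it
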
>> I talked to one of the btrfs devs once. That dev was extremely against
>> me talking about btrfs like it was ZFS. That being said, I still do not
>> know a lot about it. Being the dev they were, I am sure we were talking
>> about a lot of details that really do make a difference in the end. I
>> know I need to do more research, but at the time the dev was more focused
>> on making btrfs stable for enterprise than on features.
>>
>>
>>
>>> Personally, I'm paranoid about my data: I use ZFS raid-z3 on two vdevs,
>>> each vdev with five 2+ TB drives.
>>>
>> This is one of the reasons I use ZFS w/ ECC memory. It can detect bit
>> flips!
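
For reference, the usable space of that layout works out roughly like this
(a sketch that ignores ZFS metadata, padding, and free-space headroom):

# Two raidz3 vdevs of five drives each: every raidz3 vdev yields
# (disks - 3) drives' worth of data space.

drive_tb       = 2      # "5 2+TB drives" -- using 2 TB as the low end
disks_per_vdev = 5
vdevs          = 2
parity         = 3      # raidz3

data_disks = vdevs * (disks_per_vdev - parity)
print(f"raw:    {vdevs * disks_per_vdev * drive_tb} TB")
print(f"usable: {data_disks * drive_tb} TB  (each vdev survives 3 drive failures)")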
>>
>>
>>>
>>> Oh, and with ZFS you can use different-size drives, but you waste a
>>> good amount of space when you do that.
>>> And lastly, be sure to schedule scrubs of your drives, and do it in a
>>> way that a scrub will not run while a long SMART test is running. That
>>> can cause problems.
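
On the different-size-drives point: within a single vdev, every disk is
effectively treated as the size of the smallest member, which is where the
wasted space comes from. A quick illustration in Python (the disk sizes
are just example values):

# Inside one vdev, capacity is computed as if every disk were the size of
# the smallest one; the extra space on the larger disks goes unused.

def vdev_usable_tb(disk_sizes_tb, parity):
    data_disks = len(disk_sizes_tb) - parity
    return data_disks * min(disk_sizes_tb)

mixed = [2, 2, 3, 4, 6]      # mismatched drives in one raidz1 vdev
even  = [2, 2, 2, 2, 2]      # a matched set, same parity

print("mixed sizes:", vdev_usable_tb(mixed, parity=1), "TB usable of", sum(mixed), "TB raw")
print("matched set:", vdev_usable_tb(even,  parity=1), "TB usable of", sum(even),  "TB raw")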
>>>
>> Can I ask what type of problems?
>>
> According to the forums: "It isn't able to handle a scrub, offline test,
> and normal traffic well."
>
> https://forums.freenas.org/index.php?threads/scrub-and-smart-testing-schedules.20108/
>
> Sounds like a performance issue. The guy who made the post (cyberjock) is
> one of their forum admins, and he has helped me with some difficult
> questions, so I trust his opinion.
>
>
>>
>>> -Kirk
>>>
>>>