Re: [zfs-macos] Re: Office For Mac

    The leading members of the FreeNAS community make it clear 1 (with a detailed explanation and links to reports of data loss) that if you use ZFS without ECC RAM there is a very good chance that you will eventually experience a total loss of your data without any hope of recovery. 2 (Unless you have literally thousands of dollars to spend on that recovery. And even then there's no guarantee of said recovery.) The features of ZFS, checksumming and scrubbing, work together to silently spread the damage done by cosmic rays and/or bad memory throughout a file system, and this corruption then spreads to your backups.

    Actually, thinking about this some more, the real reason that this hypothetical horror scenario cannot actually happen in real life is that the checksum would never get recomputed from the improperly "corrected" data to begin with: The checksum for a given block is stored in its *parent* block (which itself has a checksum that is stored in its parent, and so on and so forth, all the way up to the uberblock), not in the block itself. Therefore, if a checksum failure is detected for a block, only the block itself will be corrected (and possibly corrupted as a result of a memory error), not its checksum (which is protected by the parent block's checksum).
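
    To make that structure concrete, here is a minimal Python sketch (invented names, not ZFS code) of a tree in which each parent stores its children's checksums, so that "repairing" a child's data never touches the checksum that protects it:

        import hashlib

        def checksum(data: bytes) -> bytes:
            # Stand-in for ZFS's per-block checksum (fletcher4 or SHA-256).
            return hashlib.sha256(data).digest()

        class Block:
            """A block whose children's checksums live here, in the parent."""
            def __init__(self, data: bytes, children=None):
                self.data = data
                self.children = children or []
                # The parent records a checksum for each child it points to.
                self.child_checksums = [checksum(c.data) for c in self.children]

            def verify_child(self, i: int) -> bool:
                # Validation always uses the checksum stored in the parent,
                # never anything stored alongside the (possibly corrupt) child.
                return checksum(self.children[i].data) == self.child_checksums[i]

        # A corrupted child fails verification against its parent, while the
        # parent's stored checksum is untouched by the corruption.
        child = Block(b"file contents")
        parent = Block(b"indirect block", children=[child])
        child.data = b"file c0ntents"      # simulate corruption
        assert not parent.verify_child(0)  # detected via the parent's checksum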

    Abstract: We present a study of the effects of disk and memory corruption on file system data integrity. Our analysis focuses on Sun's ZFS, a modern commercial offering with numerous reliability mechanisms.

    Through careful and thorough fault injection, we show that ZFS is robust to a wide range of disk faults. We further demonstrate that ZFS is less resilient to memory corruption, which can lead to corrupt data being returned to applications or system crashes. Our analysis reveals the importance of considering both memory and disk in the construction of truly robust file and storage systems.

    On Feb 26, 2014, at 10:51 PM, Daniel Becker wrote: Incidentally, that paper came up in a ZFS-related thread on Ars Technica just the other day (as did the link to the FreeNAS forum post). Let me just quote what I said there: The conclusion of the paper is that ZFS does not protect against in-memory corruption, and thus can't provide end-to-end integrity in the presence of memory errors. I am not arguing against that at all; obviously you'll want ECC on your ZFS-based server if you value data integrity - just as you would if you were using any other file system. That doesn't really have anything to do with the claim that ZFS specifically makes lack of ECC more likely to cause total data loss, though. The sections you quote below basically say that while ZFS offers good protection against on-disk corruption, it does *not* effectively protect you against memory errors.

    As far as I understand things, as long as your OS X ZFS version is compatible with your Solaris ZFS version you're fine. If the OS X ZFS version is too old (likely) you may have to do an NFS or SMB mount, both of which are not optimal.

    Or, put another way, the authors are basically finding that despite all the FS-level checksumming, ZFS does not render ECC memory unnecessary (as one might perhaps naively expect). No claim is being made that memory errors affect ZFS more than other filesystems. Just like anything else, end-to-end data integrity is needed. So until people write apps that self-check everything, there is a possibility that something you trust 1 can fail. As it happens, only the PC market demands no ECC.

    1 - richard

    billw 27/2/2014, 9:42 น. Why is this a zfs issue?

    On Feb 28, 2014, at 1:32 AM, Philip Robar wrote: On Thu, Feb 27, 2014 at 11:42 AM, Bill Winnett wrote: Why is this a zfs issue. This is a ZFS issue because ZFS is advertised as being the most resilient file system currently available; however, a community leader in the FreeNAS forums (though, as pointed out by Daniel Becker, one without knowledge of ZFS internals) has argued repeatedly, strongly, and in detail that this robustness is severely compromised by using ZFS without ECC memory. Further, he argues that ZFS without ECC memory is more vulnerable than other file systems to data corruption and that this corruption is likely to silently cause complete and unrecoverable pool failure.

    This in turn, if true, is an issue because ZFS is increasingly being used on systems that either are not using or cannot use ECC memory. We might buy this argument if, in fact, no other program had the same vulnerabilities. But *all* of them do - including OS X.

    So it is disingenuous to claim this as a ZFS deficiency. -- richard

    Philip Robar 1/3/2014, 14:39 น. I have been running ZFS in production using the past and current versions for OS X on over 60 systems (12 are servers) since Apple kicked ZFS loose. No systems (3 run ECC) have had data corruption or data loss. Some pools have disappeared on the older ZFS but were easily recovered on modern (current development) and past OpenSolaris, FreeBSD, etc., as I keep clones of 'corrupted' pools for such tests. Almost always, these were the result of connector/cable failure. In that span of time no RAM has failed 'utterly' and all data and tests have shown quality storage.

    In that time 11 drives have failed and been easily replaced; 4 of those were OS drives, with data stored under ZFS and a regular clone of the OS also stored under ZFS just in case. All pools are backed up/replicated off site.

    Probably a lot more than most are doing for data integrity.

    On 28 February 2014 20:32, Philip Robar wrote: This is a ZFS issue because ZFS is advertised as being the most resilient file system currently available; however, a community leader in the FreeNAS forums (though, as pointed out by Daniel Becker, one without knowledge of ZFS internals) has argued repeatedly, strongly, and in detail that this robustness is severely compromised by using ZFS without ECC memory.

    Further, cyberjock is the biggest troll ever; not even the people actually involved with FreeNAS (iXsystems) know what to do with him.

    He does spend an awful amount of time on the FreeNAS forums helping others, and people tolerate him on that basis. Otherwise, he's just someone doing nothing, with a lot of time on his hands, spewing the same stuff over and over simply because he has heard about it.

    Back to the ECC topic; one core issue with ZFS is that it will specifically write to the pool even when all you are doing is reading, in an attempt to correct any data found to have an incorrect checksum.

    So say you have corrupted memory: you read from the disk, ZFS believes the data is faulty (after all, the checksum will be incorrect due to faulty RAM) and starts to rewrite the data. That is one scenario where ZFS will corrupt an otherwise healthy pool until it's too late and all your data is gone. As such, ZFS is indeed more sensitive to bad RAM than other filesystems. Having said that, find me *one* official source other than the FreeNAS forum stating that ECC is a minimal requirement (and no, a wiki written by cyberjock doesn't count). Solaris never said so, FreeBSD didn't either, nor Sun. Bad RAM, however, has nothing to do with the occasional bit flip that would be prevented by using ECC RAM.
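
    Purely as a sketch of the scenario being claimed here (hypothetical Python with invented names, and note that later posts in this thread dispute that the real repair path works this way): if the buffer or its checksum passes through bad RAM, a read can look like a checksum failure and trigger a "repair" write of whatever happens to be in memory:

        import hashlib

        def checksum(data: bytes) -> bytes:
            return hashlib.sha256(data).digest()

        def read_with_selfheal(read_block, write_block, expected_checksum, flip_in_ram=None):
            """Hypothetical read path: verify against the stored checksum and
            rewrite the block if verification fails ("self-healing")."""
            buf = bytearray(read_block())
            if flip_in_ram is not None:        # simulate a bit flip in bad RAM
                buf[flip_in_ram] ^= 0x01
            if checksum(bytes(buf)) != expected_checksum:
                # With healthy RAM this would mean the on-disk copy is bad.
                # With bad RAM, the on-disk copy may be fine and this write
                # pushes the corrupt in-memory buffer back to the pool.
                write_block(bytes(buf))
            return bytes(buf)

        # The on-disk data is good, but a flip in memory triggers a rewrite.
        disk = {"blk": b"perfectly good data"}
        good_sum = checksum(disk["blk"])
        read_with_selfheal(lambda: disk["blk"], lambda b: disk.update(blk=b),
                           good_sum, flip_in_ram=3)
        print(disk["blk"])  # no longer the original bytes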

    The probability of a bit flip is low, very low. Back to the OP, I'm not sure why he felt he had to mention being part of SunOS.

    ZFS was never part of SunOS. JY

    Philip Robar 1/3/2014, 23:32 น. Back to the ECC topic; one core issue with ZFS is that it will specifically write to the pool even when all you are doing is reading, in an attempt to correct any data found to have an incorrect checksum. So say you have corrupted memory: you read from the disk, ZFS believes the data is faulty (after all, the checksum will be incorrect due to faulty RAM) and starts to rewrite the data. That is one scenario where ZFS will corrupt an otherwise healthy pool until it's too late and all your data is gone.

    As such, ZFS is indeed more sensitive to bad RAM than other filesystems.

    Most Sun/Solaris documentation isn't going to mention the need for ECC memory because all Sun systems shipped with ECC memory. FreeBSD/PC-BSD/FreeNAS/NAS4Free/Linux in turn derive from worlds where ECC memory is effectively nonexistent, so their lack of documentation may stem from a combination of the ZFS folks just assuming that you have it and the distro people not realizing that you need it. FreeNAS's guide does state pretty strongly that you should use ECC memory. But if you insist: from 'Oracle Solaris 11.1 Administration: ZFS File Systems', 'Consider using ECC memory to protect against memory corruption. Silent memory corruption can potentially damage your data.'

    I'm sorry, but I'm not following your logic here.

    Are you saying that ZFS doesn't use RAM so it can't be affected by it? ZFS likes lots of memory and uses it aggressively. So my understanding is that large amounts of data are more likely to be in memory with ZFS than with other file systems. If Google's research is to be believed, then random memory errors are a lot more frequent than you think they are. As I understand it, ZFS does not checksum data while it's in memory.

    (While there is a debug flag to turn this on, I'm betting that the performance hit is pretty big.) So how do RAM failures or random bit flips have nothing to do with ZFS?

    Some pools have disappeared on the older ZFS but were easily recovered on modern (current development) and past OpenSolaris, FreeBSD, etc., as I keep clones of 'corrupted' pools for such tests. Almost always, these were the result of connector/cable failure. In that span of time no RAM has failed 'utterly' and all data and tests have shown quality storage. In that time 11 drives have failed and easily been replaced, 4 of those were OS drives, data stored under ZFS and a regular clone of the OS also stored under ZFS just in case. All pools are backed up/replicated off site. Probably a lot more than most are doing for data integrity.

    You keep claiming this, but I still haven't seen any conclusive evidence that lack of ECC poses a higher overall risk for your data when using ZFS than with other file systems.

    On Mar 2, 2014, at 2:33 AM, Bjoern Kahl wrote: On the other side you say (only) 8% of all DIMMs are affected per *year*. I *guess* (and might be wrong) that the majority of installed DIMMs nowadays are 2 GB DIMMs, so you need four of them to build 8 GB. Assuming equal distribution of bit errors, this means on average *every* DIMM will experience 1 bit error per hour. That doesn't fit.

    The disconnect is in the fact that they are not uniformly distributed at all; see my other email. Some (bad) DIMMs produce tons of errors, while the vast majority produce none at all. Quoting the averages is really kind of misleading.

    Chris Ridd 2/3/2014, 4:49 น. On 2 Mar 2014, at 09:16, Philip Robar wrote: On Sat, Mar 1, 2014 at 5:07 PM, Jason Belec wrote: RAM/ECC RAM is like consumer drives vs pro drives in your system; recent long-term studies have shown you don't get much more for the extra money.

    Do you have references to these studies? This directly conflicts with what I've seen posted, with references, in other forums on the frequency of soft memory errors, particularly on systems that run 24x7, and how ECC memory is able to correct these random errors.

    I don't have any reference to Jason's claims about ECC, but recently Backblaze published some stats on their experiences with a variety of drives.

    Jason might have been thinking about these: They have lots more related articles on their blog that are well worth a read. Chris

    Eric Jaw 31/3/2014, 14:23 น.

    I completely agree. I'm experiencing these issues currently. Doing a scrub is just obliterating my pool. I just started using ZFS a few weeks ago. Thanks for the idea! I used all new SATA cables when I built this. I have no idea what's causing this, so I posted some more details here: @Daniel Becker has a very good point about how I have the disks set up. I'll have to look into that some more.

    On Wednesday, February 26, 2014 8:56:50 PM UTC-5, Philip Robar wrote: Please note, I'm not trolling with this message.

    I worked in Sun's OS/Net group and am a huge fan of ZFS. The leading members of the FreeNAS community make it clear 1 (with a detailed explanation and links to reports of data loss) that if you use ZFS without ECC RAM there is a very good chance that you will eventually experience a total loss of your data without any hope of recovery. 2 (Unless you have literally thousands of dollars to spend on that recovery. And even then there's no guarantee of said recovery.) The features of ZFS, checksumming and scrubbing, work together to silently spread the damage done by cosmic rays and/or bad memory throughout a file system, and this corruption then spreads to your backups. Given this, aren't the various ZFS communities (particularly those that are small-machine oriented 3) other than FreeNAS (and even they don't say it strongly enough in their docs) doing users a great disservice by implicitly encouraging them to use ZFS w/o ECC RAM or on machines that can't use ECC RAM?

    As an indication of how persuaded I've been of the need for ECC RAM, I've shut down my personal server and am not going to access that data until I've built a new machine with ECC RAM. Phil

    1 ECC vs non-ECC RAM and ZFS:

    2 cyberjock: 'So when you read about how using ZFS is an 'all or none' I'm not just making this up. I'm really serious as it really does work that way. ZFS either works great or doesn't work at all. That really truthfully is how it works.'

    3 ZFS-macos, NAS4Free, PC-BSD, ZFS on Linux

    -- Bjoern Kahl, Siegburg, Germany. Languages: German, English, Ancient Latin (a bit :-))

    Eric Jaw 31/3/2014, 19:41 น. Thanks for the response! Here's some more details on the setup: I started using ZFS about a few weeks ago, so a lot of it is still new to me. I'm actually not completely certain about 'proper procedure' for repairing a pool. I'm not sure if I'm supposed to clear the errors before or after the scrub (little things). I'm not sure if it even matters. When I restarted the VM, the checksum counts cleared on their own.

    I wasn't expecting to run into any issues. But I drew part of my conclusion from the high numbers of checksum errors that never happened until I started reading from the dataset, and that number went up into the tens when I scrubbed the pool, almost doubling when scrubbed for a second time.

    The long and the short of it is that most likely you have a failing disk or controller/connector more than anything. I used to run an 8-disk, 4-mirrored-pair pool on a small box without good airflow and slow SATA-150 controllers that were supported by Solaris 10. I ended up replacing the whole system with a new large box with 140mm fans as well as SATA-300 controllers to get better cooling.

    Over time, every disk has failed because of heat issues. Many of my SATA cables failed too. They were cheap junk. Equipment has to be selected carefully. I have not seen any failing bits in the 3+ years that I have been running on the new hardware, with all of the disks having been replaced 2 years ago, so I have made no changes for the past 2 years. All is good for me with ZFS and non-ECC RAM.

    If I build another system, I will build a new system with ECC RAM and will get new controllers and new cables just because. My current choice is to use ZFS on Linux, because I haven't had a disk array/container that I could hook up to the Macs in the house. My new ZFS array might end up being Mac Pro based with some of the Thunderbolt-based disk carriers. I have about 8TB of stuff that I need to be able to keep safe. Amazon Glacier is on my radar. At some point I may just get a 4TB USB 3.0 drive to copy stuff to and ship off to Glacier.

    Gregg

    Daniel Becker 31/3/2014, 21:13 น. My oldest system running ZFS is a Mac Mini Intel Core Duo with 3GB RAM (not ECC); it is the home server for music, TV shows, movies, and some interim backups.


    The mini has been modded for eSATA and has 6 drives connected. The pool is 2 RAIDZ of 3, mirrored, with copies set at 2. It's been running since ZFS was released from Apple builds. I lost 3 drives, eventually traced to a new cable that had cracked at the connector, which when hot enough expanded, lifting 2 pins free of their connector counterparts and resulting in errors.

    It was visually almost impossible to see. I replaced port multipliers, eSATA cards, RAM, minis, the power supply, reinstalled the OS, reinstalled ZFS, restored ZFS data from backup, and finally found the bad connector end only because it was hot and felt 'funny'.

    The long and the short of it is that most likely you have a failing disk or controller/connector more than anything.

    I used to run an 8-disk, 4-mirrored-pair pool on a small box without good airflow and slow SATA-150 controllers that were supported by Solaris 10. I ended up replacing the whole system with a new large box with 140mm fans as well as SATA-300 controllers to get better cooling. Over time, every disk has failed because of heat issues. Many of my SATA cables failed too. They were cheap junk.

    I have my HDDs at a steady 40 degrees or below. I thought about replacing the SATA cables, but I have two drives using new ones and the rest using old ones, and from the checksum errors I'm seeing, it would mean all the cables need replacing, which I don't believe could be the case in this build.

    A failing disk controller on all four drives that were barely used? I have higher confidence in HDD production than that. I feel certain it's something else, but thank you for your input. I'll keep it as a consideration if all else fails. I'm running this all through a VM, which is where I believe the issue could be, but we need to figure out why and how to work around it if this is the case.

    I'm not sure it's come across particularly well in this thread, but ZFS doesn't and can't cope with hardware that's so unreliable that it tells lies about basic things, like whether your writes have made it to stable storage, or doesn't mind the shop, as is the case with non-ECC memory.

    It's one thing when you have a device reading back something that doesn't match the checksum, but it gets uglier when you've got a single I/O path and a controller that seems to write the wrong bits in stride (I've seen this), or when the problems are even closer to home (and again I emphasise RAM). You may not have problems right away. You may have problems where you can't tell the difference, like flipping bits in data buffers that have no other integrity checks. But you can run into complex failure scenarios where ZFS has to cash in on guarantees that were rather more approximate than what it was told, and then it may not be a case of having some bits flipped in photos or MP3s, but of no longer being able to import your pool, or of having someone who knows how to operate zdb do some additional TXG rollback to get your data back after losing some updates.

    RAID0 (Host):
    HDD0 -> PhysicalDrive0 -> raw vmdk -> PhysicalDrive0.vmdk
    HDD1 -> PhysicalDrive1 -> raw vmdk -> PhysicalDrive1.vmdk
    HDD2 -> PhysicalDrive2 -> raw vmdk -> PhysicalDrive2.vmdk
    HDD3 -> PhysicalDrive3 -> raw vmdk -> PhysicalDrive3.vmdk
    HDD4 -> PhysicalDrive4 -> raw vmdk -> PhysicalDrive4.vmdk
    HDD5 -> PhysicalDrive5 -> raw vmdk -> PhysicalDrive5.vmdk
    Guest: PhysicalDrive0.vmdk.

    On Apr 2, 2014, at 1:38 PM, Daniel Becker wrote: The only time this should make a difference is when your host experiences an unclean shutdown / reset / crash.

    On Apr 2, 2014, at 8:49 AM, Eric wrote: Not true. ZFS flushes also mark known states. If the ZFS stack issues a flush and the system returns, it uses that as a guarantee that that data is now on disk. Later writes will assume that the data was written, and if the hard drive later changes the write order (which some disks will do for performance) things break. You can have issues if any part of the disk chain lies about the completion of flush commands.
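
    For anyone less familiar with why flush ordering matters, here is a minimal, hypothetical sketch (plain Python with POSIX file I/O, not ZFS code) of the commit pattern being described: write the new data, flush, and only then write the small record that makes the new state live. If the device reorders writes across a flush it claimed to complete, the pointer can land on disk before the data it points to:

        import os

        def commit(fd: int, data: bytes, data_off: int, pointer: bytes, ptr_off: int):
            """Write data, flush, then write the record that references it.
            The flush is the ordering barrier the whole scheme relies on."""
            os.pwrite(fd, data, data_off)
            os.fsync(fd)                  # barrier: data must be durable first
            os.pwrite(fd, pointer, ptr_off)
            os.fsync(fd)                  # now the new state is the live state

        # Usage sketch (an ordinary file standing in for a disk):
        fd = os.open("pool.img", os.O_RDWR | os.O_CREAT, 0o644)
        commit(fd, b"new block contents", 4096, b"ptr->4096", 0)
        os.close(fd)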

    Daniel Becker 2/4/2014, 17:37 น. On Apr 2, 2014, at 3:08 PM, Matt Elliott wrote: Not true. ZFS flushes also mark known states.

    If the ZFS stack issues a flush and the system returns, it uses that as a guarantee that that data is now on disk.

    However, that guarantee is only needed to ensure that on-disk data is consistent even if the contents of the cache are lost, e.g. due to sudden power loss. A disk cache never just loses dirty data in normal operation.

    Later writes will assume that the data was written and if the hard drive later changes the write order (which some disks will do for performance) things break. You can have issues if any part of the disk chain lies about the completion of flush commands.

    What would break, in your opinion? Again, as long as you don't somehow lose the contents of your cache, it really doesn't matter at all what's physically on the disk and what's still in the cache.

    Eric Jaw 2/4/2014, 21:03 น. The Wikipedia article, correctly summarising the Google study, is plain in saying not that extremely high error rates are common but that error rates are highly variable in large-sample studies, with some systems seeing extremely high error rates. ECC gives you a significant assurance for an incremental cost, so what's your data worth?

    You're not guaranteed to be screwed by not using ECC (and the Google paper doesn't say this either), but you are assuming risks that ECC mitigates. Look at the above blog, however: even DIMMs that are high-quality but non-ECC can go wrong and result in nasty system corruption. I also think it's mistaken to say this is distinctly a problem with ZFS.

    Any 'next-generation' filesystem that provides protections against on-disk corruption via checksums ends up with a residual risk focused on making sure that in-core data integrity is robust. You could well have those problems on the pools you've deployed, and there are a lot of situations in which you'd never know and quite a lot (such as most of the bits in a photo or MP3) where you'd never notice low rates of bit-flipping. The fact that you haven't noticed doesn't equate to there being no problems in a strict sense; it's far more likely that you've been able to tolerate the flipping that's happened. The guy at Sun with the blog above got lucky: he was running high-quality non-ECC RAM, and it went pear-shaped, at least as far as metadata cancer goes, quite quickly, allowing him to recover by rolling back snapshots. Take a look out there, and you'll find people who are very confused about the risks and available mitigations.

    I found someone saying that there's no problem with more traditional RAID technologies because disks have CRCs. By comparison, you can find Bonwick, educated as a statistician, talking about SHA256 collisions by comparison to undetected ECC error rates and introducing ZFS data integrity safeguards by way of analogy to ECC. That's why the large-sample studies are interesting and useful: none of this technology makes data corruption impossible, it just goes to extreme lengths to marginalise the chances of those events by addressing known sources of errors and fundamental error scenarios. In-core is so core that if you tolerate errors there, those errors will characterize systematic behaviour where you have better outcomes reasonably available (and that's *reasonably* available, I would suggest, in a way that the Madison paper's recommendation to make ZFS buffers magical isn't). CRC-32 does a great job detecting bad sectors and preventing them from being read back, but SHA256 in the right place in a system detects errors that a well-conceived vdev topology will generally make recoverable.

    That includes catching cases where an error isn't caught by CRC-32, which may be a rare result, but when you've got the kind of data densities that ZFS can allow, you're rolling the dice often enough that those results become interesting.

    My oldest system running ZFS is a Mac Mini Intel Core Duo with 3GB RAM (not ECC); it is the home server for music, TV shows, movies, and some interim backups.

    The mini has been modded for eSATA and has 6 drives connected. The pool is 2 RAIDZ of 3, mirrored, with copies set at 2. It's been running since ZFS was released from Apple builds. I lost 3 drives, eventually traced to a new cable that had cracked at the connector, which when hot enough expanded, lifting 2 pins free of their connector counterparts and resulting in errors. It was visually almost impossible to see. I replaced port multipliers, eSATA cards, RAM, minis, the power supply, reinstalled the OS, reinstalled ZFS, restored ZFS data from backup, and finally found the bad connector end only because it was hot and felt 'funny'.

    So to summarize that article, 'using ECC memory is safer than not using ECC memory.'

    I don't think this was ever in doubt. Note that he does *not* talk about anything like the hypothetical 'a scrub will corrupt all your data' scenario (nor is anything like that mentioned in his popular 'ZFS: Read Me 1st' article); in fact, the only really ZFS-specific point that he raises at all is the part about dirty data likely being in memory (= vulnerable to bit flips) for longer than it would be in other file systems.

    If you feel this is necessary, go for it. Those that have systems without ECC should, by your point of view, just run around like the sky is falling. That said, I can guarantee none of the systems I have under my care have issues. How do I know? Well, the data is tested/compared at regular intervals. Maybe I'm the luckiest guy ever; where is that lottery ticket?

    Is ECC better? Possibly, probably in heavy-load environments, but no data has been provided to back this up, especially nothing in the context of what most users' needs are, at least here in the Mac space. They are not all the same. Just like regular RAM is not all the same. Just like HDDs are not all the same. Fear mongering is wonderful and easy.

    Putting forth a solution guaranteed to be better is what's needed now. Did you actually reference a wiki? A document anyone can edit to suit their view? I guess I come from a different era.

    Although I moved from OS X to illumos as a primary platform precisely because of ZFS (I ended up posting to the original list about the demise of the project because I happened to be doing an install the week Apple pulled the plug), I've spent enough time with OS X, including debugging storage interop issues with NexentaStor in significant commercial deployments, that it's risible to suggest I have zero knowledge of the platform and even more risible to imply that the role of ECC in ZFS architecture is here somehow fundamentally a matter of platform variation. I've pointed to a Solaris engineer showing core dumps from non-ECC RAM and reporting data corruption as a substantiated instance of ECC problems, and I've pointed to references to how ECC serves as a point of reference from one of its co-creators. I've explained that ECC in ZFS should be understood in terms of the scale it allows and the challenges that creates for data integrity protection, and I've tried to contrast the economics of ECC with what I take to be a less compelling alternative sketched out by the Madison paper.

    At the same time as I've said that ECC use is generally assumed in ZFS, I've allowed that doing so is a question of an incremental cost against the value of your data and the costs to replace it. I don't understand why you've decided to invest so much in arguing that ECC is so completely marginal a data integrity measure that you can't have a reasonable discussion about what gets people to different conclusions, and feel the need to be overtly dismissive of the professionalism and expertise of those who come to fundamentally different conclusions, but clearly there's not going to be a dialogue on this. My only interest in posting at this point is so that people on this list at least have a clear statement of both ends of the argument and can judge for themselves.

    It sounds like people are missing the forest for the trees. Some of us have been successfully RAIDing/deploying storage for years on everything from IDE vinum to SCSI XFS and beyond without ECC. We use ZFS today because of its featureset. Data integrity checking through checksumming is just one of those features, one which mitigates some issues that other file systems have historically failed to address.

    (Otherwise we should all be happy with existing journaling filesystems on a soft or hard RAID.) ECC just adds another layer of mitigation (and even in a less implementation-specific way, like how ZFS may 'prefer' raw device access instead of whatever storage abstraction the controller is presenting). Asserting that ECC is 'required' has about the same logic to it (and I would say less logic to it) as asserting that a 3ware controller with raw JBOD passthrough is 'required'. On Sat, Apr 12, 2014 at 7:46 AM, Bayard Bell

    Rob Lewis 12/4/2014, 17:47 น. This has been quite the interesting thread. Way back long ago when I was doing graduate work in microarchitecture (aka processor design), there were folks who wanted to put an x86 processor in a satellite.

    x86, especially at the time, was totally NOT qualified for use in space. The Pentium chip (way back) had this really cool feature: a single bit flip (e.g., a transient fault from an alpha particle strike) would deadlock the processor cold if the correct bit in the reservation queue got toggled.

    So why the little story? Because people who really care about their computation, for the longest time, didn't use x86 processors. They used IBM mainframe processors, SPARC chips, etc. Because, at least 10 years ago, the ALUs in x86 chips had *zero* protection. So while there may have been memory protection, the results of the ALU were completely unprotected. PowerRISC, SPARC, PA-RISC, etc. at least all had parity-protected ALUs. Parity can't correct the calculation, but it can detect a single-bit fault. If you really want to protect your data end-to-end, you likely still need to buy a better class of machine. It might now be included in x86-class processors, but I can't find anything that says the ALUs are protected. The old adage, 'you get what you pay for', still applies.

    If you're interested, you can read about Fujitsu's SPARC64 data protection:. And I know this type of technology is in things like PowerRISC chips; IBM's mainframe line has had ECC-protected ALUs for a long time (which I've never spent the time to figure out how they work).

    Back then, I kind of saw what he meant, but the funny part is that nowadays, it's as if his school of thought is being obsoleted by the reality around us.

    It's kind of valid to say that x86 chips are not 'proper', but the reality is that 95% of the Internet runs on the bloody things. Twenty years ago, there were things like SPARC servers, Silicon Graphics workstations, and all that. Now it's all just PCs.

    PCs that fit in your handbag, PCs that fit under your desk, PCs that fit in a server rack. It's still just PCs.

    I'm pretty sure that the x86 architecture has had ALU error correction for a while now. I know my AMD X2s had L1 and L2 ECC, and I think the ALU was protected (though I wouldn't swear to that). However, looking at an Intel white paper on the Xeon E7 family reliability features, it says: 'E7 family provides internal on-die error protection to protect processor registers from transient faults, and enables dynamic processor sparing and migration in the case of a failing processor.'

    In fact the overall architecture looks like robustness was a top-priority concern. If you'd like to read the paper you can find it here:

    Bu Jin 3/3/2016, 9:56 น. I know this is an old thread, but I didn't see where you ever got word back from the OpenZFS dev team, and this is an issue I feel needs to be addressed. I am a software engineer, and I have many years of experience working with ZFS. Admittedly I have not worked on ZFS development myself, but I am familiar with the sort of data structures and processes used by ZFS.

    I'm very skeptical of this idea of 'ZFS cancer', as I would call it, where ZFS's self-healing routines become poisonous and start corrupting the entire filesystem due to a data error which occurs in memory. Now this is a very complicated subject, because there is a lot to take into consideration, but let us consider only the data for a moment. ZFS uses an implementation of what in computer science is called a self-validating Merkle tree, where each node is validated by a hash from its parent node all the way back up to the uberblock (the root node), which is then duplicated elsewhere. The proposed cancer scenario is that there is an in-memory error which affects the data in question and in turn causes a checksum invalidation to occur, and so ZFS starts self-healing and writing the corrupted data all over the system. However, this is not how this works. Before ZFS corrects a single block of corrupted data, it first finds a validated copy.

    That means there has to be redundant data. If you are running ZFS on a single drive in a standard configuration, without block duplication or a split volume, you only have one copy of the data, which means self-healing doesn't even turn on. Now let's assume you are running a mirror, or RAIDZ-1/2/3, where you have duplicate data, and ZFS detects data corruption due to a hash failure. Before ZFS starts healing itself it will try to find a valid copy of the data, by looking at the redundant data and doing hash validation on it. The data must pass this hash validation in order to be propagated. So now you need a second failure where the redundant data is also wrong, but moreover the data has to also pass the validation, which would require a hash collision (a collision is where you have different data that hashes to the same value). The odds of this are astronomical!!!
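
    A minimal, hypothetical sketch of that repair decision (invented names, not the actual ZFS code path): a redundant copy is only used to heal a damaged block if it itself validates against the checksum stored in the parent:

        import hashlib

        def checksum(data: bytes) -> bytes:
            return hashlib.sha256(data).digest()

        def self_heal(copies, parent_checksum, rewrite):
            """Return a good copy and repair the bad ones; never propagate
            data that does not match the checksum recorded in the parent."""
            good = next((c for c in copies if checksum(c) == parent_checksum), None)
            if good is None:
                raise IOError("unrecoverable: no copy matches the parent checksum")
            for i, c in enumerate(copies):
                if checksum(c) != parent_checksum:
                    rewrite(i, good)      # heal using the validated copy only
            return good

        # Mirror example: one side corrupt, one side good.
        copies = [b"corrupted side", b"original data"]
        parent_sum = checksum(b"original data")
        self_heal(copies, parent_sum, lambda i, d: copies.__setitem__(i, d))
        assert copies == [b"original data", b"original data"]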

    But assuming you have a checksum failure, which triggers a self-healing operation, which then finds a corrupted piece of data which also managed to pass the hash check, then yes, it would replicate that data. However, it would only replicate the error for that one block, because every block is hashed individually. Hardly destroying your entire data set!

    So it would take a gross set of improbabilities for ZFS to decide to corrupt the single block containing your 32nd picture of Marilyn Monroe. If ZFS was going to corrupt a second block we'd have to repeat all of this! The above assumes errors in the data itself, the MORE LIKELY case to succeed, if you can believe it. Now let's assume an error in the hash. Well, each hash is hashed against its parent node. So the faulty hash sum would need a hash collision with its parent node's hash!

    That is especially difficult, because there are fewer possible collisions in a 1:1 relationship than in, say, a 1:100 relationship. But even assuming somehow you manage to have a successful collision, you still fall back into the above scenario, where you now need to find data that successfully matches the hash, so you now need a second collision, and again, that's for a single block of data! That's to say nothing of the fact that you will have a hash mismatch between the original corrupted hash and the hash of the prospective replacement data. So the system will realize at that point there is a problem, and will move into tie-breaker routines in order to sort out the issue. I don't even see a path where this ultimately manages to propagate.

    I've read a fair amount of this thread and a lot of stuff has been thrown around which seems poorly understood. Like someone mentioned Jeff Bonwick's comments on SHA256.

    However, these comments are really tied to the deduplication feature (which I highly recommend not using unless you have a VERY good reason to) when you have data validation disabled (data validation being where ZFS checks to make sure duplicates are actually duplicates instead of simply going off of the hash). SHA256 is extreme overkill for block-level validation; in fact MD5 would be extreme overkill, which is why the original ZFS implementations used CRC (if I remember correctly, it's been a while), though now I believe ZFS defaults to fletcher (fletcher4?). However, if you were to use SHA256 (which you can specify), all of the above becomes multiple orders of magnitude more remote!

    OK, so that addresses all of the data-related corruption problems. Let's say you have a memory error (be it in the system RAM, CPU cache, the ALU registers, etc.) that actually affects ZFS's algorithms and routines themselves. 4) But let's assume the error gets past all of the above considerations and actually causes ZFS to perform operations outside of spec, such as bypassing hash validation. This means the validation code would never be triggered, and thus the self-healing would never take place!
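
    As an aside, to give a concrete sense of the two checksums being compared here, this is a small Python sketch of a fletcher-4 style running-sum checksum (my own simplified rendering, not code taken from ZFS) next to a SHA256 of the same block:

        import hashlib
        import struct

        MASK64 = (1 << 64) - 1

        def fletcher4(data: bytes):
            """Fletcher-4 style checksum: four running sums over 32-bit
            little-endian words, each kept to 64 bits (simplified sketch)."""
            if len(data) % 4:
                data += b"\x00" * (4 - len(data) % 4)  # pad for illustration
            a = b = c = d = 0
            for (w,) in struct.iter_unpack("<I", data):
                a = (a + w) & MASK64
                b = (b + a) & MASK64
                c = (c + b) & MASK64
                d = (d + c) & MASK64
            return (a, b, c, d)

        block = b"example block contents" * 100
        print(fletcher4(block))                   # cheap, catches random flips
        print(hashlib.sha256(block).hexdigest())  # stronger, e.g. for dedup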

    So even though the system would then be vulnerable to new errors coming in, it wouldn't be replicating them. Again, even if the system wanted to replicate errors, it would be on a block-by-block basis. You'd have to have massive coordinated errors to the ZFS routines for it to go into a runaway destroy-the-data condition, but then similar failures could happen to any system process (processes that aren't anywhere near as hardened, and which constitute a larger amount of memory usage, and thus a larger threat vector). It's actually more likely that some other piece of software would be corrupted in such a way as to tell ZFS to do bad things, such as delete this or that, or pass ZFS bad data to start with. Say you're working on editing a picture and it's corrupted while in the editor and you save; well, obviously ZFS won't fix that. Or say that you are accessing data via Samba; well, if Samba hands ZFS corrupt data, ZFS won't fix that.

    There are so many ways corrupted data could be handed to ZFS that ZFS would just see as data. Like, say, the data is corrupted while it's crossing the network, where all you have to get past are the relatively weak TCP safeguards (which use a CRC). (Though honestly TCP is pretty darn safe, which should really say something about how much better ZFS is!) ZFS's failsafes only kick in AFTER ZFS has the data, so any corruption created by the system's use of the data wouldn't be protected against. This is where the data corruption happens in most cases. Really, not only is ZFS not more dangerous under unprotected memory conditions, ZFS is in fact a more secure file system under all use cases, including unprotected memory. ZFS does provide corruption resistance, even from memory errors, ASSUMING the corruption takes place while ZFS is safeguarding the data (if the corruption happens elsewhere in the system, and the data is then passed back to ZFS, ZFS will simply see it as an update). Because of ZFS's multistep data validation process, ZFS is less likely to get into a runaway data destruction condition than other filesystem approaches, which don't have those steps that must be traversed before writes occur.
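
    Since ZFS can only vouch for data from the moment it receives it, an application that cares can add its own end-to-end check before handing data over. Here is a minimal, hypothetical sketch (invented function names) of carrying a digest alongside the data and verifying it just before writing:

        import hashlib

        def package(payload: bytes):
            # Sender computes a digest while the data is still known-good.
            return payload, hashlib.sha256(payload).hexdigest()

        def verify_before_write(payload: bytes, digest: str) -> bytes:
            # Receiver re-checks just before writing; a mismatch is caught here
            # even though it happened before ZFS ever saw the data.
            if hashlib.sha256(payload).hexdigest() != digest:
                raise ValueError("payload corrupted in transit or in memory")
            return payload

        data, digest = package(b"photo bytes ...")
        verify_before_write(data, digest)  # raises if corrupted before storage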

    Further, because of ZFS's copy-on-write nature, even if ZFS did get into such a state, recovery is MUCH easier (especially if prudent safeguards are established) because ZFS isn't going to write over the block in question, and so the data is still there to be recovered. As an aside: I have found myself in truly nasty positions using ZFS beta code, where I ended up with a corrupted pool (I was working with early deduplication code), and still managed to recover the data! ZFS's built-in data recovery tools are truly extraordinary!

    With all of that said, if you are building a storage server, where the point is to store data, and you are selecting ZFS specifically for its data integrity functionality, you are crazy if you don't buy ECC memory, because you need to protect not only ZFS but all of the surrounding software. Because, as noted above, external software can corrupt data, and when it is handed back to ZFS it will look like regular data. Also, this improves overall system reliability, and ECC memory isn't that expensive.

    Bu Jin 20/3/2016, 23:46 น.
