* Posts by Rebecca M

29 publicly visible posts • joined 25 Oct 2010

Wikipedia to go all HTTPS, all the time

Rebecca M

Playing to the gallery

No doubt this will be hugely popular among editors as a way of sticking it to the man but my initial thoughts are simply "Why?"

The technical arguments are against it - Wikipedia is already an incredibly slow site thanks to the seemingly infinite number of templates and other elements making up each and every page. If a page consists of 100 or more separate elements (fairly common for WP pages) adding even a small overhead to each and every one adds up to a large amount of additional sluggishness. Reducing the cacheability of the site makes matters even worse.
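As a rough back-of-envelope for that per-element argument - every figure below is an assumption for illustration, not a measurement of Wikipedia itself:

    # Rough back-of-envelope: assumed figures only, not measurements.
    elements_per_page   = 100     # separate elements on a typical article, as above
    new_connections     = 10      # assume the browser opens ~10 fresh connections per page
    tls_handshake_ms    = 50      # assumed extra latency per new TLS connection
    cache_miss_fraction = 0.3     # assumed share of elements no longer served from shared caches

    extra_latency_ms  = new_connections * tls_handshake_ms
    extra_origin_hits = elements_per_page * cache_miss_fraction

    print(f"~{extra_latency_ms} ms of extra handshake latency per page view")
    print(f"~{extra_origin_hits:.0f} extra requests falling through to origin per page view")

Quibble with the individual numbers if you like, but small per-element costs multiplied by a hundred elements do not stay small.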

The next consideration is what proportion of activity is actually sensitive. The vast bulk is perfectly innocuous. There may occasionally be things you don't want made public, e.g. looking up those genital warts, but is the average spook or fraudster really looking for things like that? How much traffic is actually and genuinely sensitive in nature? Calling for this kind of thing as a universal default strikes me as a poorly considered knee-jerk reaction that can only harm the project in the long term.

The watts in a box that kept West London's lights on

Rebecca M

Really the lesson is to sit down and consider what eventualities might befall your business, then develop and TEST those plans.

This strikes me as one of those hand-waving magic wand assertions by someone who thinks it sounds good but has never even put a plan in place, let alone "tested" it.

Pray tell, how the hell do you sensibly test a DR plan? Even the best plans will entail massive disruption to the business once a non-trivial portion of them is in effect, and typically significant expense too - following a genuine event business continuity insurance will kick in (if obtained); not so for a test. Our plan calls for around 60% of staff to be laid off immediately in the event their premises are not available. In a disaster that is unavoidable but it is completely unacceptable for a test. Last but not least, which scenario do you test? Any DR plan worthy of the name will cover a multitude of scenarios but have flexibility built in to cater for the unexpected or the particular flavour of the situation - something such as "can't use that site" sounds fairly digital in nature but simple practicalities such as "Can you still access it to retrieve equipment?" make massive differences to how the plan unfolds.

In short a DR plan is not a simple list of things that are carried out regardless of the situation or the impact on the business, but a menu of options that have some degree of preparation for implementation; the decision to execute them depends on the scenario. The idea that it can be tested once and thereby proven is naive to the point of misunderstanding what is even in a genuine DR plan. Sure you can test specific aspects and assumptions: for example our plan depends on our "backup boxes" - DJ-style 18 or 20U flight cases with a disk array, a few servers, a switch etc that are supposed to be able to host a couple of hundred users without them even noticing. Do they? Yes, of course, we can and have tested that. We can also check we are able to get suitable office accommodation at the drop of a hat - we do so regularly and yes we can, no need to actually hire and fit out the premises for that.

However, the idea we are going to simulate e.g. a massive fire taking out a major office block, down to hiring an alternative, fitting out in short order, and laying off people who are not needed - that simply isn't based in reality, and would probably cause more harm to the business than doing nothing at all.

Why are enterprises being irresistibly drawn towards SSDs?

Rebecca M

I know, we all know that. One of the incorrect assumptions made by some people is that these checks always either correct the error or return an error flag to the RAID controller or OS, or that the cases where they do not (bad data is passed upwards as good) are so rare as to be ignorable.

Go back to my original post and you see that I acknowledge their existence but point out that it is the uncommon case. You have accepted that uncommon nature. Now let's consider your different error types:

1. Complete and sudden failure. The classic, 'doesn't spin up'.

An irrelevant distraction from your original point which specifically excluded drive failure. Let's move on immediately.

2. Media errors which enter the drive's onboard controller, are corrected by the controller logic, and good data is passed upwards to the RAID controller or OS. You don't even know this is happening, unless you notice reduced performance or monitor SMART statistics.

Well, yes you do since the block remap table is available to any software that asks for it - this isn't even a SMART feature. Again, it doesn't alter the analysis one jot since it happens in both single drive and RAID configurations.

3. Media errors which enter the drive's onboard controller, which it detects but cannot correct; an error flag is passed upwards to the RAID controller or OS. The classic 'Error reading drive C:, (r)etry, (i)gnore, (a)bort, or (f)ail?'.

Now this is the key point - this is the instance a single drive configuration can't recover from but a RAID1 setup can, by reading the other disk.

4. Controller firmware bugs, (or other rare causes), that pass BAD data upwards to the RAID controller or OS, as if it was GOOD data. Rebecca originally claimed that this never happens. Now she is claiming that it is a very low percentage of errors.

I never claimed anything of the sort, just that the effect is so small we can ignore it when considering your claim. Remember your claim: that a RAID1 configuration suffers from more silent corruption than a single drive setup. We have already established that the case 3 errors are the vast majority of errors of this kind and that RAID1 virtually eliminates them. Fiddling around with this tiny percentage of errors does nothing to make that original claim correct unless the rate of them goes up by several orders of magnitude. That is what you have to show and what you have failed to do.

It is your job to show where all those extra errors come from. In the absence of that I consider this closed, I'm not letting you continue to redefine and clarify everything you say and misrepresent every argument to the contrary.

Rebecca M

Re: @Rebecca M

With a periodic "scrub", where the system attempts to read all HDD sectors so errors are seen and re-written to hopefully fix the problem via sector reallocation, you get a good chance of not ever suffering from known RAID failure under normal conditions (data read, or more commonly when a HDD is replaced and a rebuild is needed).

Yes, I'm familiar with ZFS but that actually makes the point for me. ZFS's integrity checks really come into their own for data that is not accessed for years (decades?) at a time, ensuring that it remains readable by correcting any errors as they occur and hopefully while they are still correctable, rather than leaving them undetected for years by which time things have decayed to the point that you don't have enough left from which to reconstruct the original data.

However, there is an implicit assumption made, namely that the errors resulting from that additional processing are more than offset by the reduction in errors caused by the underlying storage. There is always an outside chance that the maintenance process itself introduces errors as a result of bugs or disturbances somewhere along the path - electronics, firmware, interconnects, system software etc. The scrub process is still regarded as a good thing to do because those errors are rare enough to be not worth considering when set against the much greater risks of corruption on the underlying media.
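For anyone unfamiliar with the idea, here is a minimal sketch of the scrub concept being described: walk every block of a mirrored pair, verify each copy's checksum, and rewrite a bad copy from the good one while the good one still exists. Purely illustrative - this is not how ZFS is actually built.

    import zlib

    # Each block is a (data, stored_checksum) tuple; mirrors are plain lists.
    def scrub(mirror_a, mirror_b):
        repaired = 0
        for i, ((data_a, sum_a), (data_b, sum_b)) in enumerate(zip(mirror_a, mirror_b)):
            ok_a = zlib.crc32(data_a) == sum_a
            ok_b = zlib.crc32(data_b) == sum_b
            if ok_a and not ok_b:
                mirror_b[i] = mirror_a[i]     # repair the decayed copy from the good one
                repaired += 1
            elif ok_b and not ok_a:
                mirror_a[i] = mirror_b[i]
                repaired += 1
            # if both copies fail their checksums the data is already beyond repair here
        return repaired

    block = b"some archived data"
    good = [(block, zlib.crc32(block))]
    bad  = [(b"some archived dat\x00", zlib.crc32(block))]    # simulated rot on one mirror
    print(scrub(good, bad), bad[0][0])                        # 1 b'some archived data'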

Rebecca M

Secondly, as for the idea that these problems are so rare that you can essentially ignore them, just a few weeks ago, Corsair released a firmware update for one of their 60 GB drives, (and I'm not picking on Corsair particularly, but it was an issue that affected me, so I have the details to hand):

So you've managed to prove that controller errors occur - great, but it doesn't get you anywhere. You've also accepted that notional 1% error rate from the electronics, which is what you need to bear in mind when defending the point you originally made:

You now have two disks, each with its own on-board electronics, cache RAM, and firmware bugs, storing the same data. The same data is read from one drive one time you access it, and another drive another time. So the chance of any block of data being silently affected by a drive fault is doubled.

With that 1% figure in mind you have to show not that controller errors exist or even that they increase with the number of drives. You have to show that doubling the number of drives increases the error rate of the associated electronics not by a factor of two but by a hundredfold, simply to get back to where you were, or increase 200x to get to that claimed doubling of risk.

That's a big claim to make. Arguing about whether the errors attributable to that circuitry are 1% or 2% is most of the statistical insignificance to which I was referring. The rest is of course a read error that causes a CRC to pass even after corruption. Yes, that's always possible too, but generally a one in 2^16 chance even for a completely garbaged sector (we're not talking one or two bit errors here). Even combined, the two effects are small enough to make nonsense of the entire argument.
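Working through the numbers, with the 1% electronics share and the one-in-2^16 CRC miss rate being the assumed figures from this thread:

    baseline_errors   = 100.0     # arbitrary unit: all errors a single drive passes to the host
    electronics_share = 0.01      # assumed fraction attributable to controller/cache/firmware

    # RAID1: detected media errors (the other 99%) are repaired from the mirror,
    # while the silent electronics errors are naively doubled by the second drive.
    raid1_errors = baseline_errors * electronics_share * 2
    print(raid1_errors)           # 2.0 vs 100.0 - roughly 50x fewer errors, not double

    # For RAID1 merely to match the single drive, the electronics error rate must
    # rise 100x; to double the overall risk, 200x.
    print(baseline_errors / (baseline_errors * electronics_share))   # 100.0

    # And the chance of a 16-bit CRC passing a thoroughly corrupted sector:
    print(1 / 2**16)              # ~0.0000153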

Rebecca M

OK, you knew that, but did you realise that the risk of silent data corruption is actually higher with a RAID-1 array than it is with a single disk? No? Well, it is.

You now have two disks, each with its own on-board electronics, cache RAM, and firmware bugs, storing the same data. The same data is read from one drive one time you access it, and another drive another time. So the chance of any block of data being silently affected by a drive fault is doubled. Your RAID controller won't even notice, and will pass the bad data up to the application level twice as often.

Bollocks. The overwhelming majority of hard drive errors (>99%) are between the platter and the head. Controller gremlins are so rare that you can essentially ignore them from a statistical perspective. Look at where the real errors happen - either the head didn't record the right thing in the first place (e.g. undervoltage coil), spontaneous corruption (i.e. a bit is toggled on the disk surface) or the head is unable to read valid data from the disk as the result of some issue after writing.

In all of these cases the read will fail the CRC check that has been built into every hard drive for decades, and the drive firmware will typically retry the operation a few times. If it manages to get the data subsequently the sector is mapped out and re-written in a spare block. This is standard single drive stuff, no need even for RAID at this point. The error only gets reported when the drive abandons the read. If that happens in a system with no redundancy you have a problem. With RAID it is not an issue - the sector is reconstructed from the mirror or parity information. The possibility that data is misread from the disk, passes the integrity tests even though it is invalid and is passed on to the application without comment is remote in the extreme - the error detection mechanisms built in as standard work. When people talk about silent errors on the hard drive they are generally talking about spontaneous transitions that prevent the data being read as these checks fail, not bad data coming out of the drive without comment.
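A schematic sketch of that single-drive read path, for anyone who hasn't met it - the helpers here are hypothetical stand-ins for drive firmware, nothing more:

    import random

    MAX_RETRIES = 5

    def raw_read(lba):
        # stand-in for the physical read: returns (data, crc_passed)
        return b"sector-%d" % lba, random.random() > 0.1

    def read_sector(lba, remap_table):
        for attempt in range(MAX_RETRIES):
            data, crc_ok = raw_read(lba)
            if crc_ok:
                if attempt > 0:
                    remap_table[lba] = "reallocated"   # recovered after retries: move to a spare block
                return data                            # host sees good data, nothing else
        raise IOError(f"unrecoverable read error at LBA {lba}")  # only now does RAID (if any) step in

    table = {}
    print(read_sector(7, table), table)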

As said before that eliminates well over 99% of errors but we'll call it 99% for ease of analysis. If we eliminate 99% of errors through RAID1 but double the remaining 1%, does that mean the system as a whole is more or less reliable? You can't simply pretend that 60 years of research in data storage hasn't happened. Basic ignorance of hard drive integrity checking here, Blu-rays as the backup gold standard last week, a robust backup strategy that had no redundancy the week before. Perhaps it is time to stop pontificating and start learning.

Amazon's new WorkMail for enterprise: Another Fire dud – or a Kindle?

Rebecca M

Re: Once again - US cloud service. Two words from Europe.

The PATRIOT Act empowers the security services to force Amazon* to provide the information they request (secretly - you'd never know about it). Wherever that information is held, and irrespective of any contract Amazon agreed to.

If Amazon don't comply - irrespective of why - it's serious jail time for the execs.

Did you even bother to read the comment you replied to? Amazon are in the clear if they fail to provide information they don't have access to, just as you are if they demand you hand over e.g. Russian nuclear launch codes. The current Microsoft case hinges on the extent to which access can be inherited from a wholly-owned subsidiary. If you can't see the difference I'd suggest steering clear of commenting on legal matters.

Boffin finds formula for four-year-five-nines disk arrays

Rebecca M

Re: The concept of spares needs to go

All those spindles provide additional performance. Use them. Arrays are virtual these days anyway, so as drives fail and rebuilds occur you lose spindles (performance declines) and you lose available capacity (not a big deal until it gets too close to what you're actually using)

How do you use that performance? The kind of medium scale array that is studied here will have no problem saturating a couple of 10GbE links even with relatively slow drives and dumb controllers. If you make the reasonable assumption that a mid range array is tied to a mid range network where is that performance going to go?
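Rough sums behind that, with assumed (and deliberately modest) figures:

    drives        = 24     # a modest mid-range shelf - assumed for illustration
    mb_per_s_each = 150    # unremarkable sequential throughput for a nearline SATA drive

    array_gbit = drives * mb_per_s_each * 8 / 1000
    print(f"Aggregate: ~{array_gbit:.0f} Gbit/s vs 20 Gbit/s for two 10GbE links")
    # ~29 Gbit/s: even slow spindles outrun the network, so extra active spindles buy little.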

I'd sooner have the spares in place and spun down when not in use. Less power, less cooling, less noise and the opportunity to force asymmetric wear on each drive, so that come the end of the array's life you don't get clumps of failures in quick succession according to what batch they were from.

Four downvotes and not one comment as to why they think I'm wrong? Did the fanboys take a wrong turn on the way to the article about Apple's record quarter?

I can't speak for everyone else but for me there comes a point where a comment is so far removed from real world experience it simply isn't worth commenting on in the first instance.

Has Switzerland cracked the net neutrality riddle?

Rebecca M

Re: This seems as good an approach as any

It was never about not prioritising traffic, it's about not prioritising/blocking organisations.

I think that's the point: a lot of net neutrality advocates have jumped on the wrong bandwagon. Anyone that suggests all traffic must be treated equally IS advocating the banning of that kind of traffic management whether they intend to or not - they are two sides of the same coin. That is the wrong thing to back but it is precisely what many advocates have been demanding, without any thought at all to how it impacts the integrity of the network.

This 125mph train is fitted with LASERS. Sadly no sharks, though

Rebecca M

Re: half an inch precision tech?

I thought much the same - I recall reading decades ago that if the height of the rails differed by half an inch that was regarded as a serious misalignment. Precision measurement has always been easy in controlled contexts (vernier micrometers reading to one ten-thousandth were available 100 years ago) but to be fair we are talking about high speed measurement in a relatively uncontrolled environment here.

How the FLAC do I tell MP3s from lossless audio?

Rebecca M

Re: "Everything between sample points is lost"

Mmmm. Messrs Nyquist and Shannon might have a bit to say about this.

I'm sure you feel such a big boy quoting those names. Pity that it doesn't automatically make you right or knowledgeable; indeed it simply shows that you missed their central tenet. Encode a 100kHz signal at 44.1kHz and then regenerate the wave from the sampled data. That 100kHz signal is not present in the output. If it hasn't been lost then where has it gone? That is the whole point of Shannon-Nyquist - the sampling frequency determines the maximum frequency that can be captured.
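A tiny numpy sketch of exactly that, using the frequencies from the example above (purely illustrative):

    import numpy as np

    fs = 44_100           # sampling rate, Hz
    f_in = 100_000        # input tone, Hz - far above the Nyquist limit of fs/2
    n = 4096

    t = np.arange(n) / fs
    samples = np.sin(2 * np.pi * f_in * t)

    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(n, d=1/fs)
    print(f"Strongest component in the sampled data: {freqs[spectrum.argmax()]:.0f} Hz")
    # Prints roughly 11800 Hz - an alias. The 100 kHz tone is simply gone.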

From there you get to the fact that events which occur faster than the sampling frequency can't be captured, which remains true however those samples are taken - I see another poster is bringing in whether the samples are instantaneous readouts or integrations, which is an utter irrelevance - the principle holds regardless of the sampling methodology.

The article states that events that happen faster than the sampling frequency can't be represented. That is true. So again, precisely what is wrong with that quoted text?

Scottish independence: Will it really TEAR the HEART from IT firms?

Rebecca M

Re: It will be business as usual.

Scotland will continue to use the pound.

It's not going to happen - this isn't some supreme bluff from Westminster, it would be political suicide to allow it. What Salmond conveniently ignores is that there's a UK general election between the referendum and independence: if Scotland votes yes all three main parties will have no choice but to make a manifesto commitment against currency union, given how firmly attitudes are against any such arrangement in the rest of the UK.

He speaks about seeking a democratic mandate as a moral argument for forcing a union but in reality the boot is squarely on the other foot - if the 90% of the UK that is not Scotland votes for parties against a union by a sizeable majority, it is clear where the overall balance of opinion is.

Munich considers dumping Linux for ... GULP ... Windows!

Rebecca M

Re: Lack of integrated email/contacts/calendar?

Hmm. My smartphone connects totally well and easily to a FOSS mailserver. Well, it does not connect nicely to an Exchange server, but that's not what is mentioned in the article.

You do know what integration means don't you?

So you can access your email. What about your calendar and contacts from the same app? Oh, right, that isn't going to happen.

So how is it integrated? If you're faced with a question that is tough to answer, replacing it with a different question that you can answer is generally not helpful.

Linux turns 23 and Linus Torvalds celebrates as only he can

Rebecca M

Re: 23 Years

Most people have a TV that almost certainly runs Linux - I don't know of any that don't. The only popular set-top box that doesn't run Linux is the Apple TV. Most routers run Linux.

Sources please. If these devices are all running Linux you'll be able to point to e.g. the source for them. That simple measure instantly excludes most routers, TVs etc. I know for a fact my router doesn't and of five TVs only one runs Linux.

Rebecca M

Re: 23 Years

Over 85% of smartphones sold in the last quarter are running (a kernel which was forked from) Linux.

And what proportion of households bought a smartphone in the last quarter? Remember it has to be "pretty hard" NOT to find a Linux device in a household, so even if ALL smartphones EVER made ran Linux it wouldn't by itself fit the bill.

I've not really looked into this myself but yes, the Unix = Linux assumption seems very prevalent among the Linux community, even to the extent that if a given tool is available on say Ubuntu, Debian and Fedora it can be considered portable and even a Unix standard. Yes, I've seen that exact claim made on these very fora. Seeing a Linux-style filesystem (or even a tool such as OpenSSH) could easily be enough for some of that contingent to make false claims.

Honey, I shrunk the workstation … into a Chromebook

Rebecca M

Re: Didn't we do this already?

While the technology stack is compelling (192 core GPUs in an under $300 package) I think the industry shut the door on this type of setup a while ago.

It's probably a lot more common than you might imagine. Look in call centres, large offices etc and you'll see that kind of setup fairly frequently, often a nice silent machine on the VESA mount. I've even seen it for programmers where it has additional attractions - programmers generally don't use a lot of processor power except when compiling, when they need as much as they can get. A single beefy machine serving a dozen or so users gives them that without costly, overpowered machines sat on every desk running at 1% utilisation.

Cleversafe CEO: We would tell you about the 8TB drive, but...

Rebecca M

Re: C'mon Moore's Law, Hit The Wall.

... higher definition images when 2.2MP is the most that the human eye can see. 48-bit color when 27-bit tests the limit of the best eyesight. 4K TV's where there is nothing wrong with 1080 displays.

24-bit colour is all that is needed in the final result but you do lose at least a bit per channel with a lot of post-processing. Adding a layer on top of the image? No longer true colour. Adjusting brightness and contrast? No longer true colour, and so on almost ad infinitum. That's without even considering that not everyone is simply taking and viewing pictures of their cat but might be doing something productive. I have a friend who does astrophotography and he commented a while back that his CCD is supposedly rated for 16 bits per channel but in practice it is more like 12 bits plus noise, even with all the cooling and advanced trickery needed for the very best images. He's NOTICED that and shown me - are you really going to tell him or me that he's imagining it?
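A quick sketch of the sort of precision loss being described, using a plain 8-bit channel and an arbitrary brightness factor (the 0.7 is just an illustrative number):

    import numpy as np

    # Scale brightness down and back up, then count how many of the 256 levels survive.
    levels = np.arange(256, dtype=np.uint8)
    darker = (levels * 0.7).astype(np.uint8)              # brightness reduced, rounded to 8 bits
    restored = np.clip(darker / 0.7, 0, 255).astype(np.uint8)

    print(f"Distinct levels remaining: {len(np.unique(restored))} of 256")
    # Around 180 of the original 256 levels survive the round trip; the rest have
    # collapsed together, which is exactly why editing headroom beyond 8 bits per channel helps.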

As for coming up with a supposed resolution of the human eye, it's a mug's game that shows complete ignorance of how the eye actually works. The overall resolution is fairly low in pixel count terms but nowhere near uniform - i.e. you have a comparatively high resolution in the very centre of view and very low resolution in the extreme periphery of view. Since you don't know where the viewer is looking ALL of the image needs to be good for that very high centre resolution.

Your encrypted files are 'exponentially easier' to crack, warn MIT boffins

Rebecca M

Re: Compression

"Predictability" (by which I would assume you mean deterministic behaviour) and randomness are two completely different qualities... To qualify as random any value in the target domain must be as likely an output as any other - if there is any weighting or bias in the output it is not random. A lot of real-world systems have been compromised by this very implied assumption - it's unpredictable, therefore it's random.

Right on. I've long believed that most modern security flaws are not down to lack of thought or effort but lack of study or lack of knowledge. People spend 10 minutes studying this stuff, imagine themselves to be some kind of expert and begin to spout fundamentally flawed premises as if they were absolute truths. History is littered with examples of how non-random systems have been broken - it was essentially this very issue of a slight bias (a letter never being encoded to itself) that allowed even Enigma to be broken.
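A toy illustration of that kind of bias - this is not real cryptanalysis and has nothing to do with the actual Enigma machinery, just a demonstration that "never maps a letter to itself" leaks information:

    import random
    import string

    def biased_substitution():
        # keep drawing until we get a derangement: no letter maps to itself
        while True:
            perm = random.sample(string.ascii_uppercase, 26)
            if all(p != c for p, c in zip(perm, string.ascii_uppercase)):
                return dict(zip(string.ascii_uppercase, perm))

    sub = biased_substitution()
    plaintext = "ATTACKATDAWN"
    ciphertext = "".join(sub[c] for c in plaintext)

    # Slide a guessed crib along the ciphertext; any offset where a letter would
    # have to encode to itself is impossible under the biased scheme and is discarded.
    crib = "ATTACK"
    possible = [i for i in range(len(ciphertext) - len(crib) + 1)
                if all(c != p for c, p in zip(ciphertext[i:], crib))]
    print(possible)    # the bias alone has ruled out some crib positions for free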

Any storm in a port

Rebecca M

Re: PS2

And they came with the bundled 50 / 50 chance of getting the keyboard and mouse the right way around when trying to plug them, arm twisted like a SCO lawyer's soul, into the back of the computer.

I never understood why they didn't simply make the ports identical in the first place - they used a six pin mini-DIN with two pins for power, two for signalling, and two unused. The keyboard and mouse used the same two pins for signalling even though they were not automatically compatible with each other. It would have been a trivial matter to put the signals for one or the other onto the unused pins and wire both signals to both ports, as many laptops actually ended up doing. Then you would simply have two interchangeable keyboard or mouse ports with no possibility of connecting them up the wrong way round.

Rebecca M

Re: I used to have the same problem

The upper side of the socket is the one furthest from the floor. Where sockets are mounted vertically, it's the side that would be nearest to the floor if you laid the device on its side with the wrong side facing up. You may find it easier to think of it as the left-hand side (right-hand side for left-handed people). But then you will have to decide whether you're in front of the computer or behind it.

This would be great if it were true, but it isn't. Indeed, it isn't even what the standard says. The standard gives orientation in terms of (to paraphrase) the side facing the user, but does not specify how that is determined. For many applications it is obvious, but e.g. rack equipment could conceivably be mounted above head height, in which case the orientation is reversed. I've seen plenty of cases that make this very assumption, which can be bloody inconvenient when indicator lights on directly mounted equipment shine downwards on a box mounted at knee height to start with.

That is always assuming any attention is paid to the standards in the first instance. I've seen plenty of examples where they are simply ignored with no justifiable defence under the spec. Cheap flash MP3 players are a favourite - for some reason the screen always seems to point downwards when they are plugged into a computer, in violation of the spec.

Who ate all the Pis?

Rebecca M

Re: The vi thing

Actually Edlin was ditched from DOS 6.x...

Super-thin iMacs WILL be here for Xmas, cram warehouses even NOW

Rebecca M

Re: P. Lee

I don't even see those. Precisely how much does a computer being an inch or two THINNER help in most space constrained environments? The limiting factor is going to be the larger dimensions, i.e. its width or height.

As for regularly travelling with a desktop, well it's only a couple of years since I completed my doctorate and was flying between Dublin (Uni) and Manchester (home) eight or ten times a year, invariably with desktop in tow. I had a flat screen telly, keyboard and mouse at both locations so it was just a mini-ITX base unit that I could stuff in a relatively small suitcase and stash clothes, paperwork, toiletries etc immediately around and on top of it. I wouldn't even consider chancing that with something like this - even if you treat your baggage with kid gloves it's only a matter of time before Ryanair smash something through the large, vulnerable, expensive screen.

Boffins biff over ‘twisted radio’

Rebecca M

Re: The fundamental things apply

Can I suggest reading up even a little before spouting this kind of mock-intellectual mumbo jumbo. Modulate the amplitude and you introduce harmonics. Modulate the frequency and you introduce harmonics. Both increase the signal's bandwidth without any need to resort to quantum theory. Indeed there is ultimately little that can't be explained using traditional wave theory, Shannon and the Fourier transform. Introducing things like quantum theory when they are complete irrelevances is like those idiots who try (and fail) to apply GR to phenomena that can be satisfactorily explained with Newtonian mechanics.
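To make the first point concrete, here is a minimal numpy sketch: amplitude-modulate a carrier with a single tone and the sidebands - i.e. the extra bandwidth - drop straight out of an ordinary Fourier transform. The frequencies are arbitrary illustrative choices:

    import numpy as np

    fs = 1_000_000                       # sample rate, Hz
    t = np.arange(0, 0.01, 1/fs)
    carrier_f, tone_f = 100_000, 5_000   # arbitrary illustrative frequencies

    # AM: carrier amplitude varied by the modulating tone
    am = (1 + 0.5*np.sin(2*np.pi*tone_f*t)) * np.sin(2*np.pi*carrier_f*t)

    spectrum = np.abs(np.fft.rfft(am))
    freqs = np.fft.rfftfreq(len(am), d=1/fs)
    peaks = freqs[np.argsort(spectrum)[-3:]]
    print(sorted(peaks))                 # ~95 kHz, 100 kHz and 105 kHz: carrier plus two sidebands

No quantum theory required anywhere in that.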

Ultimately it is this one research team that is making outlandish claims and they don't have the facts to back up their case. I was dubious on first hearing about this even without reading their reports in full, so frankly I'm not surprised now. When your claims fly in the face of received wisdom it doesn't necessarily make you wrong, but the onus is very much on you to prove your assertions rather than on others to disprove them.

Microsoft's 32GB Surface RT has 16GB of free storage

Rebecca M

Re: Spectacularly Refined Chap

IEEE Std 1541-2002 is relevant here given the whole standard is on the 10^9/2^30 issue. Paragraph 4.1 is cut and dried:

The SI prefixes shall not be used to denote multiplication by powers of two.

I see you've already been thumbed down. I'm not surprised. After all, of course some anonymous nobody commenting on a discussion forum knows better than a recognized international committee of experts.
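And for anyone wondering why the distinction matters in practice, the 10^9 vs 2^30 gap is easy to put a number on (illustrative arithmetic only, nothing to do with how Microsoft partitions the Surface):

    # A capacity quoted as "32 GB" in SI units, reported back in binary (GiB) units:
    advertised_bytes = 32 * 10**9
    print(advertised_bytes / 2**30)    # ~29.8 - the "missing" space before the OS takes its share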

Windows 8: An awful lot of change for a single release

Rebecca M

Re: Pre-95 & NT Windows weren't OSes!

Not that old chestnut again. Consider the tasks performed by an operating system: managing processes and memory, controlling devices, managing the filesystem... Windows 3.1 did all of these. DOS was little more than a boot loader. You may as well say that Linux isn't an OS - it's dependent on GRUB after all.

I just LOVE Server 2012, but count me out on Windows 8 for now

Rebecca M

Don't be too obsessed by home users

Home users do not fund the industry. Home users tend to stick with the commercial software installed on the machine when they bought it, at massive OEM discounts. Anything else is either a free download, pirated, or not bothered with. The industry seems to be obsessed with home users now that the business market is basically saturated, but you need at least ten installed home systems to provide the same income as one commercial system. For many or even most commercial environments the PC form factor - monitor, keyboard, mouse - is still the most general, productive, and cost effective. How many PCs are primarily used for data entry, word processing and email? Keyboards are still pretty central to most of the users who are actually paying the bills.

If companies forget that core market they will be abandoned and they will lose the income to fund the loss-leading trendier stuff. We moved to thin clients a long time ago for flexibility and ease of management, so it ultimately makes no difference if an app runs on Windows, Solaris, Linux or whatever. Microsoft can't provide a decent word processor? Fine, they get one on Linux or whatever.

Flashboys: HEELLLP, we're trapped in a process size shrink crunch

Rebecca M

That is precisely what he said and precisely what is relevant here. The commentator remarked that we are facing a shortage because no plans are on the table now - not just architectural plans, but simple aspirations of the "we will need a new plant in 2013" variety. He makes clear no such plans are in the pipeline. Even if every flash company in the world did an about turn now it would still take those new plants five years to come on line, which is where the problem cited actually lies. How long it takes to physically build a building and populate it with pre-ordered plant is an irrelevance.

BBC Watchdog crew sink teeth into dodgy PC repair shops

Rebecca M

Re: Agreement

Cable select works well until it doesn't. As soon as you encounter even a handful of instances where particular devices refuse to play ball you switch over almost instantly to never trusting it again.

Windows to Linux defections to outpace Unix shifts in 2011

Rebecca M

Hardware support?

Yes, another myth that is repeated so often it is accepted without question. For bread and butter stuff none of the free Unix systems are really lacking. Linux may have the edge on random consumer tat but nothing more than that. If you want hardware support look at NetBSD, with support for dozens of different architectures in a single kernel (Linux is a different kernel for each new system) as well as support for devices that simply don't exist in the world of x86.

It's still a tiny fraction of the size of the Linux kernel. So what was that point about it all being hardware support?

Like it or not, as Linux has matured it has grown disproportionately. Yes, it is now huge and still growing for comparatively little in terms of substantial benefit. This has only considered the kernel since that is what is most relevant to server apps, but the user interface stuff is if anything worse. I agree with the "Spectacularly Refined Chap": there is a bloat problem there but pointing anything like that out will be instantly shot down by the religious Linux advocates instead of measures being taken to actually address what are growing into serious issues.