Ten to twenty per cent of utterances collected by voice biometrics systems are not strong identifiers of the individual who spoke them, according to Dr. Clive Summerfield, the founder of Australian voice biometrics outfit Armorvox. Voice biometrics systems could therefore wrongly identify users under some circumstances. Most …
Expect people^Wusers to conform to the machine, why don't you
This sounds a wee bit desperate to me. Against better judgement (a 20% intrinsic failure rate, what fun) they're pushing because-we-claim-we-can technology. And those details you need for impersonation? Eh, often quite easy to find, no great chore for the experienced impostor (you just have to have the voice; might turn into an interesting line of contracting down the road) and, oh, you're expected to SAY IT OUT LOUD every time you talk to the machine. Just hope nobody ever overhears you then, eh. Or one of them newfangled devices that can RECORD and then PLAY BACK sound. Luckily those are really rare in practice.
And then there's the thing that biometrics generally suck for casual identification, as they're adversarial in nature beyond being plain finicky, even before you consider illness or a night out with the lads. I don't know why people keep believing that biometric sauce is somehow going to make them more secure; it'll sooner lock them out of their own identities instead. Being securely locked out at rock bottom isn't quite my cup of tea. I don't know why all those companies keep digging that mine for gold, either, as from a security perspective it's fool's gold and, with that, worse than useless. Lots of fools buying up the gold, apparently. Wish they'd have the good grace not to foist it upon anyone but themselves.
This sort of thing is laughably insecure given how easily it can be casually, accidentally compromised, and it doesn't stand a snowball's chance in hell against spearphishing. Voice printing might be useful for lots of things, but as biometric authentication? Not so much.
Combining the worst of...
* The false positive / false negative problems of biometrics
* The transmission encryption of telnet
* The secrecy of password reset questions
And the solution is to ask the user to change their voice unnaturally. What could possibly go wrong?
I mean, this was just about _accidental_ breakage of these systems. Apparently nobody has yet tried using voice synthesis to generate samples designed to systematically scan the feature space. You could record those onto a record and use a newfangled device like a gramophone to play them to the system.
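To make that attack idea concrete, here's a minimal sketch in Python: grid-search a coarse, made-up voice feature space (pitch, speaking rate, formant shift) against a stand-in matcher. Everything here is hypothetical; `toy_matcher` is a stub standing in for the biometric system's accept/reject decision, and a real attack would drive a speech synthesiser and play the generated audio down the line instead of calling a function.

```python
import itertools

def toy_matcher(pitch, rate, formant):
    # Stub for the biometric system's accept/reject decision.
    # Accepts only a narrow region of the space, much like a voiceprint
    # template accepts only voices "close enough" to the enrolled one.
    return (abs(pitch - 180) < 10
            and abs(rate - 1.0) < 0.1
            and abs(formant - 1.05) < 0.05)

def scan_feature_space():
    # Coarse grid over three synthesis parameters (all values invented).
    pitches = range(80, 300, 5)                    # fundamental pitch, Hz
    rates = [r / 10 for r in range(5, 21)]         # relative speaking rate
    formants = [f / 100 for f in range(90, 121)]   # formant shift factor
    # Try every combination until the matcher accepts one.
    for p, r, f in itertools.product(pitches, rates, formants):
        if toy_matcher(p, r, f):
            return (p, r, f)   # first accepted point in the space
    return None

hit = scan_feature_space()
```

The point of the sketch is just that a bounded feature space can be swept exhaustively: a few thousand synthesis attempts here, versus the effectively unbounded space a strong password lives in.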
The guy's never heard of Rich Little, has he?
Or maybe he didn't recognize Mr. Little's voice.