* Posts by Frank Rysanek

118 publicly visible posts • joined 2 Oct 2007

Page:

I'm not Boeing anywhere near that: Coder whizz heads off jumbo-sized maintenance snafu

Frank Rysanek

Re: I have flown on aircraft running my software (ish)

Windows 95 cannot cope with 4 GB of RAM. Just under 512 MB would be believable. Even that feels pretty far-fetched for the PC platforms (chipsets, motherboards, RAM technology) of the Windows 95 era. Apologies - this is not meant to ruin the otherwise excellent story. I just know for a fact that stock Win98 SE has a problem with 512 MB or more of RAM, and the machines where this became a problem only arrived several years after Windows 98 was dead.

How do you sing 'We're jamming and we hope you like jamming, too' in Russian? Kremlin's sat-nav spoofing revealed

Frank Rysanek

Re: rebroadcasting

Actually, if you read the details in the original report, it would almost seem that simple re-broadcasting was exactly what the Russians were using :-) Have an RX station, typically at an airport, relay the signal over some licensed frequency, and re-broadcast it with minimal delay a couple dozen km away to confuse GPS reception in the desired place.

In doing that, the Russians would directly disclose the location of the RX site, and as others have pointed out, detecting the position of the terrestrial spoofing transmitter is also pretty simple. It almost sounds too ridiculous for them to be this careless. It sounds more like kicking your opponent's ankles for the sheer cheeky joy of it, rather than being actually useful if the opponent were a serious threat (e.g. in a combat scenario). Funny tactical ideas may follow, as to whether to let them transmit their spoofed signal or use it to aim your weapons, etc.

BTW, among the last pictures in the original report, there are a few spectrograms featuring tall nail-shaped spikes (carriers) in the middle of the GPS "noisy hilltop". If those were genuine, the GPS receivers would be clearly jammed; they wouldn't be able to see the useful data (the noisy hilltop). They'd report "0 satellites good". I've seen this happen in practice. So the description in the original report saying that "the receivers could see the satellites but did not obtain any navigational data" sounds odd.

Frank Rysanek

Re: A signal that's below the noise floor

Actually, if you try to look with a basic spectrum analyzer and a passive antenna, the GPS RX *is* below your noise floor. The RX level from a passive GPS antenna reaches maybe -130 dBm at its best, if memory serves. Miraculously, actual GPS receivers tend to have a sensitivity of about -160 dBm. And I believe quite some noise is actually present in the reception, but above some SNR level the correlator is able to find a plausible match between the noise and the expected "bitstreams" - from a handful of satellites transmitting on the same carrier frequency, only with minor Doppler shifts... (low single-digit kHz, inside a channel bandwidth of about 2 or 10 MHz if memory serves).
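For what it's worth, the "miracle" is mostly spreading gain, and the arithmetic is simple enough to sketch. These are textbook C/A-code figures, not anything I've measured:

```python
import math

# Rough GPS C/A-code processing-gain arithmetic: the correlator
# de-spreads a 1.023 Mchip/s spreading code down to the 50 bit/s
# navigation data rate, which is where the "extra" sensitivity comes from.
chip_rate_hz = 1.023e6   # C/A code chipping rate
data_rate_hz = 50.0      # navigation message bit rate

processing_gain_db = 10 * math.log10(chip_rate_hz / data_rate_hz)
print(f"processing gain ~ {processing_gain_db:.1f} dB")

# So a signal arriving at roughly -160 dBm can still be recovered even
# though it sits well below the noise floor a plain spectrum analyzer sees.
rx_level_dbm = -160.0
effective_level_dbm = rx_level_dbm + processing_gain_db
print(f"effective post-correlation level ~ {effective_level_dbm:.0f} dBm")
```

That works out to roughly 43 dB of gain, which is why a jammer only needs to be a little stronger than the spread signal plus that margin to blind the receiver.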

if dev == woman then dont_be(asshole): Stack Overflow tries again to be more friendly to non-male non-pasty coders

Frank Rysanek

noob at StackExchange myself

I'm a noob at StackExchange myself - several of its sites, including StackOverflow. With a "reputation" of a couple dozen, I have no problem sustaining harsh criticism from someone with a reputation of a couple dozen thousand. My approach is to thank them for the criticism and their good work, perhaps clean up my own mess if cleanup is due, do my homework, and generally stick to the original topic. I don't recall any outright personal attacks at the SE sites.

Frank Rysanek
Joke

a man or a woman?

AManFromMars in disguise?

Oh bucket! Unpack the suitcases. TRAPPIST-1 planets too wet to support life

Frank Rysanek

Re: 0.7% -- average ocean depth

What's the average depth of the world's ocean? 1 km maybe? Compare that to the 6000-odd km radius of the globe. Well, actually it's a ball, so the ratio is (r1^3 - r2^3) / r1^3 == 1 - (r2/r1)^3, where r1 = outer radius at sea level and r2 = radius at average ocean depth. Anything under the ocean floor is mostly solid rock, down to the depth where it melts. Oh wait... that works out to about 0.05% water... :-)
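Just to show the arithmetic rather than argue it - the same back-of-the-envelope shell-volume calculation, assuming (generously, as above) a uniform 1 km deep ocean covering the whole sphere:

```python
# Shell-volume ratio: fraction = (r1^3 - r2^3) / r1^3 = 1 - (r2/r1)^3
r1 = 6371.0          # Earth's mean radius at sea level, km
avg_depth = 1.0      # the rough ~1 km average depth assumed above, km
r2 = r1 - avg_depth  # radius at the ocean floor

water_fraction = 1 - (r2 / r1) ** 3
print(f"ocean shell is ~{water_fraction * 100:.3f} % of Earth's volume")
```

About 0.047% - and that still overstates it, since the ocean doesn't cover the entire sphere.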

After we ran our article about the fate of .sk, the nation of Slovakia flew into a rage. And now, here's part two...

Frank Rysanek

Re: where do Czechs come into play in this .SK ccTLD affair

...if some journalist did a bit of research into the common history of Slovaks and Czechs, it would make a nice article in its own right, though possibly off topic at TheRegister.

Being a native Czech, I have to be super-cautious myself not to assume a patronizing position. When our federation broke up in 1993, just four years after the commie era formally ended in 1989, I was a confused teenager - I had no problem living in Czechoslovakia... Later on I found out that many people in both countries had actually felt that a divorce was perhaps due, so the politicians merely did their job and followed public opinion. And whatever we may think about the motives and methods of the leaders who arranged the split, the one thing both of our nations are proud of nowadays is how smoothly and how well organized that divorce went.

During the various periods of our common Czechoslovak existence, the formal capital city was always Prague, and Slovak autonomy was always somewhat limited by the rule of Prague, to a greater or lesser extent, depending on the government or regime at the helm at a given time... In that sense, the Slovaks have peacefully gained sovereignty, at long last, within a Europe where they were no longer menaced by any neighboring nation. And the now independent country flourishes without the rule of Prague - I'm keeping my fingers crossed for them. In some important developmental respects, the smaller Slovakia is ahead of the Czech Republic, having ventured into some needed reforms earlier. Slovaks go to study and work in Prague and vice versa. We understand each other's language. We are civilized neighbors.

The community of people around the internet infrastructure in SK and CZ is still relatively small and tight; people know each other, and today's seniors grew up together building the Internet since the early/mid nineties. As for the administration of our ccTLD domains, I believe we started out with a similar model in the early nineties - we also had a for-profit company, called EUnet.CZ, run the administration for several years - and it took, I believe, two major steps: 1) for CZ.NIC to get established and 2) for the for-profit businesses involved to hand over control to the "non-profit" CZ.NIC. I don't know the details "behind the scenes" of how and why and on what conditions it went in that direction in the Czech Republic, but I'm proud of the way things are now. (Check the "about" section of www.nic.cz for details of who is behind that "consortium". Not sure if "NGO" is the right word here.)

The CZ.NIC Labs team are the authors of FRED, apparently a well-designed OSS system for running a national registry, serving a flock of delegated commercial registrars... CZ.NIC sponsors FRED's development (and other OSS packages and public-interest activities) from the domain registration fees. There are lone voices saying the fees could go even lower if the "loosely related activities" were curbed... For my part, I don't see the point in lowering the fees even further.

(So much for debunking some ill-aimed references to FRED in the original article - which is probably what made some other Czechs respond to this "Slovak" topic.)

Apart from the reference to FRED, I believe many Czechs feel for the Slovak neighbors having... what seems like a difficulty with their ccTLD. Don't get me wrong, we Czechs have our own "tough topics" (unrelated to the Internet). The one thing we should not do is finger-pointing.

Should someone suspect offended patriotism because CZ.NIC did not acquire SK-NIC... well, that's quite a primitive explanation :-) and perhaps I'm not alone in thinking that the idea of paying EUR 20M from "our" coffers to take over a friendly neighbor state's ccTLD would necessarily bring back some unhappy connotations of the past - of the "rule of Prague" that the Slovaks seem pretty happy without.

I don't have any vested interest in CZ.NIC, in the ISP business or anything remotely relevant to this whole topic, except that 15 years ago I quit working in that area.

Intel Atom chips have been dying for at least 18 months – only now is truth coming to light

Frank Rysanek

BayTrail vs. Linux

Personally I don't care much about the C2000, as I don't come in contact with those. What I do come in contact with is the BayTrail family - so far my favourite Atom of those available on the market today. And I recently got bitten by this: https://bugzilla.kernel.org/show_bug.cgi?id=109051 Even BayTrail isn't flawless, at least under Linux - but fortunately the remedy is a software patch, and the problem is not permanent HW damage. Curiously, many BayTrail-based HW models work just fine. And some just don't, under Linux, for many months now, unless you use a particular SW-side workaround...
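For anyone hitting the same thing: the SW-side workaround being passed around in that bugzilla thread (at least at the time, if I recall correctly) was to keep the SoC out of the deeper C-states via a kernel boot parameter. A sketch, assuming a GRUB-based distro:

```
# /etc/default/grub -- keep intel_idle at C1 on an affected BayTrail box
# (workaround discussed in kernel bugzilla #109051; costs some idle power)
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_idle.max_cstate=1"

# then regenerate the grub config, e.g.:
#   grub-mkconfig -o /boot/grub/grub.cfg
```

The trade-off is higher idle power draw, but on the affected boxes that beats a random hard freeze.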

‘Artificial Intelligence’ was 2016's fake news

Frank Rysanek

ANN-based AI needs more macro architecture

Yes... fortunately it's still mostly just hype, and some say the machines beating humans at various contests are often ugly kludges. I do fear the actual arrival of a capable human-level AI. And it seems to me that the AI people (including the deep learning folks) are still somewhat clueless as to *how* consciousness should eventually emerge. No matter how much pattern-recognition power you throw at it, that alone won't result in a motivated, focused thinking machine.

I would expect to see some cunning "macro architecture", possibly involving some cognitive loopy/feedback circuit, and generally macro-topologies faintly similar to a von Neumann CPU: search the huge memory for concepts, select one, keep it in temporary storage, search for further concepts, select some, keep them in temporary storage, search for more, and run like that in cycles - with some general drives / motives / goals / aux inputs behind this basic loop (working as a filter in the selection process), with some generative/action outputs, etc. Something along those lines, just possibly an order or two more complicated, multi-layered and whatnot. I actually feel that many of the basic concepts and much of the functionality of this "sapient macro-architecture" could be tested / proven on a much smaller scale.
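Just to make the hand-waving a bit more concrete, here's a toy sketch of that retrieve-select-store loop. Everything in it - the fake associative memory, the "goal" acting as a crude selection filter, the bounded working set - is my own made-up illustration, not any real cognitive model:

```python
import random

# A trivial stand-in for an associative memory: concept -> related concepts.
MEMORY = {
    "hunger": ["food", "kitchen"],
    "food": ["fridge", "cooking"],
    "fridge": ["milk", "leftovers"],
    "kitchen": ["fridge", "stove"],
}

def think(goal, steps=4, capacity=3):
    """Run a few retrieve-select-store cycles; the goal seeds the loop
    and reachability from it acts as the (very crude) selection filter."""
    working_set = [goal]
    for _ in range(steps):
        # retrieve: gather associations of everything currently "in mind"
        candidates = [c for item in working_set
                      for c in MEMORY.get(item, [])
                      if c not in working_set]
        if not candidates:
            break
        # select: pick one candidate (a real system would score by drives/goals)
        working_set.append(random.choice(candidates))
        # temporary storage is bounded, so the oldest "thoughts" fall out
        working_set = working_set[-capacity:]
    return working_set

print(think("hunger"))
```

The interesting part, of course, is everything this sketch fakes: the scoring of candidates by drives and goals, and the action outputs closing the loop.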

US election pollsters weren't (very) wrong – statistically speaking

Frank Rysanek

Binary choices promote extreme personalities up the media circus

Most election systems in action today apparently work like this: out of many political parties, two of them happen to collect the topmost vote percentages... and these two keep fighting it out in every election, on and on, term after term. Maybe worse yet, this system tends to bring extreme personalities to the front, and turns the last rounds of the election into a cock-fighting exercise... which easily steers public attention away from issues that actually matter...

Where I live, there's a guy who seems to evangelize a voting system where consensus is valued more than cockerel fighting. Actually, a family of systems - the "algorithm" has tunable knobs, for a particular application / society / culture / whatever. The principle appears to be: more than one vote per voter, plus negative votes. This allows consensual candidates to collect votes from several parts of the political spectrum, and suppresses candidates who polarize it. Interesting ideas, if academic. So far they've been testing the system informally ("shadowing" some actual votes) or on smaller-scale projects. The idea is called "Democracy 2.1" - apparently very different from something called "Democracy 2.0" :-)

http://news.d21.me/en/inside-d21/our-mision/
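Out of curiosity, the core counting rule is trivial to sketch. Note the ballot sizes and names below are entirely made up, and D21's actual tunable rules (how many plus and minus votes each voter gets) differ per deployment:

```python
from collections import Counter

def d21_tally(ballots):
    """Toy 'multiple plus votes, optional minus votes' tally, in the
    spirit of Democracy 2.1 - not D21's actual published rules."""
    score = Counter()
    for plus, minus in ballots:
        for candidate in plus:
            score[candidate] += 1
        for candidate in minus:
            score[candidate] -= 1
    return score.most_common()

# Three voters, each with up to two plus votes and one minus vote.
ballots = [
    (["Alice", "Carol"], ["Bob"]),   # voter 1
    (["Bob", "Carol"],   []),        # voter 2
    (["Carol"],          ["Bob"]),   # voter 3
]
print(d21_tally(ballots))
# The consensual Carol collects votes from all sides;
# the polarizing Bob ends up net negative.
```

The "tunable knobs" are exactly the ballot shape: how many plus votes, whether minus votes exist at all, and whether they're conditional on using all plus votes.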

Google's neural network learns to translate languages it hasn't been trained on

Frank Rysanek

follow-up questions

1) Is Ray Kurzweil still at the helm? He's not mentioned in the article, nor in the referenced sources, but this is pretty much in the vein of what he was hired for, and closely related to his past work.

2) on the title photo, what does the finger-puppet with the red hat actually say? Is it equivalent to Hello or is it some prank? :-)

Frank Rysanek

Re: human translator jobs

Have an upvote for mentioning Linguee :-)

For the last couple of years, I've been trying to gradually improve my German by writing in German to our German suppliers. I tend to combine several dictionaries: one from my own (non-English) mother tongue, which shares some common cultural background/history with the Germans, then the Beolingus at TU Chemnitz, and I combine that with Linguee and general Google to prove that the meanings and phrases I come up with sound remotely plausible in German. The downside is that Sprechentlich es mir noch weh tut, weil ich die Wörterbücher nicht verwenden kann...

Frank Rysanek

Re: Not bad

Impressive. Considering how much trouble it gives me to translate "precisely" from English to my mother tongue or vice versa, I probably wouldn't get anywhere near this precision on a "back and forth" double translation (as a proof) - separated by a couple of days so that I don't remember the original in any great detail.

BTW... "da sie auf der gleichen Architektur arbeitet": if I were to nitpick, the "sie" (she) seems to depart from the original's gender. In German, "Modell" is neuter, but the translation engine chose to refer to it as a "she"... or maybe it picked up the gender from the broader context of the article? (Wrong subject on my part?) Doesn't seem so: the previous paragraph contains "the system" as a subject, which is also neuter in German...

Frank Rysanek

Re: Swedish/Danish/Norwegian

...I seem to recall that Norwegian and Danish are closer to each other than Swedish is to either of the two... for some historical reasons. But it's just a faint memory of some tour guide's explanation. (Myself coming from a Slavic background.)

Frank Rysanek

Re: *the researchers found evidence that....*

> Nor do you know where "the logic" is when it's finished

Actually... if you know what to look for, and you equip your ANN engine with the right instrumentation, and/or you structure the ANN deliberately in particular ways, in the end you can get a pretty good insight into what the ANN does, how it's doing it, how fine-grained the learned model is, etc. This is judging by what Ray Kurzweil has to say about his speech models, and the recent "deep" image-recognition projects/startups also tend to produce visualizations of the data learned in the upper layers...

Matt Zeiler has some nice videos on YouTube: https://www.youtube.com/watch?v=ghEmQSxT6tw

You may object that giving some a priori "structure" to the network is cheating. Well, without some a priori architecture, borrowed from natural brains, our ANNs would take geological timescales to learn anything useful - using GAs to start with and then some "conventional" learning...

This is actually where I see quite some room for further progress: adding more "architecture" to the ANNs. Not just more layers on top for more abstraction - maybe some loopy structures? Denser cross-connects to link multiple sensory and motor-control subsystems? Reciprocating structures vaguely resembling control by feedback loop, but applied to mental reasoning tasks, attention, symbol manipulation... driven by goals/motives, maybe stratified by priority. I would hazard a guess that a cunning ANN macro-architecture could bring some radical results (at a technology-demo level) even without throwing ever more raw crunching horsepower at the problem. Ray Kurzweil hints at how our brain is structured in "How to Create a Mind" - someone should start playing with a coarse structure along those lines and extrapolate from there... Kurzweil himself merely concludes that we need orders of magnitude more compute horsepower and "the mind will somehow emerge on its own". I would not be so sure :-)

Leaked paper suggests EM Drive tested by NASA actually works

Frank Rysanek
Thumb Up

Re: It's a scam.

Instant peer review, at TheRegister fora :-) I'm always amazed what kind of beasts lurk here.

Robot babies fail in role as teenage sex deterrents

Frank Rysanek
Joke

Re: association with procreation

Good point... it set my mind racing, the thread of thought being "how to train the two distinct situations in a realistic fashion, properly touchy-feely, in close succession, to build up a cognitive association of a causal relationship" :-)

On a slightly more serious note, it would probably be difficult to encourage an authentic emotional experience for the first situation too, even if real live subjects were asked to "simulate" the activity... down to the choice of subjects?

Frank Rysanek

Re: Useless for the intended purpose

Thanks for that comment :-)

(joke alert) I would add a simulation of acute otitis media, where the baby starts weeping and yelling for no apparent reason, and possibly the playback of a recording of a myringotomy/paracentesis being performed on a live kid (along with an explanation that episodes of otitis media often recur throughout the years of childhood). And yes, I know that this therapy is far from commonplace in the developed world. A popular mothering discipline is "what the hell, what is it this time" - why the baby yells and refuses food. There can be several other causes beyond those mentioned above, and only a few of them are accompanied by a fever or some other outside hint. I.e. a proper simulation should include "panic-level yelling for no apparent reason for at least an hour", combined with a deliberately impossible task of measuring the baby's temperature...

How do you simulate a week of "coughing all night", coupled with the ever-present possibility of it developing into serious bronchitis or pneumonia, which would require a stay in hospital? Heh, I wouldn't dare suggest a simulated lactational psychosis. And that's for an infant/toddler/child that's still pretty much normal and healthy, knowing that with our modern-day medical care the risk of actual death is near zero (until the relieved pressure of natural selection has its inevitable consequences on our average health, a couple dozen generations down the road).

There are babies who start refusing breastfeeding after a few days or weeks, despite the mother being desperate to breastfeed, for the sake of responsibility and proper natural nourishment. That, combined with lactational psychosis... "splendid". Difficult to simulate properly in doll play.

The choice of code brown / code yellow / burp / fart is a piece of cake to master. It could actually be pretty entertaining to a girl - possibly attractive to a girl in her late teens, with the right natural hormonal setup to think about motherhood. Reality is much more difficult than playing with dolls. In reality you have the acute knowledge that "this is for real", you feel alone in it, you feel responsibility, whipped up by some hormonal developments in the fresh mother... Reality is much more of a shock than playing with dolls you know are just dolls.

Also, depending on your family background, as a young parent you may have a problem feeding your family - how do you simulate that? What if the teenager you're trying to "educate" is used to that sort of environment?

Then again... do we want to scare our adolescent population with the actual brutality of life? Some are scared enough without our deliberate effort; others would just shrug it off anyway, the way they always do. Responsibility is down to individual personality traits... For some, even the actual reality shock of having their own baby is not shocking enough to prevent them from smoking and drinking during pregnancy or breastfeeding...

Microsoft has open-sourced PowerShell for Linux, Macs. Repeat, Microsoft has open-sourced PowerShell

Frank Rysanek

PowerShell... ah well...

On my current job, I've encountered PowerShell - for a couple of things in W2k12 and SQL 2k12 that should've been easily configurable, but for some reason were not... I downloaded some other people's scripts and wrote one or two (simple ones) of my own.

Seems to me that PowerShell's syntax is a little loose / not strict in any one particular style / confusing to me.

PowerShell is more confusing to me than Perl syntax - and yes, I do mind Perl's multiple ways of doing the same thing, and the blurry boundary between Perl 5 "canonical best practices" and the mess supported for backward compatibility with Perl 4 and older...

I'm actually using Perl on Windows for slightly more complicated scripting, things that exceed the capabilities of cmd.

Sarcasm off this time - I only dare being sarcastic as an AC (not when logged in).

Microsoft's Windows Phone folly costs it another billion dollars

Frank Rysanek

Re: spontaneous upgrade to win 10

Happened to two sane people in our small biz last week. Both were royally pissed off. No data loss though - so far so good. It does seem the GWX thing didn't even ask for permission this time. It just went ahead.

Building a fanless PC is now realistic. But it still ain't cheap

Frank Rysanek

Re: Cheating ...

In a chimney I would be suspicious of conductive carbon dust and maybe rainfall, but if those two factors were taken care of, your friend's got my thumbs up, all four of them :-)

As for separating yourself from the PC using a long cabling trunk: yes this is perfectly possible.

USB: 5 m max per cable segment (per the USB 2.0 spec), can be extended with hubs

Gb Ethernet: 100 m over CAT6 material

HDMI/DVI/DP: this depends very much on your screen resolution. I recall someone reporting success transporting DVI (electrically much like HDMI) at full-HD resolution at 60 Hz over a distance of 10 m using just an extension cable. Must've been some pretty good cabling material. Today's highest resolutions (4K) may limit your video cable to 2-3 meters from the PC to the monitor.
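The resolution dependence is easy to see from the raw payload numbers - a crude calculation ignoring blanking intervals and link encoding overhead, so the real line rates on the wire are noticeably higher:

```python
# Rough raw-bandwidth comparison explaining why 4K tolerates much
# shorter cable runs than full HD: active pixels * refresh * 24 bpp.
def raw_gbps(width, height, refresh_hz, bits_per_pixel=24):
    return width * height * refresh_hz * bits_per_pixel / 1e9

fhd = raw_gbps(1920, 1080, 60)   # full HD @ 60 Hz
uhd = raw_gbps(3840, 2160, 60)   # 4K UHD @ 60 Hz

print(f"1080p60 ~ {fhd:.1f} Gbit/s, 4K60 ~ {uhd:.1f} Gbit/s ({uhd/fhd:.0f}x)")
```

Four times the bit rate over the same copper means four times less tolerance for cable losses, hence the 2-3 m limit.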

BTW, attics tend to suffer from heat during the summer (unless thermally insulated).

Frank Rysanek

fanless with high performance is moot

My first silenced PC was my home 486 @ 120 MHz, back in 1995 or so. This was at the time when CPU fans started to appear - I removed the flimsy active CPU heatsink and used a larger passive heatsink body. In addition to that, I undervolted the 80mm PSU fan from 12 V to 5 V. At that point the AT PSU already had some 4 years of service behind it, catering for a 386DX - and it survived maybe 6 more years, until the 486 PC finally went to the scrapyard due to obsolescence.

Nowadays I work as a troubleshooter in a shop selling industrial PCs, both classic 19" machines and also some fanless models. We don't make fanless PCs, we import them from TW. There are maybe 5 famous brands of such fanless IPCs in TW (famous in Europe and the US). It's not something you cobble together in a miditower ATX case by unplugging all the fans. We sell tightly integrated x86 machines with a die-cast or extruded-aluminum outer shell, with the outside surface consisting of fins everywhere. Yes, surface is the keyword.

But what's even more complicated is proper thermal coupling of the power-hungry components on the inside to the outer shell. If the PC maker is not pedantic enough, it cheats by not coupling everything inside properly to the outer shell. Whatever lives inside and consumes electricity runs hotter than the outside shell... As there's no forced airflow, and telepathic heat transfer doesn't work, any "uncoupled" heat sources on the inside have to rely on natural convection... not a very good prospect. Especially the first-generation fanless PCs, using Banias / Dothan / Core 2 "notebook" CPU platforms with about 35 W total TDP, and with botched thermal coupling, were plagued by overheating problems. A modern Haswell or BayTrail SoC in a generous finned enclosure - that's a very different story :-)

I've seen people build fanless PCs by taking a desktop ATX case and stuffing all-fanless components inside: a fanless CPU heatsink, a fanless GPU card, a fanless PSU... such a PC is rather short-lived :-) No matter how big your heatsinks are, if you keep them enclosed in a miditower case without any fan, they don't have much effect, as it's the PC case's outside surface that matters for heat dissipation - and that surface isn't very big. Without airflow, everything inside roasts in its own heat. Even if you just use a tall CPU heatsink with heatpipes (it can even be active), or water cooling with an outside radiator, pay attention to the fact that the CPU and memory VRMs (point-of-load buck converters) on ATX motherboards often *rely* on the toroidal vortex around the CPU socket created by a conventional CPU heatsink fan.

For small internal uses, I tend to build low-power servers in retired 19"/2U IPC server cases. I use a new, silent-ish, high-efficiency industrial PSU (with a rear 80mm fan), adapt the old 19" case to accept a MicroATX motherboard, remove all the noisy 80mm chassis fans and insert a single low-RPM radial 120mm blower over the CPU heatsink, with its exhaust channeled through the rear wall. The silence is marvellous...

Generally, in home-cobbled ATX PCs and small servers, I like to use large "passive" heatsinks, but only combined with a slow fan that creates a very basic draft of fresh air through the case.

Google wants new class of taller 'cloud disk' with more platters and I/O

Frank Rysanek

Re: Wishing wells are nice things

Exactly the same feeling here. "I must be missing something, or the paper was written by someone from PR."

What would you achieve by having two different platter sizes per drive, on a common spindle? (The smaller platters would be small AND slow in terms of IOPS AND slow in terms of MBps - but let's not start with that.) On Google's scale, if you know beforehand what data is hot and what is not, you can sort it in software and store it on different storage volumes built with different optimizations (slow big drives vs. fast smaller drives vs. flash or whatever). How is the *drive* supposed to know which LBA sector is going to be hot and which one not?

Also, map LBA sectors to physical sectors in some hashed way, other than "as linear as possible"? Really, at the spinning drive's discretion? Even if the drive did have two sorts of platters, fast and slow, considering that it has no a priori knowledge of what data is going to be hot, perhaps the idea is that it could move the data to the fast vs. slow region "afterwards", based on how hot vs. cold it actually turned out to be... thus wasting quite some of its precious IOPS on housekeeping! Also, it had better infer the FS layout happening at several layers *above* LBA, so as to relocate whole files rather than individual sectors, as otherwise it would ruin any chance of enjoying a sequential read ever again ;-) And oh, by the way, we'll take your RAM cache; you can't really use it efficiently anyway, it's more use to our FS-level caching, thank you.

Seems to me that, complexity-wise, having several categories of simple disk drives (of different sizes and IOPS rates) is obviously more manageable than having mechanically complex drives with hallucinogenic firmware managing non-linear sector mapping and a fiendish tendency to try to guess your FS-level allocations and occasionally get them wrong...

There's an old Soviet Russian children's book series by Nikolai Nosov, sharing a lead character called Neznaika. Never mind the plots with their often leftist outcomes/morals... I recall that in one of the books there was a lazy lunatic of a cook, wasting a lot of time trying to invent "dry water" in his head... The point of dry water was that you could wash dishes without getting wet :-) That's a hell of a metaphor for many concepts and situations...
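To illustrate the software-side alternative (sorting hot from cold data above the drive, as argued earlier in this comment): a toy sketch where the host tracks per-object access counts and places whole objects on fast vs. slow volumes. All the names and the threshold below are made up for illustration:

```python
from collections import Counter

class Tierer:
    """Toy host-side tiering: count accesses per object, place whole
    objects (never individual LBA sectors) on a fast or slow volume."""

    def __init__(self, hot_threshold=3):
        self.hits = Counter()
        self.hot_threshold = hot_threshold

    def record_access(self, obj):
        self.hits[obj] += 1

    def placement(self, obj):
        # Whole-object placement keeps sequential reads sequential --
        # exactly what per-sector remapping inside the drive would ruin.
        return "fast-volume" if self.hits[obj] >= self.hot_threshold else "slow-volume"

t = Tierer()
for _ in range(5):
    t.record_access("index.db")       # frequently read object
t.record_access("archive-2014.tar")   # touched once, stays cold

print(t.placement("index.db"))        # hot object lands on the fast volume
print(t.placement("archive-2014.tar"))  # cold object stays on the slow volume
```

The point being: the host already knows the object boundaries and the access pattern, so it needs none of the guessing a "smart" drive would have to do.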

EU could force countries to allocate 700 MHz band to mobile by mid-2020

Frank Rysanek

Re: retune

Exactly.

Where I live, we have something like 6 DVB-T MUXes (carriers) in the air, from about 4 directions, with varying signal strength (RX). Years ago, when I moved in, I started to look after the "shared terrestrial aerial" serving the small apartment block I live in... I made it through the first retune (the analog switch-off) and a roof reconstruction with a relatively modest investment, and by sheer luck the two bandpass antenna preamps are just about right for the mix of frequencies and levels... Even if we did buy a proper blade chassis with channel cards, the channel-amp cards tend to have a limited tuning range = another retune into a different band would cost us another hefty sum...

When the time comes again, I'll see what I can do :-)

Interestingly, as the analog TV channels were squatting in the lower TV bands, the analog phase-out has resulted in most of the DVB-T occupying the upper channels in band V - which I didn't like very much, as the upper channels are a pig to catch if you don't have direct line of sight to the transmitter, and they have a shorter reach on the legacy coax cabling I inherited in the building. Another retune back to the lower channels (in our case, it might correlate with the shift to DVB-T2) might improve signal levels at the wall sockets in our house :-) We probably won't be moving back to band III, but even band IV would be nicer to work with than channel 58 in band V.

While we weren't looking, the WAN changed

Frank Rysanek

Same old same old

It's still the last mile that matters, first and foremost. While reading the article, I was itching with curiosity all along, wondering if telepathic broadband transfer had finally been invented, or if the article would culminate with postal pigeons over MPLS or something... none of that, actually.

In terms of weird stuff over MPLS, the weirdest I've read about was something like SDH over MPLS.

Back to last mile.

Even if you have a local cable co. / ISP wiring the whole neighborhood (around your offices) with dense FTTB at very friendly consumer prices, and you let him enter your building with a plastic pipe (the extra trench is like 15 m), once you ask for an actual proposal - for some modest symmetric bandwidth over fiber, with a /29 block of public static IPv4 - if there's no competition, he may well quote an outrageous sum of money for that fairly basic service. I was in that situation as a small-business admin and kept using two microwave links (for redundancy) for several years, until the local optical ISP finally gave in and proposed something sensible (and the sales guy got fired shortly afterwards).

Here in CZ, in a mid-size town (100k people), the real news in recent years has been that local optical startups have started trenching across our post-commie residential areas (high-rise condos with lots of grass in between). Actually, in our very town it's not that optimistic - it's a nationwide cable co. vs. a local optical competitor. The nationwide behemoth doesn't bother to offer better pricing, hence the local cable/optical company (in the business for some 25 years now) is winning most of the new consumer customers for its symmetric optical (FTTB) Ethernet... but they're actually not a new startup, they're more like a local incumbent. Next to the incumbent telco, selling DSL over 20-year-old copper, which was back then (in the mid nineties) totally overhauled using government subsidies...

The midsize and bigger cities tend to be barricaded against "trenching optical startups" by local incumbents with political connections. I keep hearing about even smaller towns (~10-20k residents) where wireless ISPs turned optical startups are busy trenching consumer broadband and selling it cheap, with the support of an enlightened local authority. Excellent places to live; less excellent for finding a job, apparently...

Hell I'm told that many locations in Prague are absolutely hopeless in terms of modern broadband, consumer or business-class. And, it's always about the last mile. No one bothers to lay new optical cables in the densely cobbled urban areas. I used to work for two ISP's in Prague for several years around Y2k, I remember very well the numerous sales opportunities where there simply was no last mile transmission line to use... Where I work now, we have an office in Prague as well, at an outskirt of the city (a residential area with highrises and lots of grass) and our office still uses a microwave link!

I work for an admittedly small business. We don't care about MPLS. Most of our sales people are scattered throughout the country anyway, and the business software has to be usable for them from anywhere they stop for a while, so it wouldn't matter if some bigger "remote offices" had MPLS or some L2 VPN... It's OpenVPN for all of them and RDP on top of that, and the database client running against a local RDBMS on an RDP desktop is throttled mostly by ODBC latency, much more than by RDP screen refresh.

Once you get a good last mile, VPN can be quite a breeze. Perhaps we're lucky that we have a good local (national level) peering arrangement: the independent peering point (called NIX.CZ) now actually runs a distributed infrastructure with nodes in several cities... and I haven't heard about bilateral peering skirmishes among ISP's in the last 15 years or so. As for the firewalls... if you know the necessary basics, a good basic firewall can consist of a Linux PC with OpenVPN for the tunnels and Quagga(Zebra) to do some internal routing of your private subnets. Dual uplinks to two ISP's (with a double NAT) have their inherent limits for outbound internet traffic, but can be pretty nifty for a redundant VPN = if combined with redundant VPN tunnels and some dynamic routing on top (I prefer iBGP over OSPF, as BGP does *not* require a clear "link state" from the lower layers and keeps checking the connectivity on its own). You don't even need a PC for this, you can run OpenWRT on some SoHo router hardware, and theoretically Mikrotik HW/FW should also be capable of this.
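To make the iBGP-over-dual-tunnels idea a bit more concrete, here's a rough sketch of what the Quagga side could look like - the addresses, AS number and the advertised subnet are made-up examples, not a tested production config:

```
! bgpd.conf fragment on the head-office gateway:
! one iBGP session per OpenVPN tunnel. BGP's own keepalives notice
! a dead tunnel, and routes fail over to the surviving session.
router bgp 64512
 bgp router-id 10.8.0.1
 neighbor 10.8.0.2 remote-as 64512   ! branch office, tunnel via ISP A
 neighbor 10.9.0.2 remote-as 64512   ! branch office, tunnel via ISP B
 network 192.168.10.0/24             ! advertise the local LAN
```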

Yeah right - I'm at the lowest end in terms of headcounts and bandwidth. It only starts to get interesting when you struggle with bandwidth and complexity (imagine multiple sites linked together in a massive VPN mesh).

I am told that there are off-the-shelf firewall boxes (no, not Cisco) that are miles ahead of my homebrew cobbled gateways. For the lazy folks it must be an excellent solution.

"Local Internet Breakout" - hell, I never knew it's got a dedicated name :-)

Outsourced VPN, outsourced security? God forbid, as long as I have a word... I used to work for the other side.

Uni of Manchester IT director resigns after chopping 68 people

Frank Rysanek

Re: Would love to hear some context...

Oh actually it's 60 000 students, 80 000 total. = even leaner IT.

Frank Rysanek

Re: Would love to hear some context...

Excellent, thanks for the insight :-) 200 IT staff to cater for 60 000 people actually sounds pretty lean to me... and indeed a good candidate for some uniform printing solution :-) among many other technical challenges...

Frank Rysanek

Would love to hear some context...

How big is the organisation being thus treated? How many in-house IT staff (before the layoffs), how many staff total, how many students? What does university IT mean nowadays, exactly? In the old days, it was a few computer "labs" (a couple dozen desktop PC's networked in a room), some file servers, some printing... The school I went to had a nice in-house client/server app keeping track of the students, lectures, allocation of seats in the courses to students... coded and supervised by a single guy. That was in the second half of nineties. I am told that much of the in-house DB software (then native MS-DOS apps) was later ported to web-based environment. I can imagine that a big enough university might appreciate something like an ERP business package... And then some departments might manage servers of their own, for special apps, HPC and whatnot. Technical schools are likely to have more of this arcane specialized stuff - but the chairs/dept.s also tend to have post-grad people / lecturers who take care of the high-end stuff as part of their highly specific jobs.

So... 68 people laid off. How many per cent is that of the total IT staff? What professions were laid off? Are they gonna outsource the grunts replacing broken keyboards and stolen mice, fixing broken Ethernet links, taking care of toner cartridges...? 68 people in university IT sounds like quite a lot... then again, my post-commie school catered for just about 1000 students per year, which might be a relatively low count, compared to universities of the western world...

Met Police: Yes, outsourcing IT to Steria has 'risks'

Frank Rysanek

headcounts

So... along with the outsourcing deal, they're downsizing in-house IT from 800 to 100 people. How big is the metropolitan police? How many people actually doing police work? How many PC's = user seats, how many user accounts in the system? How large is the "server back end" to take care of? With proper police IT, there tends to be a centralized system for "filing the records by individual cases". There will also be some in-house bookkeeping, purchasing, HR etc. - not unlike an ERP system, if the organization is big enough. Plus some printing, maybe e-mail... all of that with a police-style focus on security. And special systems and arrangements for access to various 3rd-party databases / registries that might be of interest to police investigations. 800 people to manage all of that... The number alone sounds like a lot, but was that in fact enough? Or was it too many?

Chip company FTDI accused of bricking counterfeits again

Frank Rysanek

Re: fed up with FTDI

[replying to my own comments is a sign of mental disorder ;-) ]

I recall one other encounter with FTDI - this one had a driver release engineering angle. I once bought a programming dongle for Lattice CPLD's. You guessed it: the dongle contains an (original) FTDI chip. Next, I needed XP drivers for the dongle. It was in February/March 2015 = admittedly pretty late for XP, but that's what I still run on some computers in the lab (and am happy that way). To this day, FTDI still list their driver 2.10 as compatible with XP - while in reality, 2.08 already failed to load in XP SP3. I managed to find 2.06 somewhere, and that did work in XP.

As for the opinion that "FTDI is no longer needed": in the "industry" and in the tinkerer community, RS232/422/485 are actually far from extinct. Note how simple the interface is - I wish that USB was so simple to use and debug, so universally compatible. And, RS232 doesn't force you to write your own USB driver (or to work around writing your own driver by using some generic framework, libusb or some such). Especially writing drivers for Windows is a tad complicated by the required MS signature (and that's apart from the general driver-writing complexity). Even the user-space software authoring tools are more restrictive nowadays than they were in the days of my optimistic youth... Corporations are helping each other to create a wall between how far you can get in DIY vs. what's possible with technology only available to corporations. Security measures against malware proliferation? Malware authors always find a way...

Frank Rysanek

fed up with FTDI

I've seen USB gadget chips, own designs from TW/CN companies, that were so crap that they just didn't work. Some of them were LPT and serial converters. I mean to say that some of the counterfeit FTDI chips (the ones carrying a fake logo etc) possibly don't need any deliberate bricking :-)

OTOH, I've been in contact with someone who purchased an RS422 converter board *straight from FTDI*. You know, one of those advertised straight on the FTDI's website.

http://www.ftdichip.com/Products/Cables/USBRS422.htm

And the board didn't work! FTDI's tech support admitted that they had some problems with a past batch...

Possibly the only famous alternative to FTDI is the Prolific PL2303. Not exactly a shining star. And, they also have a problem with counterfeiters.

YMMV.

It would be neat to have a "USB serial device class", with a class-based driver from Microsoft. There's the CDC ACM class, but it doesn't seem to be a perfect match...

Then again, we have the USB printer device class, and many products of that class "just don't work" anyway :-( I mean - just taking print jobs from the spooler...

Eight budget-friendly 1TB SSD data packers for real people

Frank Rysanek

Re: Ever had an SSD fail?

Sure :-) We sell SSD's mostly as boot drives in "industrial process control" PC's. The endurance of an SSD depends greatly on how you configure your OS and apps. An SSD in read-only mode can last for ages. I have some firewalls booting Linux from read-only CF cards, running for almost a decade. Same thing for simple DOS-based systems that don't ever need to write to the drive (or scarcely). Same thing for Windows Embedded with EWF locked all the time.
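In Linux terms, that read-only setup boils down to a few lines of /etc/fstab - the device name and tmpfs sizes below are hypothetical examples:

```
# root mounted read-only; volatile state redirected to RAM, so the
# CF card / SSD sees virtually no writes after boot
/dev/sda1   /          ext4    ro,noatime   0  1
tmpfs       /tmp       tmpfs   size=64m     0  0
tmpfs       /var/log   tmpfs   size=32m     0  0
```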

But I also know cases where a SCADA app (configured to log data or keep a persistent image on disk) can thrash a decent 2.5" SSD in three months. Spinning rust still has its merits. Yes it can fail too - but it's not *prone* to fail in some deployments where SSD's *are* prone to fail pretty soon. And, in terms of spinning rust, you'd better shop for the *lowest* capacity currently available on the market = the simplest and proven construction, the lowest data density. The terabyte race is not a nice prospect in that context.

Ironically, most people still think that strictly nothing beats the endurance of an SSD in the role of a Windows boot drive... any SSD, in unmodified stock Windows, running Windows Update, an antivirus, a dozen self-updating apps etc.

You know - you install Windows on your shiny new expensive SSD, ohh the joy of how *fast* it is, then you go entertain yourself with something else... and a couple months down the road, when the SSD slows down noticeably, or fails outright, you tend to blame that particular SSD, or the early SSD model, or the brand... "Gosh, the SSD's were *crap* a year ago... must've been a bad batch or something... let me have a new one, that will surely last longer!" ;-)

It hardly comes across your mind that maybe the SSD thing is *principally* wrong for the position.

128GB DDR4 DIMMs have landed so double your RAM cram plan

Frank Rysanek

Re: Errr?

AFAICT all Intel single-socket desktop CPU's so far support 32 GB max. total, in 4 DIMM slots. Thus, you don't really need anything over 8 GB per DIMM at the moment, in the consumer segment = non-ECC/unbuffered :-( (Does AMD support more RAM?)

Printer drivers ate our homework, says NSW Dept of Education

Frank Rysanek

PostScript isn't all bad

Generic Postscript appears to be the vehicle that allows for large-scale networked printing. There are several vendors of centrally managed / networked / distributed printing systems for large corporations, that depend on Postscript as the "common denominator" = common printing language.

Using Ghostscript and its fellow RedMon (virtual printer device / redirector), you can turn a Windows-only GDI printer into a PostScript printing backend. Provided that there are still drivers for your old and cheap printer in your current Windows version - which may turn out to be your ultimate hard cheese :-) But if you do have a native Windows driver for the printer, the rest (Ghostscript+Redmon) is subject to some IT automation / scripting, if this is to be deployed on a more massive distributed scale. Yes there would be pitfalls, if the strategy so far has been "bring your own printer" :-)
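As a sketch of the plumbing: RedMon captures the PostScript job arriving at the shared queue and pipes it into Ghostscript's stdin, and Ghostscript's mswinpr2 output device then renders it through the printer's native Windows (GDI) driver. The printer name below is a hypothetical example:

```
rem Command line RedMon would run for each job (gswin64c is the
rem Windows console build of Ghostscript; "-" means read from stdin)
gswin64c -dBATCH -dNOPAUSE -sDEVICE=mswinpr2 -sOutputFile="%printer%CheapGDILaser" -
```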

Windows 8.1 / 2012 R2 still contain workable generic PS and PCL5e drivers. Ghostscript can produce PCL5 (and there's a flavour that can take PCL5e as *input* I guess). Typically, the "second cheapest" laser printer from a given vendor will take PCL5e (or at least PCL3). Not sure if inkjets are considered in the edu.au SAP project... they're a plague in their own right anyway.

I suspect that printers are just an excuse though.

Microsoft has developed its own Linux. Repeat. Microsoft has developed its own Linux

Frank Rysanek

Network switches have been running Linux-based firmware for ages

Ethernet switches have been using Linux-based firmware on the inside for ages - especially the lesser known brands / switch vendors. Cisco traditionally had their own in-house IOS, but I seem to recall that some more modern IOS strains on some HW platforms are actually Linux-based too... Other popular operating systems to use for firmware are the various BSD flavours and various RTOSes (QNX, VxWorks and the like).

The CPU cores used in switching hardware (= what actually runs the firmware code) are typically PowerPC, ARM, or MIPS - Linux supports all of them. If the Ethernet switch chipset makers provide some reference firmware platform, it will most likely be Linux. So if someone like Microsoft possibly decides to develop their own firmware for some 3rd-party OEM switch hardware, Linux is a very logical choice. That's where they're likely to get the best technical support, needed to bootstrap Linux on the management CPU core, and in terms of drivers and API's for the specific hardware (L2 switch matrices, L3+ accelerators, DMA engines, individual MAC/PHY blocks, various misc I/O such as I2C/SPI/GPIO).

But I still consider it a little unlikely that they're going all the way from bare metal (Linux from scratch). I would find it more natural if they took whatever reference firmware (Linux) the chipset maker has provided, and ported Microsoft's own user-space tools / API's to it, while possibly bugfixing and modifying the reference firmware a bit in the process.

Does Linux need a new file system? Ex-Google engineer thinks so

Frank Rysanek

any improvements for "a myriad threads reading slowly in parallel" ?

There's one use case which traditionally used to be a problem for cheap spinning rust in servers: multiple (many dozen or hundred maybe) slow parallel threads, each reading a sequential file from disk drives. For optimum throughput, the FS and the OS block layer should minimize the seek volume required. With enough memory for read-ahead, it should theoretically be possible to squeeze almost the sequential rate out of a classic disk drive. A couple years ago, Linux wasn't there. Too many out-of-sequence seeks for metadata, read-ahead not aligned on stripe boundaries in a RAID (there were other unaligned things if memory serves), no I/O scheduler really tuned for this use... There was allegedly some per-flow read-ahead magic in the works, but I have no news. Not sure if a new FS even has a chance of improving this. Not that anyone has claimed any such thing, regarding bcachefs or otherwise.
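For what it's worth, applications can already ask for a bigger per-file read-ahead window themselves via posix_fadvise - a minimal sketch, not a benchmark (and certainly not the per-flow magic mentioned above):

```python
import os
import tempfile

def open_for_streaming(path):
    """Open a file for slow sequential reading and tell the kernel so,
    letting it schedule a larger read-ahead window per file - the hope
    being fewer, longer seeks when many such readers run in parallel."""
    fd = os.open(path, os.O_RDONLY)
    # On Linux this hint roughly doubles the default read-ahead window
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
    return fd

# demo on a throwaway file (a real workload would stream large media files)
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 8192)
    demo_path = f.name

fd = open_for_streaming(demo_path)
data = os.read(fd, 4096)  # a slow client reads modest chunks at a time
os.close(fd)
os.unlink(demo_path)
```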

Because the server room is certainly no place for pets

Frank Rysanek

Re: IRQ asmnts?

I only know the hardware side of this, never actually tried to use them in a host/hypervisor... so I cannot tell you how it's done.

The "virtualization" support in hardware comes in several degrees.

1) VT-x - this facilitates the virtualization of the CPU core. I understand that the host/hypervisor must provide the "virtual" devices (storage, network, KVM) via its own guest-side device drivers (decoupled from actual hardware). In other words, the hypervisor SW must mediate all I/O to the guests, the guest OS merely lives happily under the impression of having a dedicated CPU.

2) VT-d - essentially this allows you to assign/dedicate a physical PCI device (or even a PCI dev function) to a particular VM guest instance. The secret sauce seems to have several ingredients, and IRQ's are just one part (the easiest one, I would say). I've recently found some notes on this (by no means exhaustive) in the Intel 7-series PCH datasheet and in the Intel Haswell-U SoC datasheet (vol. 1). Interestingly, each doc explains it in a slightly different way. I recall reading about the possibility to invoke a selective reset of a single physical PCI device (actually a PCI dev function), about delivering interrupts to a particular VM, about making DMA (several flavours) virtualization-aware (compliant) - and I must've forgotten a few more.

Only some selected on-chip peripherals lend themselves to VT-d (they're listed in the chipset datasheet).

3) SR-IOV - allows you to "slice" a physical device (peripheral) into multiple "logical partitions", where each "partition" appears as a dedicated physical device to its own assigned VM instance. It's like VLAN's on PCI-e, where SR-IOV aware peripherals (NIC's, RAID controllers) know how to work with a "VLAN trunk". SR-IOV can not only cater for multiple VM/OS instances through a single PCI-e root complex, it can actually cater for multiple PCI root complexes as well - allowing multiple physical host machines to share a PCI-e NIC or RAID for instance (or a shelf of legacy PCI/PCI-e slots).

VT-x has been there for ages, in pretty much any modern CPU.

VT-d has been a somewhat exclusive feature, but becoming more omnipresent with newer generations of CPU's and chipsets.

SR-IOV needs VT-d in the host CPU and chipset, and most importantly, the peripheral must be capable of these "multiple personalities". Only a few select PCI-e peripherals are capable of SR-IOV. Some NIC's by Intel for instance. Likely also FC and IB HBA's. As for the multi-root-complex capability, this requires an external PCI-e switch (chip in a box) that connects to multiple host machines via native PCI-e. Or, the multi-root switch can be integrated in the backplane of a blade chassis. A few years ago, multi-root PCI-e for SR-IOV seemed to be all the rage. I recently tried to google for some products, and it doesn't seem to be so much in vogue anymore - or maybe it's just so obvious (implicit in some products) that it doesn't make headlines anymore...
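On the host OS side, carving an SR-IOV capable NIC into virtual functions is a one-liner on a reasonably recent Linux - the interface name and VF count below are hypothetical examples, and it needs root plus an SR-IOV capable NIC and BIOS:

```
# ask the PF (physical function) driver to spawn 4 VFs; each VF then
# appears as a PCI device of its own, assignable to a VM via VT-d
echo 4 > /sys/class/net/eth0/device/sriov_numvfs
lspci | grep -i "virtual function"
```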

As for IRQ's... IRQ's alone are nowadays message-signaled for the most part (for most of the chipset-integrated peripherals). PCI-e devices are per definition MSI compliant (MSI = one ISR per device) and most of them actually use MSI-X, where one device can trigger several interrupt vectors (ISR's), such as "one for RX, one for TX, and one global" with modern Intel NIC's. Even before PCI-e MSI's, the IO(x)APIC present in most machines since maybe the Pentium 4 could route any IRQ line to any CPU core (any CPU core's local APIC). Considering all this, I'm wondering what the problem is with assigning a particular IRQ to a particular CPU core (running a VM instance). Perhaps the IRQ's are the least problem. Perhaps the difference with VT-d is that the mechanism is more opaque/impenetrable to the guest OS (the guest OS has less chance of glimpsing the host machine's full physical setup, and maybe tampering with it). That's my personal impression.
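Incidentally, pinning an IRQ to a CPU core can be done by hand on any plain Linux box, no hypervisor involved - the IRQ number below is a hypothetical example, and it needs root:

```
# list the MSI-X vectors a NIC has registered (one line per vector)
grep eth0 /proc/interrupts
# pin vector 24 to CPU core 1 (the value is a hex bitmask of cores)
echo 2 > /proc/irq/24/smp_affinity
```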

IRQ's on PCI are, per definition, PnP (except for some *very* exotic exceptions, where you can specify in the BIOS which GSI input on the chipset triggers which interrupt number in the IO/APIC, or where you can jumper a PCI-104 board to trigger one of the PCI interrupt lines, one of your own choice). In a virtualized setup however, the IRQ routing must follow the admin-configured setup of "which VM owns which PCI device". PnP with human assistance, I would say.

Frank Rysanek

Re: Sustainable push forward

That upvote is from me, thanks for your response.

In my case, it's indeed a matter of being somewhat inertial and lazy. The scale is relatively small and hasn't pushed me very hard in that direction. I'm not a full time admin, and the 4 servers that we have in the cabinet (2x Windows, 2x Linux) are not much of a problem to cope with. An upcoming migration of an important app onto new Windows (2003 end of support) will raise that to 6, temporarily (read: until everybody moves their small internal projects to the new infrastructure, read: possibly forever). So far I've been approaching all this by keeping the hardware uniform and keeping the Linux distros hardware-agnostic. I'm doing any revamps of the Windows hardware in "waves", to save a bit on the spare parts. We're a hardware shop ourselves, so I always have some old stuff lying around - all I have to hoard is an extra motherboard in every wave. There's a server or two elsewhere in the building - machines that I prefer to have physically separate from the main cabinet.

Other than the small scale, I'm a classic case for virtualization - I have Windows and Linux, and I'm too young to be a conservative old fart (which is how I actually feel in many respects :-) = I hardly have an excuse for my laziness...

Regarding potential virtualization, one possible consideration is the organizational environment. I'm ageing along with a stable gang of merry colleagues who are less conservative than I am, but more in the way of "if it's not click'n'done, like Apple, there's something wrong with it". On the job they're all well versed in managing Windows across different pieces of physical hardware, and are even ahead of me in terms of virtualization on a small scale (for testing purposes of Windows Embedded images etc.) but - they're not very good at debugging firewalls and heterogeneous tech. I'm wondering what an additional layer of indirection would mean for them, if I get hit by a car someday... it's indeed a matter of documentation and deliberate internal sharing of knowledge. Or outsourcing the whole darn internal IT thing (in a house full of computer geeks).

After your comments, my impression of virtualization boils down to approximately "within a time span of several years, it will decouple your OS versions from the physical hardware and its failures, you will only have one physical machine to cater for, and yours will be the choice when to migrate the VM's to less archaic OS versions (which you have to do anyway, no escaping that) = at a time when it suits you."

Frank Rysanek

Sustainable push forward

So you've virtualized all your legacy boxes. You haven't just imaged the old versions of Windows, Netware or whatever have you - you've even installed modern Windows versions in the VM partitions, reinstalled/upgraded/replaced the apps etc. Instead of a 42U rack cabinet, you now have a pair of modern quad Xeon servers (because if it was only one server, that would be a single point of failure, right?). Now finally you can juggle the system images at a whim and Ghost has become a fading memory.

Oh wait - for a proper virty orgasm, you need an external storage box to centralize your storage of system images and data volumes. Heheh - or two storage units, to avoid the single point of failure... because disk drives, RAID controllers and power supplies are all eventually perishable. Fortunately the storage attachment technology doesn't matter much (SAS/FC/IB/PCI-e/iSCSI?) as long as you have a way of getting your data out of the old storage box a couple years down the road. To the hypervisor, the guest images are just files - so you only need a way of moving the files around (actually forward).

Next question... will your system images of today be compatible with a future hypervisor release 5 years down the road? What about 10 years? Will your colleagues 10 years down the road be able to maintain that old hypervisor, to restore the host OS from backups onto bare metal? Ahh yes - you can upgrade the host/hypervisor OS regularly / incrementally through the years. If you have a myriad OS images with non-trivial virtual network interconnects between them (just a LAN and DMZ with some servers in each, plus a firewall in another partition) - will your colleagues 10 years down the road be able to find their way around this? Yes of course - it's a matter of proper documentation, and passing the wisdom on... Will the virtualization make it any easier for your successors?

Isn't it a matter of replacing one problem (supporting an old OS on old bare metal) with the same problem in a more layered and obfuscated reincarnation? (Supporting your host OS / hypervisor on the ageing bare metal, and supporting the old guest VM's in potentially new host OS / hypervisor releases?)

To me, the article is pretty disturbing. I do feel the old git taking over in my veins...

Post-pub nosh neckfiller: Bryndzové halušky

Frank Rysanek

Halušky with cabbage

Regarding the alternative recipe with cabbage - yes, that's the less radical version, making Halušky more accessible to non-Slovaks :-) The cabbage is supposed to be pickled/fermented/sour (Sauerkraut), definitely not fresh and crisp. Not sure at what stage the cabbage gets mixed in - it's definitely not served separately and cold.

Frank Rysanek

Bryndza

Without Bryndza, you cannot say you ate the real deal. The gnocchi-like "carrier", although some may like it alone (I do :-) is just a dull background to the incredible and breathtaking flavour of genuine Bryndza. Not sure if any British sheep cheese can rival the raw animal energy of Slovak Bryndza. Unforgettable. I'm not a Slovak - to me, once was enough.

BAE points electromagnetic projectile at US Army

Frank Rysanek

the one thing I don't get...

How do you fire this, without nuking your own onboard electronics?

Gates and Ballmer NOT ON SPEAKING TERMS – report

Frank Rysanek

Re: to buy a failing company

To buy a company in trouble can be a successful strategy for some investors.

If it wasn't for the fact that Nokia was a technology giant, it might be a classic choice of Warren Buffett.

The Nokia phone business did have several steady revenue streams, several development teams working on some interesting projects, several good products just launched or close to a launch (which could get refactored in following generations, but didn't). As far as I can tell from outside, they might as well keep on going with a profit, if they had a chance to selectively shed some fat in terms of staff and internal projects, get more focused and "stop switching goals halfway there".

Microsoft's only plan with Nokia is to have its own vehicle for Windows Phone, which means that much of Nokia's bread-and-butter technology legacy has been wasted, and many legacy Nokia fans left in a vacuum.

Business is back, baby! Hasta la VISTA, Win 8... Oh, yeah, Windows 9

Frank Rysanek

why upgrade; OS maintenance over the years

In terms of the underlying hardware architecture, for me the last true HW motive that would warrant an upgrade was the introduction of message-signaled interrupts. MSI has the potential to relieve us all of shared IRQ lines. It required a minor update of the driver framework - and I'm sure this could've been introduced in Windows XP with SP4. Well it got introduced as part of Vista (or let's discard Vista and say "seven") - and was a part of a bigger overhaul in driver programming models, from the earlier and clumsier WDM to the more modern and supposedly "easier to use" WDF. Along came a better security model for the user space. With all of these upgrades in place, I believe that Windows 7 could go on for another 20 years without changing the API for user space. No problem to write drivers for new hardware. USB3 and the like don't bring a paradigm shift - just write new drivers, and that change stays "under the hood". Haswell supposedly brings finer-grained / deeper power management... this could stay under the hood in the kernel, maybe catered for by a small incremental update to the kernel-side driver API.

Linux isn't inherently long-term unmanned/seamless either. An older distro won't run on ages-younger hardware, as the kernel doesn't have drivers for the new hardware, and if you replace an ages-old kernel with something much more recent, you'll have to face more or less serious discrepancies in kernel/user interfaces. Specifically, graphics driver frameworks between the kernel and X Windows have been gradually developing, and e.g. some "not so set in stone" parts of the /proc and /sys directory trees have also changed, affecting marginal stuff such as hardware health monitoring. Swapping kernels across generations in some simple old text-only distro can be a different matter (can work just fine within some limits), but that's not the case for desktop users. Ultimately it's true that in Linux, the user has much more choice between distros and between window managers within a distro, and gradual upgrades to the next major version generally work. And, your freedom to modify anything, boot over a network etc. is much greater than in Windows. Specifically, if it wasn't for Microsoft pushing the UEFI crypto-circus into every vendor's x86 hardware, you could say that Linux is already easier to boot / clone / move to replacement hardware than Windows 7/8 (the boot sequence and partition layout are easier to control in Linux, with fewer artificial obstacles).

I'm curious about Windows 9. It could be a partial return to the familiar desktop interface with a start menu, and legacy x86 win32 API compatibility. Or it could be something very different. I've heard suggestions that Microsoft is aiming to unify the kernel and general OS architecture across desktop / mobile / telephones - to unite Windows x86 and Windows RT. From that, I can extrapolate an alternative scenario: Windows 9 (or Windows 10?) could turn out to be a "Windows CE derivative", shedding much of the legacy Win32 NT API compatibility, legcuffed to your piece of hardware using crypto-signed UEFI, and leashed to the MS Cloud (no logon without IP connectivity and a MS cloud account). All of that, with a "traditional desktop" looking interface... You don't need much more from a "cloud terminal". I wouldn't be surprised.

Moon landing was real and WE CAN PROVE IT, says Nvidia

Frank Rysanek

radiosity rendering? HDR?

When was the first time that I read about "radiosity rendering"? Deep in the nineties maybe? Though at that time, it was mentioned as "the next level after raytracing"... This seems more like texture mapping (not proper raytracing) but with an additional voxelized radiosity-style upgrade to "hardware lighting". There are probably several earlier "eye candy" technologies in the mix - objects cast shadows, did I see reflections on the lander's legs? Not sure about some boring old stuff such as MIP mapping, bump mapping etc.

I.e. how to make it look like a razor-sharp raytraced image with radiosity lighting, while in fact it's still just a texture-mapped thing, the incredible number crunching horsepower (needed for raytracing+radiosity) has been worked around, approximated by a few clever tricks. Looks like a pile of optimizations. Probably the only way to render this scene in motion in real time, on today's gaming PC hardware. BTW, does the "lander on the moon" seem like a complicated scene? Doesn't look like a huge number of faces, does it?

I forgot to mention... that bit about "stars missing due to disproportionate exposure requirements for foreground and background" might point to support for "high dynamic range" data (in the NVidia kit). The picture gets rendered into an HDR raw framebuffer, and the brightness range of the raw image is then clipped to that of a PC display (hardly even an 8-bit color depth). To mimic the "change of exposure time", all you need to do is shift the PC display's dynamic range over the raw rendering's dynamic range... Or it could be done in floating point math. Or it could get rendered straight into 8 bits per color (no RAW intermediate framebuffer needed), just using a "scaling coefficient" somewhere, in lighting or geometry...
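That "shift the display's range over the raw range" trick is trivial to sketch in code - the radiance numbers below are made-up illustrative values, not anything from the NVidia demo:

```python
def tone_map(raw, exposure):
    """Clip linear HDR radiance values into 8-bit display values.
    'exposure' is the scaling coefficient standing in for exposure time."""
    return [min(255, max(0, int(v * exposure * 255))) for v in raw]

# sunlit lunar regolith vs. a faint star, in relative linear radiance
scene = [1.0, 0.00005]
foreground_exposed = tone_map(scene, 1.0)      # surface OK, star clips to black
background_exposed = tone_map(scene, 20000.0)  # star visible, surface blown out
```

With the short "exposure", the star lands below a single display step and vanishes - which is exactly why the rendered frames show no stars.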

Seems that the buzzwords like HDR, radiosity or raytracing are not enough of an eyewash nowadays. The NVidia PR movie is clearly targeted at a more general audience :-)

BTW, have you ever flown a passenger jet at an altitude of 10+ km, during day time? Most of us have... at those altitudes, you typically fly higher than the clouds. There's always the sun and enough of your normal blue sky towards the horizon... but, have you tried looking upward? Surprise - the sky looks rather black! And yet there's not a hint of stars.

Three photons can switch an optical beam at 500 GHz

Frank Rysanek

Re: Awsome.

At this switching speed and gain... wouldn't it be an interesting building block for all-optical processors? Actually, I can imagine why not: there's no way to miniaturize this to a level competitive with today's CMOS lithography.

Intel's Raspberry Pi rival Galileo can now run Windows

Frank Rysanek

The Galileo has no VGA

No VGA, no point in installing Windows on the poor beast.

Well, you could try with a Mini PCIe VGA card, or a USB VGA adapter... both of which are pretty exotic, in one way or another.

OpenWRT gets native IPv6 slurping in major refresh

Frank Rysanek

Re: So much better than original FW

A switch from original firmware to OpenWRT has improved signal quality and reach? Not very likely, though not entirely impossible...

Other than that, TP-Link hardware of the recent generation is a marvellous basis for OpenWRT. It runs very cool and has very few components apart from the Atheros SoC - this looks like a recipe for longevity. Only the two or three electrolytic caps would better be solid-polymer (they're not); I haven't found any other downside.

For outdoor setups I prefer Mikrotik gear (HW+FW) in a watertight aluminum box. And even the RB912 has classic aluminum electrolytics... so I cannot really scorn TP-Link for not using solid-polymer caps in their entry-level SoHo APs.

Dell exec: HP's 'Machine OS' is a 'laughable' idea

Frank Rysanek

Re: no need for a file system

IMO the abstraction of files (and maybe folders) is a useful way of handling opaque chunks of data that you need to exchange with other people or other machines. Keeping all your data confined to your in-memory app doesn't sound like a sufficient alternative for that "containerized data interchange" purpose.

Frank Rysanek

Re: a game-changer

That's a good idea for handhelds. Chuck the RAM/Flash split and just use a single memory technology = lower pin count, less complexity, no removable "disk drives". Instant on, always on. A reboot only ever happens if a software bug prevents the OS from carrying on. Chuck the block layer? Pretty much an expected evolutionary step after Facebook, MS Metro, app stores and the cloud...
