'Unikernels will send us back to the DOS era' – DTrace guru Bryan Cantrill speaks out

Some heralded Docker's acquisition of UK-based Unikernel Systems last week as the golden dawn of a post-container era. Others showed healthy skepticism. One person firmly in the latter camp is Bryan Cantrill, who typed up a long blog post on why he believes unikernels are "unfit" for production. Cantrill is chief technology …

  1. Anonymous Coward
    Anonymous Coward

    Am I missing something?

    Surely embedded systems could profit.

    1. Mage Silver badge
      Mushroom

      Embedded?

      You, sir, are barking mad. Decent embedded systems are even more secure and robust. They need to be, with no user with a keyboard, mouse and screen to fix stuff, and maybe massive loss of equipment or life if one "crashes".

  2. Pete 2 Silver badge

    Hard and fast

    Cantrill seems to be promoting the idea that "proper" OSes, like Solaris / Unix, are more reliable because there is an interface that stops user-space mistakes migrating into kernel-space. This is obviously flawed, as anyone who's ever made a system call with incorrect parameters will know. Or as anyone whose application sits waiting on an I/O to a networked device can see - after that device (or the network) has gone away.

    In theory, what he proposes has merit: a reliable, resilient, impenetrable wall between the two. However, faults in device drivers, poorly written code, bad APIs and bad implementations mean we never get this in practice.

    And then there's the performance issue. Moving between kernel and user space takes time. The more checks, tests and privilege validations you put in place, the longer it takes (I recall that Sun moved their telnet server from user-space to kernel-space in the 90's for this very reason), and the slower your machine gets when you scale up to production levels of load.
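
    As a rough, hypothetical illustration of that boundary cost (a sketch assuming Linux, where Go's syscall.Getpid is an uncached round trip into the kernel), timing a tight loop of minimal syscalls mostly times the crossing itself:

    ```go
    package main

    import (
        "fmt"
        "syscall"
        "time"
    )

    func main() {
        // Each Getpid call is a full user -> kernel -> user round trip,
        // so this loop mostly measures the cost of crossing the boundary.
        const n = 1_000_000
        start := time.Now()
        for i := 0; i < n; i++ {
            syscall.Getpid()
        }
        elapsed := time.Since(start)
        fmt.Printf("%d syscalls in %v (~%d ns per crossing)\n",
            n, elapsed, elapsed.Nanoseconds()/n)
    }
    ```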

    One area that he does flag up is the ability to debug your applications. But isn't this just a function of the tools that would be built into a unikernel? If they aren't there now, that doesn't mean they couldn't be in the future. It might even bring about the return of hardware-based debugging - which has the advantage of sitting outside the running system and therefore not affecting its performance or logic flow.

    1. Anonymous Coward
      Anonymous Coward

      Re: Hard and fast

      So you go from the real-OS situation where it takes a flaw in something like a device driver, to a DOS world where it takes a flaw LITERALLY ANYWHERE. I take it you're one of those "what good are static types? I'm clever!" guys.

      1. the spectacularly refined chap

        Re: Hard and fast

        So you go from the real-OS situation where it takes a flaw in something like a device driver, to a DOS world where it takes a flaw LITERALLY ANYWHERE. I take it you're one of those "what good are static types? I'm clever!" guys.

        No, it has to be a flaw in the app that breaks through the built-in protection provided by the hypervisor. That needs to target some vulnerability in the hypervisor... so no different to running on a conventional OS then.

        Of course, if your app runs on a bare hypervisor rather than a conventional hypervisor/OS/app stack, you only have a single layer of vulnerability rather than two. You also have only a single layer of, for example, memory management running rather than two, and that running mostly in silicon rather than needing another layer of emulated software support on top - yes, even with the assistance of hardware virtualisation.

        No, it's not for everyone but for a VM that is only running a single app I can't see the issue.

        1. Anonymous Coward
          Anonymous Coward

          Re: Hard and fast

          Oh, did you want your single-purpose unikernel app to write to your production database?

          Where is your hypervisor now?

          1. the spectacularly refined chap

            Re: Hard and fast

            Oh, did you want your single-purpose unikernel app to write to your production database?

            Where is your hypervisor now?

            Precisely where it should be: staying out of the way.

            If you build access control into your clients, come back when you have something meaningful to add.

            The more software you have involved, the greater the vulnerable surface. How many 0-days are in your operating system? How many affect you if there is no OS?

    2. Anonymous Coward
      Anonymous Coward

      Re: Hard and fast

      Solaris in.telnetd in kernel space? Have you a reference for that? I've never seen it as anything other than a plain old inetd daemon.

      Maybe you're thinking of the CIFS server, which was introduced in OpenSolaris and Solaris 11, but that's 5 years ago, not 20.

      1. A Non e-mouse Silver badge

        Re: Hard and fast

        Solaris in.telnetd in kernel space?

        Solaris implemented a system called STREAMS which allowed some traffic handling to happen in the kernel rather than bouncing to and from user space. This was to improve performance and reduce overhead.

    3. Anonymous Coward
      Anonymous Coward

      "The more checks, tests and privilege validaions you put in place...."

      ... the more secure your OS and application are.

      Yes, it takes time. That's what fast processors are for. Security needs additional processing - you can get rid of it and obtain faster applications. DOS was very, very fast. Utterly insecure, though.

      1. Anonymous Coward
        Anonymous Coward

        Re: "The more checks, tests and privilege validaions you put in place...."

        While I can't speak to how much more efficient the unikernel approach might be, in all the software packages I wrote, sanity and validation checks consumed a few percentage points of performance. Having survived the DOS world with some of my sanity and humor intact, I'll take a pass on the unikernel approach, thank you.

        [Actually I predate DOS, having used VMS and an introduction to AT&T Unix in my teens. Fun times. DOS was never fun!]

      2. Richard Plinston

        Re: "The more checks, tests and privilege validaions you put in place...."

        > DOS was very, very fast.

        No, it wasn't. Display calls to MS-DOS were very, very slow. Display calls to the BIOS were passably fast. If you wanted a very fast display you bypassed both and did direct screen writes, just like most professional software did.

        MS-DOS was also very slow on file access, in particular on large data files that required random access, due to the way FAT worked. Large ISAM files were particularly slow compared to other systems because, in order to access a particular position within the file, the OS had to start at the directory entry and follow down the FAT table until it found the appropriate cluster - for every single access. That is why defragging was required: by bringing all the FAT entries for a file together, it reduced the number of blocks that had to be read. Inode-based systems, for example, could access any part of a large data file with many fewer block reads.

        DR-DOS had a feature that was not available in standard MS-DOS: the cluster size could be specified when a partition was formatted (other utilities could also do this). On a particular partition size MS-DOS would only give, say, a 2KB cluster size. Using DR-DOS to give an 8KB cluster size instead would yield a 3x improvement for random access to a 1MB ISAM data file with no other change, solely because there were 4x fewer FAT entries to traverse.
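
        A hypothetical sketch of the walk being described (the function name and the in-memory FAT slice are illustrative assumptions, not how MS-DOS was implemented):

        ```go
        package fat

        // clusterForOffset follows a FAT-style cluster chain from a file's
        // first cluster to the cluster holding the requested byte offset -
        // the walk repeated on every random access. With 2KB clusters, a
        // seek near the end of a 1MB file follows ~512 links; with 8KB
        // clusters, only ~128 - the 4x difference noted above.
        func clusterForOffset(fat []uint16, firstCluster uint16, offset, clusterSize int) uint16 {
            hops := offset / clusterSize
            c := firstCluster
            for i := 0; i < hops; i++ {
                c = fat[c] // each link may cost another FAT sector read from disk
            }
            return c
        }
        ```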

        The only reason that MS-DOS was perceived as being 'fast' was because it could be bypassed by programs and didn't get in the way.

    4. Dan 55 Silver badge

      Re: Hard and fast

      So, as these things will inevitably be connected to the Internet, a buffer overflow ends up trying to exploit the imperfect hypervisor and from there... the world...

      Perhaps it's not such a good idea.

    5. Destroy All Monsters Silver badge
      Flame

      Re: Hard and fast

      And then there's the performance issue.

      The last argument of the incompetent

  3. RIBrsiq

    I find that I need a "proceed with caution" icon, as this is a bit outside my area of expertise and thus what I am about to write might be less sage wisdom and more senseless drivel.

    Well, more so than usual, anyway:

    It seems to me that if a VM is running only a single application anyway, then a DOS-like approach might not be so senseless, if it gives a performance boost and/or makes life easier for the application and/or the OS and their developers. After all, it's now the hypervisor that's separating the different VMs and their applications, protecting the system (i.e. the virtualization host and all the different running VMs) from those that misbehave.

    1. This post has been deleted by its author

      1. Bronek Kozicki

        That's my assumption as well - there is still separation between container and hypervisor, by means of a switch from "Ring -1" (i.e. virtual kernel mode) to the actual kernel in hypervisor mode (i.e. Ring 0), and the approach offered by unikernels means that the user-mode application (i.e. the container) simply does not bother to exit from its "Ring -1"; from the point of view of the hypervisor it is a user-mode application which does not need to run as root etc.

        I'd like to know if this assumption is correct, actually.

    2. Sil

      Hypervisors aren't inherently safe

      Hypervisors aren't inherently safe, even if they aim to be.

      Example: http://xenbits.xen.org/xsa/advisory-148.html, a bug that has made guest OSes vulnerable for 7 years.

      1. RIBrsiq

        Re: Hypervisors aren't inherently safe

        "Hypervisors aren't inherently safe, even if they aim to be".

        All code is inherently buggy, yes.

        Hell, even HW is buggy:

        http://arstechnica.com/gadgets/2016/01/intel-skylake-bug-causes-pcs-to-freeze-during-complex-workloads/

        So I guess there's a case to be made for "defence in depth" for Important Stuff™. But probably not applicable for most workloads, I think.

        1. Paul Shirley

          Re: Hypervisors aren't inherently safe

          ""defence in depth" for Important Stuff"

          Provided you remember to defend against the inevitable misjudgement of what is "Important Stuff" every time a dev builds something, repurposes something else or otherwise cocks up.

      2. Bronek Kozicki

        Re: Hypervisors aren't inherently safe

        OS kernels are not inherently safe, either. But I like the approach of "do one thing, and do it well", and the unikernel design seems to promote it (perhaps not directly). This is because you can have 1) the actual kernel running the hypervisor, with not much functionality enabled besides (e.g. no KEYS - see the more recent advisory in the Linux kernel), and then on top of it, and with (hopefully!!) proper isolation, 2) user processes running the application code as efficiently as possible and with little dependency on high-level services provided by the hypervisor. And these user processes actually happen to be unikernels, each dedicated to running a single application only.

        But, as with anything, I imagine you can also have a bad system built from the same blocks - e.g. with unnecessary services provided (and possibly breached) in the hypervisor, and/or unnecessary coupling of containers.

        I will be watching on the sidelines.

    3. Anonymous Coward
      Anonymous Coward

      The hypervisor could protect access to shared resources of the host, but would have no way to protect data inside the unikernel app. Unless the app is very simple, there could be different threads at different privileges running even inside a single app. Take for example a server that needs to control access to different resources depending on the user requesting them (be it SMB, HTTP or whatever). If it is the kernel that enforces process/thread security, and the application has no way (short of bugs/vulnerabilities) to modify it, security is much higher. Make everything run in a single security context, with everything accessible, and enforcing security becomes much more difficult.

      It looks to me like the simplistic model of HTTP requests with no security but some form of cookie, handled entirely at the application level with no mapping to the underlying OS security model, is the driver of these approaches - but many systems are far more complex, have stronger security needs than a simple web request, and need security enforced beyond the application code.

  4. MacroRodent

    The Wheel of Reincarnation keeps revolving...

    From my operating system classes, I remember that the idea of running single-tasking operating systems inside a hypervisor was developed by IBM as early as the 1960s, as their solution for the then new-fangled idea of timesharing. Wikipedia has a writeup here: https://en.wikipedia.org/wiki/CP/CMS

    1. Zippy's Sausage Factory

      Re: The Wheel of Reincarnation keeps revolving...

      You appear to have thought my thoughts before I thought them. Which is comforting, as that means I now have deja vu twice. Have an upvote.

    2. Anonymous Coward
      Anonymous Coward

      Re: The Wheel of Reincarnation keeps revolving...

      Hipster Web 3.0 Millennial: History (but not privacy) is for lusers.

  5. Infernoz Bronze badge
    Holmes

    I've used non isolated OS before and mistakes bite hard

    On an OS which doesn't isolate apps, it only takes a pointer mistake and the OS can crash hard, and you have no idea why, so you have to guess more!

    Passing data between app and OS does not need to be slow if you use lightweight queued message passing and minimise memory copying.
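
    A minimal, purely illustrative sketch of that idea (using Go in place of a real app/OS boundary): hand ownership of buffers over a bounded queue instead of copying their contents.

    ```go
    package main

    import "fmt"

    func main() {
        queue := make(chan []byte, 8) // lightweight bounded message queue
        go func() {
            for i := 0; i < 3; i++ {
                buf := []byte(fmt.Sprintf("payload %d", i))
                queue <- buf // only the slice header crosses; the bytes are never copied
            }
            close(queue)
        }()
        for buf := range queue {
            fmt.Println("received:", string(buf))
        }
    }
    ```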

    1. Caoilte

      Re: I've used non isolated OS before and mistakes bite hard

      I wouldn't want to write a unikernel app in C for this reason, but MirageOS is written in OCaml, and either that makes pointer mistakes impossible and the entire program simpler to verify correct before deployment, or we're not ready for unikernels. Given how bug-free Haskell/Scala code tends to be, I can believe that OCaml is similarly reliably correct.

    2. Paul Shirley

      Re: I've used non isolated OS before and mistakes bite hard

      Even worse is when they just spray memory but don't crash. You might not even realise your data is corrupt for a very long time, and finding those bugs is a nightmare.

  6. Anonymous Coward
    Anonymous Coward

    Unikernel? New?

    This sounds like User Mode Linux with an executable loaded into the UML VM's kernel space.

    That is, what, 17 or 18 year old tech which has been beaten into oblivion by now and has an extremely well known performance envelope.

    What it can do:

    1. It can perform network-wise - replacing the stock network drivers gives you 3G+ per core forwarding speeds under pure paravirtualization and 6G unidirectional to "blackhole sink" - same as bare metal Linux. (been there, done that, patches are published, but not in mainline kernel).

    2. It can perform disk-wise - same, the stock drivers can be replaced to jack up performance 2x (been there, done that, patches are published, but not in mainline kernel).

    3. When you combine 1+2 you can perform app-wise in a monolithic single-threaded app (be it in kernel space or in UML emulated userspace) if it preallocates all the memory it needs. If you feel very UNI, just run the app instead of sysV init (been there, done that, this requires no patching).

    4. You get all kernel APIs and/or userspace APIs depending on how you use it, so you do not need to reinvent the wheel. For all practical purposes you can (ab)use the Linux kernel as a library this way.

    This works around the issue known as "UML is slow", which under the hood is that it sucks rocks when it tries to exec a new process, because that causes a full memory synchronisation and TLB flush. In the case of a "uni-app", especially in virtualized kernel space, that is not a problem. It never execs, and it can rock (and blast at stupid speeds per core which kvm, xen and vmware can only dream of).

    There is nothing new here, move along. Just a classic case of Rabid Californication - one over-inflated Californicating Entity overpaying a stupid amount of money for another Californicating Entity. Anywhere outside the Silly Valley distortion field the money exchanged for the Californicating Cross Pollination will beggar belief.

    Posting anon - so people do not associate my El Reg nom de guerre with my real name :)

  7. John Robson Silver badge

    Containers...

    If you are spinning up a set of instances for a specific task, then does the occasional instance failure matter?

    I mean, I'm with him on the debugging thing - it would be nice to know that bad data was the cause of the crash - but in the case of spinning up instances, is there not an argument that:

    a) efficiency of spinup is more important than normal

    b) reliability is less so

    1. Anonymous Coward
      Anonymous Coward

      "does the occasional instance failure matter?"

      Depends what is happening inside that instance. Does an instance serve a single user or transaction? If not, a failure will bring down all its users' activities or transactions. How far are people going to push the granularity of containerized applications?

      Take a simple instance of a web server - how many users will it be serving concurrently? Otherwise the unikernel app should be trimmed down to a single process/thread to minimize impact - but then you have a single process/thread running on a kernel (the hypervisor...) - back to square one (with maybe even more overhead due to virtualization...)

    2. Anonymous Coward
      Anonymous Coward

      Re: Containers...

      > b) reliability is less so

      Works for a PhD paper, but in the real world you may wanna know where these occasional problems with your hedge fund come from.

  8. Anonymous Coward
    Anonymous Coward

    Unikernel == reinvented process

    A Unikernel is just a process without the standard syscalls.

    It looks fast at the moment because most of the POSIX features are missing. In future these features will get re-added to form a bloated and slow mess, but effectively it will be recognisable as a Linux process...

    "They who misunderstand UNIX are destined to reinvent it badly..."

  9. HmmmYes

    DTrace/Zones/ZFS ran into a couple of problems. They matured when Sun was falling to bits. Then Oracle bought them. Hard to come back from that.

    The people at SmartOS *ought* to be supported, as they will, hopefully, keep those technologies going in the open world.

    Docker is a bodge. I can see that. It allows you to run up an application sat on top of a socket. It is not a general purpose OS - it's an application spoof thing. I'm investing time and (not much) money in it, as Google and Amazon are cutting their throats competing and I can trial some stuff for free or at least very cheap. Hey, if a company wants to give me stuff cheap so I can make money, then who am I to complain?

    I was looking at Google's App Engine but there is too much of a technology buy-in there - Python 2.7 + Google's API. I want to avoid tying myself to a technology and a company.

    If the stuff is successful then I will scale to a SmartOS hosting company when I need to get bigger.

    I understand the limitations of Docker. I'm not sure a lot of people (hipsters mainly, and management) do.

    One prediction on Docker - they'll run into the same problems with UnionFS on Linux that everyone does. They'll get 95% of the way and have to fudge. At least this will speed up the improvement of ZFS on Linux, as that's the only mature filesystem that supports the features Docker needs.

    1. Anonymous Coward
      Anonymous Coward

      ZFS is fine. It's not in Linux because the GPLv2 won't allow mixing with code that is differently licensed. Same with DTrace. They've been in FreeBSD for yonks.

      Given that FreeBSD has a better network stack and ZFS, what exactly is Linux for these days?

      1. HmmmYes

        Linux for? Beardy hipsters?

        Seriously, running anything that needs a graphics driver with hardware less than 7 years old.

        I've found BSD graphics support to be more miss than hit compared with Linux.

        And you get those funny ethernet devices that crop up that are not supported by BSD.

        FreeBSD is great - but you do have to be careful with the HW combos.

        1. Bronek Kozicki
          Pint

          ZFS is also fine when running on Linux, because there is no pleasure greater than building one's own kernel on every minor release (and the ZFS modules on top of it)

          While the packages are being built -->

          1. HmmmYes

            Are you running ZFS on Linux, Bronek?

            Last time I looked, ZFS on Linux was a bit disconnected/loose.

            Is it stable now?

            1. Bronek Kozicki

              It's fine, it's been stable for long enough and performs well (I use a small NVMe device for the ZIL and L2ARC). There are always some open issues at ZoL, but find me a non-trivial project with no open issues :) The trick is to find the right recipe for your distribution - I use Arch with archzfs and my own fixes

            2. phil dude
              Linux

              zfs, stable....

              I have been using it for over a year with Debian on my molecular modeling machine. The DKMS mechanism works very well.

              It is very stable, has snapshots (copy on write), and I can get 500MB/s (0.5GB/s) from the 5 enterprise disks it lives on in RAID-Z3. Granted it is SAS, and I have 256GB of main memory, but it is invisible IMHO.

              BTRFS has got much better, though in Nov '14 when I got this box, BTRFS nearly shredded my SSD...

              BTRFS is the long range vision (compatible license), ZFS is for right now.

              P.

      2. Lennart Sorensen

        Linux is for everyone that wants an OS that works on pretty much all hardware and is scalable. FreeBSD is way behind in both those areas. And then there is the horror of the BSD user space which is just intolerable.

      3. Down not across

        Given that FreeBSD has a better network stack and ZFS, what exactly is Linux for these days?

        Linux is probably a bit more versatile (and does support a rather obscene amount of obscure hardware) as a desktop/workstation.

        For a server I do prefer *BSD given a choice. That is not to say Linux doesn't work as server, of course it does.

  10. dajames

    Skeptical?

    As El Reg is a .co.uk site I would have expected the spelling "sceptical". That's how we do things over here.

    My dictionary, however, tells me that the root of the word is the Ancient Greek "Skeptikos" ("σκεπτικός"), meaning "one who observes", so the use of the letter 'k' has some history as a transliteration of the Greek letter Kappa.

    Were it applied to both Kappas in the word I could buy that argument, but then we should have the spelling: "skeptikal".

    1. Anonymous Coward
      Anonymous Coward

      Re: Skeptical?

      I'm not sure I'm convinced by that argument. If only there were a word for that ....

  11. Peter Gathercole Silver badge

    "Operating systems these days..."

    Really? These days?

    Real operating systems have had a user/kernel split for 40 years or more.

    It was a fundamental feature of UNIX since Version/Edition 6 (my earliest experience, possibly longer), and in other OSs like DEC RSX-11 and VAX/VMS, and probably a host of other OSs from the same era.

    Even in the Microsoft world, Windows/NT must be 20 years old at least.

    DOS was a retrograde step that should have been strangled as soon as the 80286 became the dominant processor, and MS should really not have compromised on the initial security design of NT.

    What would have been even better would have been a desktop UNIX on a suitable architecture at a cost that suited the industry! Linux just came along too late!

    1. roytrubshaw
      Linux

      Re: "Operating systems these days..."

      "... and in other OSs like DEC RSX-11 and VAX/VMS, and probably a host of other OSs from the same era.

      Even in the Microsoft world, Windows/NT must be 20 years old at least. ..."

      <pedant>

      Not surprising - really - that they all share this feature (among others), as Dave Cutler was responsible for RSX-11M, VAX/VMS and then moved to Micro$haft to create Windows/NT! (Cool or what?)

      </pedant>

      1. Peter Gathercole Silver badge

        Re: "Operating systems these days..." @roy

        Why is that pedantic? I was aware that Dave Cutler did all of these, and I nearly mentioned it myself. I listed UNIX and the DEC operating systems, as these were from my own experience.

        I'm pretty certain that PrimeOS, MPE, VME, AOS, VOS, MTS (just a list of other time-sharing OSs that spring to mind from this era) also had this feature.

        1. Down not across

          Re: "Operating systems these days..." @roy

          I'm pretty certain that PrimeOS, MPE, VME, AOS, VOS, MTS (just a list of other time-sharing OSs that spring to mind from this era) also had this feature.

          Hmm, should I worry that I have used all of the above?

          MPE V was fun to write for, and HP's manuals on the intrinsics were quite good (if my memory serves).

    2. Herbert Meyer

      Re: "Operating systems these days..."

      "DOS was a retrograde step that should have been strangled as soon as the 80286 became the dominant processor, "

      There was an attempt to do so. It was called OS/2.

      Did you ever try to write segmented code in assembler for the Intel 8086/80186/80286? Oh, the horror.

      Progress had to wait for the 80386, with its big, flat memory model.

      1. bazza Silver badge

        Re: "Operating systems these days..."

        There was an attempt to do so. It was called OS/2.

        They should have waited for the 80386...

      2. Peter Gathercole Silver badge

        Re: "Operating systems these days..." @Herbert

        I quoted 80286, because it was the first Intel processor that made an attempt at privilege separation and an integrated memory management unit, even if it was only half-hearted.

        I know that it was poor in relation to the later 80386 systems, but it was a major step up from the 8086/80186.

        IMHO, it would have actually been better to have used the 68000 family of processors in the original IBM PC, because that family was designed with a memory management unit in the chip family right from the outset. But it was probably too expensive for IBM to consider in their original, stop-gap PC system which was never intended to spawn a whole sub-industry.

        I'm not sure about the security model of OS/2. As far as I remember, there was no real multi-user concept in OS/2, although there was some memory segregation between processes and probably the kernel as well.

        1. Mage Silver badge

          Re: 80386 vs 80286

          The 386 allowed more RAM. But the real advantage for PCs / MS was being able to run DOS apps etc at the same time. The 286 could only sensibly run a real OS.

          The Pentium Pro didn't have the fast switching of the 386, 486, Pentium I & II, so it ran NT fine, with DOS apps in NTVDM and WOW for 16 bit Windows. But it ran Win95, or any mix of code bases, slowly, as it didn't have that rapid switch of the 386. Win95 killed the Pentium Pro. The multi-chip PII in a plastic box on a plug-in card ran NT slower than the Pentium Pro, but ran a mixed bag of Win95 or Win98 stuff faster.

          1. Anonymous Coward
            Anonymous Coward

            Re: 80386 vs 80286

            The 286 lacked the virtual 8086 mode (used by Windows to run DOS in protected mode) and couldn't exit protected mode once it had entered it (without forcing the CPU through a warm reset).

            The 286 could address 16MB of RAM, and that was a fair amount then. Most 386s (and I guess a lot of 486s) never saw that much RAM - despite their 32 bit address space.

            What the 286 couldn't do easily was run many real mode DOS applications at the same time as a protected mode OS. The tricks used brought the 286 back to plain, single-thread, real mode, while virtual 8086 mode still runs in protected mode and can "emulate" multiple, separate real mode processors.

            Thus there was not much interest in developing an OS that couldn't run the large number of available DOS applications much better than DOS itself. A true 286 OS - even without a GUI - would have allowed multiprocessing, but applications would have needed to be rewritten for protected mode (and most DOS applications were written to directly access the hardware as well). LAN Manager 1.x, IIRC, was written for 286 protected mode - but it was just a network OS offering no desktop capabilities, with no need to run DOS applications. Microsoft had Windows running on the 286, but Windows itself really took off with version 3 and its applications, when 386s were already affordable enough and there was really no reason to run it on the older 286.

            Your information about the Pentium Pro looks largely inaccurate.

            1. Richard Plinston

              Re: 80386 vs 80286

              > A true 286 OS - even without a GUI - would have allowed multiprocessing, but applications would have needed to be rewritten for protected mode (and most DOS applications were written to directly access the hardware as well).

              There were several 286 protected mode OSs. MS even wrote one itself. MS-DOS 4.0 and 4.1 (not to be confused with the much later 4.01), also known as European DOS because Siemens and ICL (where I worked) used it briefly, was a 286 protected mode version of MS-DOS derived from 3.1 and 3.2 respectively. It also had limited multitasking in the background. It could run 'well behaved' DOS programs in protected mode and a single 'badly behaved' DOS program in real mode.

              The 'behaviour' was mainly that of memory access. On an 8086, or in real mode, a program could do segment calculations. Usually this was required to access memory arrays larger than 64Kb: the program would calculate a suitable segment/offset pair to give 'tiling' over the memory. This would break the 80286. In principle the OS could create selectors every 16 bytes to cater for a program doing these calculations, but there was a limit of 8000-odd selectors so it simply wasn't viable. I have a manual on DOS 4.0 here somewhere that describes how a compatible program can be written, and also the additional features that could be used.

              It simply wasn't good enough and was dumped when MS moved to MS-DOS 5 (not to be confused with the much later MS-DOS 5), which was renamed OS/2 during development.

          2. Richard Plinston

            Re: 80386 vs 80286

            > 386 ... the real advantage for PCs / MS was being able to run DOS apps etc at the same time.

            Yes, and that is an example of the IBM PC HW & SW holding back the industry for several years. They needed to wait for the 80386 to be able to do that.

          3. Lennart Sorensen

            Re: 80386 vs 80286

            The Pentium Pro was perfectly capable of running 16 bit code, the same as any other x86, and was fully compatible. The issue people had was that it wasn't very fast at 16 bit code compared to 32 bit code (which is what it was optimized for, with the new microcode pipeline). So 16 bit code didn't run any faster than on a Pentium, while 32 bit code was much faster; if you ran DOS or Windows 3.1 you might as well save your money and get a Pentium instead. The PII and later improved the 16 bit performance again and were hence much better upgrades.

        2. Richard Plinston

          Re: "Operating systems these days..." @Herbert

          > As far as I remember, there was no real multi-user concept in OS/2,

          Not until Ed Iacobucci formed Citrix to add multiuser facility to OS/2. Later they did the same for Windows NT.

        3. Lennart Sorensen

          Re: "Operating systems these days..." @Herbert

          Actually the 68020 was the first version to support an MMU, and the MMU was an external chip. The 68000 and 68010 did not have any support for an MMU. The 68030 was the first to be available with the MMU built in (and many variants did not include it).

          But at least they were 32 bit chips with a flat memory model (OK, only 24 address bits supported until the 68020, but the registers were 32 bit even before that), unlike the horror of the x86 family.

      3. Anonymous Coward
        Anonymous Coward

        Re: "Operating systems these days..."

        The 80386 just increased the segment size from 2^16 to 2^32 - so the OS could use a single segment in a flat design (segments are still there, but just load the descriptor once, then forget about it). That worked because the 386 also introduced paged memory management, which allows virtual memory with page granularity instead of whole segments - the 286 could only "swap" whole segments, and swapping 64K was feasible where swapping 4GB was not.

        But most OSes made bad use of x86 protected mode - partly because of speed, partly because of laziness. Note that later the no-execute bit for memory pages had to be introduced to cope with the fact that a single memory segment with read/write/execute privileges is inherently not secure.

        Proper use of segments and their access rights would have yielded more robust OSes.

      4. Mage Silver badge

        Re: "DOS was a retrograde..."

        Yes, the x86 was little more than a beefed up 8085 / Z80 with 64K segments. Intel had a translation tool, so CP/M-86 was a REALLY fast port of CP/M. DOS was MS's bought-in, reverse-engineered CP/M-86.

        Wang and others sold 286 PCs with Xenix or UNIX.

        MS even owned and sold Xenix for 80286 for a while. I installed it once, in 1987, though I don't know if the pre-troll original SCO owned it by then.

        The 8088 and 8086 weren't real 16 bit CPUs at all. You could only do the same things as on a Z80. Actually, later Z80s had an MMU and 512K RAM. (The Amstrad PCW was years later.) There were 3 or 4 real 16 bit CPUs before IBM even did the PC, which wasn't meant to be a serious project - hence the catalogue parts and the bought-in DOS. That's apart from multichip solutions that ran UNIX before 1980. The IBM PC was simply a big metal version of the Apple II with only text display (graphics later!) and only a 320K floppy. Compare the Victor 9000 / ACT Sirius 1 (same rubbish CPU), which came a year later in the USA but was released about the same time in the UK, as the IBM PC only really became available in the UK in late 1981.

        So the ENTIRE IBM PC HW & SW held back desktop computing for 5 to 10 years!

        1. Richard Plinston

          Re: "DOS was a retrograde..."

          > DOS was MS's bought-in, reverse-engineered CP/M-86

          No. 'DOS' was SCP's reverse engineering of CP/M-80 - allegedly of version 1.3, because the very early versions had a bug in their FCB handling that had been eliminated in CP/M 2.x.

          > The 8088 and 8086 weren't real 16 bit CPUs at all.

          The 8086 was. The 8088 only did 8 bit memory access. In some cases the 8088 was faster because, to change a single byte, the 8086 had to read the word it was contained in, change the byte and write it back, while the 8088 only needed to do the write.

          > You could only do the same things as on a Z80. Actually, later Z80s had an MMU and 512K RAM.

          The Z80 had a 16 bit address space; the MMU catered for bank-switching memory into this address space one bank at a time. Actually, most catered for a fixed segment, say 16Kb, for the core of the operating system, with a 48Kb bank above that which could be swapped. That could be done with the 8085 as well, such as on the ICL PC multiuser system running MP/M II.

          The 8086/8088 didn't have the limitation of a fixed segment reducing the effective segment size, and also had direct access to 4 segments at any one time. This meant that with a Z80 or 8085 the effective immediate memory size for a program was 48Kb, while with an 8088/8086 it was 256Kb. A Z80 program could use the MMU to page some of its data, but that added more complication.

          > The IBM PC was simply a big metal version of the Apple II with only text display (graphics later!) and only a 320K floppy.

          It is correct that the IBM PC was designed to compete against the Apple II that was turning up in IBM mainframe sites. Most of these were running VisiCalc or had Z80 SoftCards with CP/M. The IBM PC model A* only had a 160Kb floppy (vs 120Kb on the Apple II), had the same BASIC (AppleSoft was written by Microsoft), and IBM had paid to have VisiCalc, Peach, WordStar and other software that was running on the Apple (and SoftCard).

          > So the ENTIRE IBM PC HW & SW held back desktop computing for 5 to 10 years!

          That is arguable, on the basis that IBM, as a major manufacturer, legitimised the whole 'microcomputer industry' and made it grow beyond the hobbyist group that started it. On the other hand, the IBM PC was rather old technology even for the early 80s. The 8086 had been out since 1978; MS/PC-DOS 1.x was equivalent to CP/M from 5 years earlier and couldn't even support hard disks, user areas or sub-directories. DRI had had multiuser MP/M since 1978 (using bank switching on 8085/Z80, later on 8086). When MS-DOS 2.x was released, DRI was demonstrating Concurrent-CP/M-86 with pre-emptive multitasking and virtual screens.

          While much better hardware was available around the time of the IBM PC, it was, in my view, the Microsoft software that has held back computing for the last _30_ years.

          * The 5150 Model B (which I have here) is identifiable by a blue B in a circle stamped on the back panel. The main difference between the A and B is that the spacing of the board slots is narrower on the B.

      5. Down not across

        Re: "Operating systems these days..."

        Did you ever try to write segmented code in assembler for the Intel 8086/80186/80286? Oh, the horror.

        Bastard!

        You know, some things are meant to be forgotten. Now I need to try to re-erase that from my mind.

    3. Brewster's Angle Grinder Silver badge

      Re: "Operating systems these days..."

      There's still a "user/kernel" split. But we've gone from user/kernel to user+kernel/hypervisor (via user/kernel/hypervisor). So we're just returning to the two state model, while redefining what counts as user and kernel.

      1. Anonymous Coward
        Anonymous Coward

        Re: "Operating systems these days..."

        There's still a "user/kernel" split. But we've gone from user/kernel to user+kernel/hypervisor (via user/kernel/hypervisor).

        This old/newfangled VMing is a botch that stems mainly from the fact that one cannot manage the system configuration in a reasonable way at the present time.

        So we Russian-dollize the system...

      2. Michael Wojcik Silver badge

        Re: "Operating systems these days..."

        So we're just returning to the two state model, while redefining what counts as user and kernel.

        Yes - a unikernel is basically just hoisting a subset of kernel facilities into userspace, and throwing the rest out (in a given container). The hypervisor provides isolation.

        Personally, though, I'm not impressed with the idea. I don't see any big advantage - some performance, but then the constrained resource is probably either one you can cheaply throw more resources at, or something that won't be helped by a smaller image without user/kernel context switches. And in exchange you're giving up features and flexibility.

        The research work being done on "library operating systems", which spin up customized OS-and-support-library collections based on the dependencies of the application that's going to run, strikes me as more interesting and more broadly useful.

    4. Mage Silver badge

      Re: Windows/NT must be 20 years old at least.

      1993 for NT 3.1, so 23 years old this summer.

      Why is the first NT 3.1? Dunno really, but after MS & IBM fell out over OS/2, MS did have rights to MS OS/2 (with built-in LAN Manager). I saw this once with a classroom of Win 3.11 workstations on cheapernet (thin BNC ethernet) in the early 1990s.

      1. Anonymous Coward
        Anonymous Coward

        Re: Windows/NT must be 20 years old at least.

        "Why is first NT, 3.1? "

        16 bit Windows 3.1 arrived in 1992 and was followed in due course by 3.11 in late 1993.

        NT arrived (after beta testing) in 1993.

        I think someone may have thought it would be a good idea to have similar version numbers so as to not confuse the market. Who knows...

        At that time (1993), a known-NT-compatible setup (e.g. Gateway 2000 486DX, 33MHz, 12MB of RAM, 400MB WD hard drive, proprietary CD-ROM and 1280x1024 monitor) was around £1400.

        1. Richard Plinston

          Re: Windows/NT must be 20 years old at least.

          > I think someone may have thought it would be a good idea to have similar version numbers so as to not confuse the market.

          Or to confuse the market. Nobody was going to buy a 1.0 version. Even then the market was conditioned to 'wait for version 3'.

      2. Richard Plinston

        Re: Windows/NT must be 20 years old at least.

        > Why is the first NT 3.1?

        Because it used the GUI from Windows 3.1 and they tried to make it appear to be a 'family' of operating systems. Anyway, with MS's history, no one was going to buy version 1.0.

      3. Lennart Sorensen

        Re: Windows/NT must be 20 years old at least.

        Well I think they called it NT 3.1 since it shared the look and feel of Windows 3.1 and could run Windows 3.1 applications.

  12. Nigel 11

    What's the difference?

    "The XYZ app's gone down" versus "The XYZ VM's gone down"

    "Relaunch it!" versus "Reboot it!"

    "A remote hacker could have executed his own code in the context of the XYZ account on the server because of a bug in the XYZ app" versus "... in the XYZ VM ..."

    "Someone's rooted the server via XYZ" versus "Someone's broken in to the hypervisor via XYZ".

    I think the main advantage of the VM is that you can migrate it from one CPU to another, even halfway around the world given enough bandwidth, without having to take it down at all. And unlike the coding that would let you accomplish the same with an App running on a multi-site VMScluster, moving a VM is a purely operational matter that you don't have to code for.

    (OTOH, when TS hits TF big-time I'd expect the VMScluster to come out on top).

    1. Anonymous Coward
      Anonymous Coward

      I assume...

      It's the low-hanging fruit.

      The first place of attack is usually the weakest. In a split OS, you have App -> Kernel. An attack hits the app, and then you hopefully still have the kernel secure, or at least an "arms race" ahead of the attack.

      With no split, any and all attacks can hit the kernel as well as the app. So now there is more chance of a larger "hack" of your system.

      Though it may just be a moving of the walls, instead of an obliterating of them. :P

      1. Bronek Kozicki

        Re: I assume...

        Well, here is the crux of the matter: you are making the assumption that a unikernel actually means removing the ring isolation of the kernel where the hypervisor is running. To me that seems like an unimaginably silly thing to do, hence I assume that it is something else, i.e. removing the ring isolation inside the VM only. That is, not much different from running ancient DOS or CP/M inside a VM (except with more modern syscalls).

        We both lack education here, and unfortunately Bryan Cantrill (and the article) do not seem to add much to our knowledge at all. PS. When I read Docker bags unikernel gurus – now you can be just like Linus Torvalds I heard a faint "whoosh" sound; I wonder what that could be?

      2. Charles 9

        Re: I assume...

        I think that's what some of the commenters are saying: that in a containerized/virtualized environment there's still a degree of separation in play, in this case between the guest and the host. And this degree of separation is more significant than the one between kernel-land and userland; there's more abstraction between host and guest, which is why VMs can be transported.

    2. Anonymous Coward
      Anonymous Coward

      Re: when TS hits TF big-time

      "when TS hits TF big-time I'd expect the VMScluster to come out on top"

      Either that or ex-Tandem NonStop, depending on application. Both still around, as seen (along with others) in the following (slightly dated) 6 minute disaster tolerance video:

      https://www.youtube.com/watch?v=qMCHpUtJnEI

      VMS is even being developed outside HP now - by many of the original VMS New England team. Talk about wheel of re-invention...

      Cutler afaik is still at MS and no longer connected with VMS; last seen working on Xbox.

      "unlike the coding that would let you accomplish the same with an App running on a multi-site VMScluster, moving a VM is a purely operational matter that you don't have to code for."

      Yes and no. There is a risk in blindly assuming that "moving a VM is a purely operational matter that you don't have to code for". In many cases it may work just fine, but if the VM has dependencies, uses shared resources, connects to other stuff - all the usual hallmarks of a non-toy application - then someone at least ought to think about (and, ideally, test) the implications of VM migration. If you've built your setup using an active/active VMScluster, you already know what's going to happen.

  13. Mark Honman

    Cores are cheap, it's how to use them...

    Having taken the plunge into embedded software development, I find the unikernel concept doesn't seem very different from the standard embedded way of doing things.

    Perhaps with a difference, in that in embedded-land specs are much "harder" and more clearly thought through, and testing is much more thorough than in web-service land.

    However there is a better way, that (of course) was first developed in concept more than 25 years ago.

    That is, a bunch of unikernels that are functionally separate and only interact by exchanging messages. This re-introduces isolation, both conceptual and physical, without the corresponding overheads. The model is called Communicating Sequential Processes.
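
    Go's channels are a mainstream descendant of exactly this model, so a minimal sketch (illustrative Go, not XC) of two isolated processes that share no memory and interact only by messages:

    ```go
    package main

    import "fmt"

    // Two functionally separate workers: no shared memory, interaction
    // only by exchanging messages over a channel.
    func squares(out chan<- int) {
        for i := 1; i <= 3; i++ {
            out <- i * i // unbuffered send: a rendezvous with the receiver
        }
        close(out)
    }

    func main() {
        ch := make(chan int) // unbuffered, i.e. synchronous CSP-style channel
        go squares(ch)
        for v := range ch {
            fmt.Println("received", v)
        }
    }
    ```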

    CSP also includes a concept of low-overhead fork/join and yes, works best with languages that are amenable to high-quality static analysis (so that resource requirements can be computed in advance). XMOS microcontrollers and their C-derivative language XC are the modern-day examples; how I wish they'd produce some proper _fast_ processors!

    The key idea with the XMOS chips is that there is direct hardware support for all the CSP constructs, so message-passing overheads and task switching are very efficient.

    Back in the land of normal processors these kinds of facilities are not available; but cores are now so cheap that it is possible in concept to dedicate a core to managing each peripheral. I'm not thinking of the x86 architecture here, but of devices like the lowRISC chip, the Zynq MPSoC, and especially TI's DaVinci media processors.

    What is still lacking is decent message-passing hardware in mainstream processors, so that hand-off of requests to lightweight coprocessors does not need to go through a memory-mapped interface (where the resource management handshaking always seems to get ugly).

    1. Anonymous Coward
      Anonymous Coward

      Re: Cores are cheap, it's how to use them...

      Wise words, sir, wise words. (Transputer and Occam not worth mentioning in the context of efficient message-passing?)

      Hoare's CSP book was legitimately freely downloadable for a while. It's still there but the site seems to have suffered a little:

      http://www.usingcsp.com/

      1. Mark Honman

        Re: Cores are cheap, it's how to use them...

        > Transputer and Occam not worth mentioning in the context of efficient message-passing?

        I wanted to emphasise that CSP is even more applicable now than 30 years ago. While Occam and the Transputer were so closely coupled that the CSP primitives had equivalents in the instruction set, the XMOS take on CSP is IMO very clever because it addresses the problems that limited adoption of the Transputer and Occam: specifically, the computational performance advantage of a RISC architecture, and the reluctance to write substantial software in new languages.

        And £10 will get one an XMOS devkit to play with...

        As for me, I'm very happy to be working for a "Transputer company" that _still_ has a modular multiprocessing vision; even more so given that when we had a clear-out last year there were some TRAMs needing a new home...

        1. Destroy All Monsters Silver badge
          Paris Hilton

          Re: Cores are cheap, it's how to use them...

          Aren't we into actor-based languages today (i.e. Erlang and Elixir etc.) because "sequential" processes are nice but the demand for synchronous exchange makes them a bit less than real-world capable? See also What's the difference between the Actor Model of Concurrency and Communicating Sequential Processes

          1. Mark Honman

            Re: Cores are cheap, it's how to use them...

            Actor model - now that's interesting... from that brief description it looks rather like the traditional RTOS message-passing model (now I will _really_ show my age - HP1000 RTE class I/O?).

            Synchronous vs. asynchronous message passing is one of those wonderful design trade-offs. On the one hand the predictability of synchronous interactions makes it easier to reason about what is going on in parallel software, and the very simple rules make direct implementation in hardware feasible.

            On the other hand in the real world one always seems to need a certain degree of decoupling between processes (easy enough to provide in a CSP environment via lightweight buffer processes, as long as the buffering depth is bounded).

            The most useful form of buffering seems to be a depth of 1, on the data-producer side of the data path: enough to release the producer to get on and produce more data, yet it avoids dynamic resource management overheads and still allows one to reason about where producer and consumer are in their respective control flows.
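
            In Go channel terms, a sketch of that trade-off (illustrative only):

            ```go
            package main

            import "fmt"

            func main() {
                // Buffer depth of 1 on the producer side: the producer may run
                // one item ahead of the consumer, but blocking stays bounded,
                // so it is still easy to reason about where each side is in
                // its control flow.
                ch := make(chan int, 1)
                go func() {
                    for i := 0; i < 3; i++ {
                        ch <- i // blocks only when the single slot is already occupied
                    }
                    close(ch)
                }()
                for v := range ch {
                    fmt.Println("consumed", v)
                }
            }
            ```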

  14. ForthIsNotDead

    Forth

    Forth has been doing it the exact way he says is terrible for 40 years. When writing an application on an embedded Forth system, the application and the kernel are at the same level. There's no protection whatsoever. You're free to f**k up with your poorly written software in any way you want.

    Forth has been used in countless space experiments on the shuttle and other space systems for decades. IIRC 10 of the 12 CPUs on the Philae lander and its orbiter were Forth CPUs. It's also been used in most of the world's observatories (controlling radio telescopes) for years.

    Forth is an amplifier. Badly written code shows up real fast as badly written code. However, you *can* write code right on the hardware and it can work just fine. It just takes discipline and good procedures and management. It can be done. It has been done.

    All that said, he has a point. These walls between OS and application software are necessary, because software *is* buggy, and software does crash. OSes are buggy too. Part of the problem is simply down to the complexity of modern OS and application software. When a Swing library in a Java program is rendering its window on the screen and painting its buttons, putting text in a text box etc, how many levels of abstraction are there between it and the graphics hardware? A thousand? Two thousand?

    If we want more reliable software, we have to write simpler software.

    Forth, which is still around and still used, takes all that away. It is simple enough that the entire workings of the Forth kernel can be understood and held in the head of one person (I should know, I wrote my own Forth system) and, by extension, the applications written in it, too.

    To be fair, the applications written in Forth are vastly simpler than those written on contemporary PCs. We tend to write on the metal, in deeply embedded or industrial control environments, where software can be much simpler and the only code in memory is the code that *you* put there, because it is specifically needed for something that you understand. PCs have the entire kitchen sink in memory, and any one of them could go wrong.

    1. Charles 9

      Re: Forth

      So what happens when you get caught between a reliability requirement and necessary complexity? Does it become a case of A Bridge Too Far?

    2. Destroy All Monsters Silver badge

      Re: Forth

      Forth has been doing it the exact way he says is terrible for 40 years.

      That's bullshit. You are mixing concepts. Forth is a language. The language runs on something. You're free to run it on bare metal, but if it runs on Unix, your argument is invalid.

      You can run Java on bare metal if you write a language VM for that.

      1. Michael Wojcik Silver badge

        Re: Forth

        Forth is a language. The language runs on something. You're free to run it on bare metal, but if it runs on Unix, your argument is invalid.

        Indeed. One might as well claim that because there are embedded systems running software written in various assembly languages, the existence of assembly languages proves that the unikernel is a good idea.

        More generally, the existence of embedded systems says nothing - nothing whatsoever - about the suitability of removing the user/kernel separation in general. Something may be appropriate in one domain and inappropriate in others.

        But based on the OP's handle, I'm guessing that he's not here to listen to reasoned arguments on the topic anyway. He has an axe, and this looked like a grindstone.

  15. Anonymous Coward
    Facepalm

    Unikernel, No-kernel, whatever

    This is OK in limited circumstances: stateless embedded devices, game consoles, audio/video workstations (maybe), and under virtualization -- ONLY IF the application has no access to sensitive data AND any data you wouldn't want to lose (savegames, recordings) is securely+reliably stored elsewhere, in a storage system that assumes the client is pwned. Essentially that means NEVER.

    Containers are a kludge for people trying to run a "system stack" (i.e. Linux, Nginx, MySQL, PHP 5.3, Nodejs 0.10.x - note that those are insecure, outdated versions) as an "application" in a single process under another Linux system where they can't easily install those old versions. Generally they're just trying to make it work; they don't know or care what's inside the container, and they don't know the risks of data leaks or privilege escalation. Dangerous.

    1. Charles 9

      Re: Unikernel, No-kernel, whatever

      So what are you saying? That all this will do is encourage hypervisor/Red Pill attacks?

      1. Anonymous Coward
        Anonymous Coward

        Re: Unikernel, No-kernel, whatever

        No, quite the opposite. All a hacker has to do is break into a container through an old unpatched hole and look for valuable data or stepping stones to other systems. A unikernel is even better for hackers, if every exploit yields full access to the container.

        Also, containers are a poor abstraction for systems-of-programs. The Unix architecture doesn't have a good solution, so we're stuck with bolt-on kludges like this until we get a new architecture.

  16. allthecoolshortnamesweretaken

    Shouldn't have read all this, I'm having flashbacks of my DOS days now... well, one of them is at least funny, an instructor telling me that, if you apply the literal meanings of the words, every DOS system is also a RISC system...

  17. Anonymous Coward
    Anonymous Coward

    Those who don't understand Unix

    are condemned to reinvent it, poorly. -- Henry Spencer, programmer

  18. Steve Channell
    Happy

    Nothing new

    The idea of running a special minimal OS within a production VM is 40 years old with VM/CMS, and it works well, but you'd never run it on the bare metal (no punch reader to start with!). On a VM/CMS mainframe you'd have numerous special "unikernels" to run VTAM (network IO) or SQL/DS (database), all operating without a conventional OS.

    For decades it has been common for Oracle to bypass the OS for disk IO and write directly to raw volumes, without the benefit of kernel IO debugging.
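
    For illustration, here is a minimal sketch of the same kernel-bypass idea on Linux, using O_DIRECT to skip the page cache. This is not Oracle's actual code path, and the filename is made up:

        #define _GNU_SOURCE   /* for O_DIRECT */
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        int main(void)
        {
            /* O_DIRECT requires buffer, offset and length to be aligned,
               typically to the device's logical block size. */
            const size_t align = 4096, len = 4096;
            void *buf;

            if (posix_memalign(&buf, align, len) != 0)
                return 1;
            memset(buf, 'x', len);

            /* Data goes straight to the device, skipping the page cache --
               which also skips the kernel's buffering, readahead and IO
               debugging help, exactly the trade-off described above. */
            int fd = open("./rawtest.dat", O_WRONLY | O_CREAT | O_DIRECT, 0644);
            if (fd < 0) { perror("open"); return 1; }

            if (pwrite(fd, buf, len, 0) != (ssize_t)len)
                perror("pwrite");

            close(fd);
            free(buf);
            return 0;
        }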

    For at least a decade Solace and 29West have taken dedicated IO away from the kernel for higher performance, with RDMA providing a standard way for an OS to delegate remote DMA IO to the hardware.

    For a long time Linux has provided basic HTTP within the kernel to avoid the context switch from TCP sockets, and recent initiatives to use NetBSD's user-mode TCP stack allow Nginx (and others) to bypass the kernel with an app-specific NIC driver for higher throughput.

    Even memory heap management is moving to user mode to avoid thread contention on heap allocation.
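
    As a sketch of that idea (the per-thread arenas used by allocators in the jemalloc/tcmalloc mould), the fast path below bumps a thread-local pointer and never takes a lock. Illustrative only: there is no free() and no fallback when the arena fills:

        #include <stddef.h>

        #define ARENA_SIZE (1 << 20)        /* 1 MiB arena per thread */

        static _Thread_local unsigned char arena[ARENA_SIZE];
        static _Thread_local size_t arena_used;

        /* Lock-free because each thread only ever touches its own arena. */
        void *arena_alloc(size_t n)
        {
            n = (n + 15) & ~(size_t)15;     /* keep 16-byte alignment */
            if (arena_used + n > ARENA_SIZE)
                return NULL;                /* a real allocator would fall
                                               back to a shared heap here */
            void *p = arena + arena_used;
            arena_used += n;
            return p;
        }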

    With so much traditional kernel work moving to user mode, whether the scheduler should stay in a protected kernel is worth questioning - if you don't need debuggers and DTrace.

    Where Bryan does have a point is when people run a unikernel on the metal, where a hardware fault could corrupt the database or cause an algo to flood a market with orders. Rather than "unfit for production", a unikernel is really "unfit for development", where a protected kernel is essential for debugging.

    Where it really gets interesting is not rumprun but MirageOS-type functional OSes, which don't need the kernel protection that C code does and can run much faster.

  19. This post has been deleted by its author

  20. TeeCee Gold badge

    He has a point.

    The resulting condensed unikernel package cooperates with other unikernels running on the host for resources and processor core time.

    That's cooperative multitasking, that is. Not DOS, but early Windows. The problem is when one of the tasks doesn't play nice: in the absence of a preemptive supervisor program to take its toys away, we get the return of the "General Protection Fault".

    The GPF was the poor old system task, given the job of monitoring the shitfight, saying: "Oh fuck, something went wrong."
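
    A minimal sketch of why that hurts: with cooperative scheduling the "scheduler" only runs when a task hands control back, so one misbehaving task (hypothetical names below) hangs everything:

        #include <stdio.h>

        static void well_behaved(void) { puts("tick"); }  /* "yields" by returning */
        static void never_yields(void) { for (;;) ; }     /* spins forever */

        int main(void)
        {
            void (*tasks[])(void) = { well_behaved, never_yields };

            for (;;)                        /* the entire "scheduler" */
                for (size_t i = 0; i < sizeof tasks / sizeof tasks[0]; i++)
                    tasks[i]();             /* no preemption: once never_yields()
                                               runs, nothing else ever does */
        }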

    1. Anonymous Coward
      Anonymous Coward

      Re: cooperative multitasking

      "That's cooperative multitasking that is. Not DOS, but early Windows"

      And also Mac OS before OS X? And doubtless others too.

      The 1990s called. They want their concepts back.

  21. 1Rafayal

    Stop me if I am wrong here, but isn't the point of a unikernel to load in an OS and not care if it falls over?

    Also, the OSes in question are not desktop operating systems; they are designed to do one specific thing (more or less), so it really shouldn't matter if they topple over on the hypervisor.

    From an administration point of view, unikernels require less configuration management because the OS is built and configured in situ prior to being deployed - which stops devs from trying to fix an app on the box.

    From a realistic point of view, dev managers simply won't want to develop with them. From a PM point of view, they probably wouldn't even register what this actually is.

    However, I would imagine that it will be quite hard to convince a CTO/CIO that this is a worthwhile investment so soon after containerisation. That is, until the marketing bods figure out a slogan or something...

    1. Anonymous Coward
      Anonymous Coward

      So they are processes?

    2. Anonymous Coward
      Anonymous Coward

      You're not wrong here - that point of a unikernel is wrong. It shouldn't matter, but it does, because the instances must have been doing something important when they fell over; that was one of Cantrill's points.

      I can add nothing besides that this reminds me of an absurdly scaled-up variety of Worse is Better, and this time Worse is obviously Worse.

      1. 1Rafayal

        I am not sure if I like unikernels or not yet. And I agree with Cantrill and yourself - if the instance is doing something, then whatever was being done is lost.

        However, in a production environment you don't have one instance of anything; there are always at least two behind a load balancer. For me, I would be more interested in running a service bus over multiple unikernels, or indeed Redis - if one falls over, it shouldn't matter. It would be hard to convince me that running a regular DB or mission-critical application on such an OS would be a good thing.

        So for me, I would be interested in understanding whether the administration of the instances would be manageable, plus how performant (or not) such a solution would be.

    3. Vic

      From an administration point of view, unikernels require less configuration management because the OS is built and configured in situ prior to being deployed - which stops devs from trying to fix an app on the box.

      Not so - unikernels require *more* configuration management, for exactly the reason you give.

      Although your base OS might require fewer lashups, each and every unikernel build has a significant proportion of what would previously have been OS code - so each and every one needs patching, requiring a rebuild of the unit. Given that many of these changes will be the same patch applied across many instances, that means many rebuilds for a single change. This is an increase in configuration management...

      Vic.

  22. Long John Brass
    Flame

    total lack of operational empathy

    Welcome to DevOps

    Put the little fuckers on call.

    <rant>

    These retards are saying: get these heavy, bloated OS syscalls out of our way! They slow us down!

    Really? You create 100k object templates, each with its own constructor/destructor baked in that does sweet fuck all except chew processor time, and then you have the gall to moan about ring transition/context switch time. The solution? Moar garbage collection, and get rid of the OS? No... just no.

    My code is slow? Blame the database, blame the OS, blame the network, blame the ops guys, blame the sysadmins. Anyone's fault but my own...

    Fuckemall; put them on call and let them deal with the angry screaming hordes.

    </rant>

  23. Anonymous Coward
    Anonymous Coward

    Moore's Law spoiled developers

    They got used to performance doubling every couple of years, and when that ran out for single-threaded code they got extra cores.

    Moore's Law is still (mostly) working, but those extra transistors help less and less each generation as even the gains from extra cores have their limits.

    But the developers who grew up with that over the last couple of decades don't know anything about optimizing code, so something has to give if they can't get faster hardware. They'll get the smart guys in the room (the ones designing operating systems and hypervisors) to figure out a shortcut to get them more performance. Too bad it is only a one-time boost, and the developers who need it most are by definition the worst ones, who will write code that crashes the 'unikernel' in no time flat.

    I anticipate that aside from reading The Register I will never encounter the word "unikernel"; certainly not in my world doing enterprise architecture.

  24. unicomp21

    This is misdirection on the part of Cantrill. Anyone who has written servers that multiplex tens of thousands of concurrent streaming users knows the overhead of the user/kernel context switch. This single factor, eliminating user/kernel overhead, is why products like mcorelab can multiplex/stream 100k users while pushing millions of messages per second. The unikernel guys are pushing in this direction. The problem comes when something faults: the debugging can be a mess without an OS present. And the idea of running multiple unikernel "processes" on a single box is also probably a bad one. A unikernel is a concept for running a single, very high-throughput process directly on the metal of a single server box.
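
    For a feel for the overhead in question, here is a rough sketch that times a plain function call against a cheap syscall. It uses syscall(SYS_getpid) so libc caching can't hide the kernel crossing; absolute numbers vary hugely by CPU and mitigations, so treat it as illustrative only:

        #define _GNU_SOURCE           /* for syscall() */
        #include <stdio.h>
        #include <sys/syscall.h>
        #include <time.h>
        #include <unistd.h>

        static long noop(void) { return 0; }

        static double elapsed(struct timespec a, struct timespec b)
        {
            return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
        }

        int main(void)
        {
            enum { N = 10 * 1000 * 1000 };
            struct timespec t0, t1, t2;
            volatile long sink = 0;   /* keeps the loops from being optimised away */

            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (int i = 0; i < N; i++)
                sink += noop();                  /* stays in user space */
            clock_gettime(CLOCK_MONOTONIC, &t1);
            for (int i = 0; i < N; i++)
                sink += syscall(SYS_getpid);     /* crosses into the kernel */
            clock_gettime(CLOCK_MONOTONIC, &t2);

            printf("function call: %.1f ns/op\n", elapsed(t0, t1) / N * 1e9);
            printf("syscall:       %.1f ns/op\n", elapsed(t1, t2) / N * 1e9);
            return 0;
        }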

    1. 1Rafayal

      Agreed, however I don't think unikernels are here to stay.

      For example, if you have an existing Redis implementation running on a regular server (I don't care if that is on Windows or Linux), then it is extremely easy to debug. Whack the same implementation across 100 instances of a unikernel split into two pools and you face a boring problem. Or an expensive one: you could probably use something like New Relic or AppDynamics to get a good bit of introspection on what's going on, but of course that would take a while to implement all the way up to production.

      Plus, many development shops are pushing things like application introspection onto their devops teams; again, I will blame this on the generic dev managers and PMs.
