For Windows guest - KVM or XEN and which distro for host?

This topic was created by Bronek Kozicki.

  1. Bronek Kozicki

    For Windows guest - KVM or XEN and which distro for host?

    The time is slowly approaching for me to rebuild my home PC. Since I've been using Linux at work for the past few years (ssh only, and happy with it), I want my new machine to run Linux as the host system, with Windows on top of it. Not a full migration, because I've invested good money in software which is Windows-only, and I also want to play games (which aren't ported to Linux yet). Also, I find the Windows GUI more ... attractive than Linux window managers.

    Pretty standard so far. What's different:

    * I want to use PCI passthrough for an AMD GPU; possibly also for the LSI MegaRAID controller (unsure about this)

    * I want to be able to give exclusive access to selected USB ports to Windows (to use devices I do not need Linux to know about)

    * I want Windows to start automatically, "as if" it were a regular Windows-only machine, with the host OS not much different (for a layman - i.e. family) from the BIOS. In particular: no manual interaction needed to start Windows full screen with access to all the hardware I've given it.

    * I will be using Linux from the Windows guest over a virtual network almost exclusively, i.e. ssh and possibly X sessions (the X server being Cygwin's)

    * I'm fine to put good money into hardware; 32GB RAM and 2x Xeon are not out of the question. Also happy to split resources in half - Windows does not need to have all the goodies.

    * I will be using Linux for some performance-sensitive stuff, although nothing critical. Almost always related to programming in C++ (I wrote a patch for gcc in the past, want to do more of this, and a gcc build takes lots of resources).

    * I'd like to be able to buy support for my Linux installation, so distros such as RHEL or OpenSUSE are in

    * I'd like to be able to use btrfs with confidence (a strong point for OpenSUSE, but I can wait a few months for official support in an official release of RHEL7). Specifically, I want self-healing on RAID with checksums (see the sketch after this list).

    * I'd like to share filesystems between Linux host and Windows guest - if there are options better than SAMBA4 I will be happy to learn.

    * I do not want to run a full-screen Linux GUI. I dislike window managers in general, and prefer tools like ssh/tmux/vim for work; on the other hand, running Linux full screen in text mode is just a waste of space and I do not need it. PuTTY ssh is the best tool for me; for an actual physical screen I can put Linux in picture-in-picture, as a second input on the main monitor (as long as it's text only).

    * I do not want a second keyboard - no space on the desk for it - but how will switching of focus work? My keyboard has no special functions whatsoever (but I love it because it's mechanical)

    * This is not for playing with or learning VMs (which is why I didn't add this to the other topic); this is for serious use where the VM is supposed to just stay in the background and let me do my work. I have some experience with VMware (and an old license I could upgrade - but they did not support PCI passthrough last time I checked).
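
    For reference, here is roughly what I mean by self-healing, as a minimal untested sketch (device names are placeholders):

      mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc   # mirror data and metadata, both checksummed
      mount /dev/sdb /data
      btrfs scrub start /data   # verify checksums, repair bad copies from the good mirror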

    Question: which combination of distro + virtual machine would work best for me? Thanks!

    1. IdeaForecasting

      Re: For Windows guest - KVM or XEN and which distro for host?

      At no time have I seen WHAT you are trying to accomplish. As a systems guy, first you spec your mission, then detail the pieces/parts needed to reach the mission objectives.

      What are you trying to do, and where the hell did you come up with the 'requirements' wish list?

    2. adobob

      Re: For Windows guest - KVM or XEN and which distro for host?

      Your spec is ridiculous, and your requirements are incredibly obvious and simple (your proposals though are mental).

      a) install Windows natively. As you will play games, you want the best performance there, so install that first.

      b) use a combination of Cygwin and/or Debian under VMware (free version). Everything works fine and you'll be able to ssh to your system etc.

      c) for a more advanced Linux install that will give you everything that you want, redirect the home folder of Debian to a location in the Windows filesystem, thus giving you completely integrated Windows and Linux. You can ssh to the Debian environment, and downloads from Debian will land on NTFS, so Windows and Linux will work very cleanly together.
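
      For example, something along these lines (a hypothetical sketch, assuming VMware shared folders are enabled in the guest, with a share named "home" and a user named "bob" - adjust to taste):

        sudo mount -t vmhgfs .host:/home /mnt/hgfs/home   # mount the Windows-side share in the guest
        sudo usermod -d /mnt/hgfs/home/bob bob            # point the Debian home directory at it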

      Sorry, but your own ideas about system building are bizarre and illogical.

      1. Anonymous Coward

        Re: For Windows guest - KVM or XEN and which distro for host?

        Just do what I do - I run Win8.1 Pro and enable Hyper-V support. Then I install Linux (Ubuntu of various flavours) in a VM. Performance is as good as the resources I commit to it, it's invisible to family users, and I can control the devices that Linux sees... Works fine for me. Not sure it meets all of your desires, but it gets most of them.

      2. JEDIDIAH

        Re: For Windows guest - KVM or XEN and which distro for host?

        My thought was a small headless box for the Linux work. Its only purpose would be computation, and it could be a smallish box with a decent (but not terribly expensive) CPU. It would be a compute node that sits in a corner or closet, somewhere out of the way. Since you're only interested in connecting to it with terminal sessions, it would just be a headless "server".

        Linux will play nice with Windows network protocols going either direction.

    3. Anonymous Coward

      Re: For Windows guest - KVM or XEN and which distro for host?

      OpenSUSE and Xen.

      Btrfs - hmm, I still consider that one to be in the RC stage, and the odd bad bug has yet to drop out of the woodwork, but it's a highly promising FS. Give it another 12 months before relying on it.

      I'm not a games player any longer, as it nearly cost me a divorce, so I had to kill that addiction. I have used OpenSUSE full time for about the last 10 years. I think KDE4 is light years ahead of the Windows GUI and just super customizable.

      If you want a games machine then run Windows with Linux as a guest; if you want to build a work machine then run Linux with Windows as a guest. Better still, buy a console and just create a work machine.

      However, I don't fully understand what your objectives actually are and what you are really trying to build?

      Final thought - ffs don't go AMD GPU if you are using Linux as the host; the Nvidia binary driver is far superior to AMD's, and they have very long support cycles for their products.

      Whatever you do, good luck, have fun, and keep several large pots of coffee on the go.

      1. Bronek Kozicki

        Re: For Windows guest - KVM or XEN and which distro for host?

        Thanks for all the advice so far, even the "run Windows natively" kind. Let me explain why I do not want to do that: I've been running Windows natively "always", but since I started using Linux I've grown to respect its kernel. I personally believe the Linux kernel is more robust and more economical than the Windows one, esp. the scheduler and memory management. I've been running Linux as a guest on Windows, both in VirtualBox and in (paid) VMware, and while the experience is pretty good for less demanding tasks, it is awful for a full gcc build. Especially if one is actually hacking gcc, and thus has to run the build again and again - it takes an awful lot of disk IO and CPU power.

        What I want to do is let the thrifty kernel own all the resources, since it's less likely to waste them and more likely to let me utilize all the cores, memory and IO I need to run a full gcc build, even from within a guest Linux. It does not bother me if it's headless; in fact that's how I like Linux best. The less economical Windows is to become a kind of front end, started automatically, with full access to keyboard, GPU, USB devices etc., but with its VM configuration deciding how much memory and how many cores it is allowed to use. The plan is to set up a Xeon machine with some 10 cores, of which 4 (a number I will tweak as needed) are given to Windows, 4 or more to a sibling Linux VM, and the rest to the host for tasks such as SAMBA.
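
        Roughly what I have in mind, expressed as a Xen domU config fragment (untested, and the numbers are illustrative - I will tweak them):

          # Windows HVM guest gets a fixed slice of the machine
          builder = 'hvm'
          memory  = 8192      # MB
          vcpus   = 4         # 4 of the ~10 cores
          cpus    = "6-9"     # pin them, leaving the rest for dom0 and the Linux VM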

        The primary use of this is to let the family use Windows as usual (games, internet etc.), while I build gcc in the background, having started the task over ssh from my laptop. Or while I do anything else, either via an ssh console or an X session, from my underpowered laptop. Since games can potentially eat up all the CPU allocated to Windows, I really, really do not want Windows to be the host for this. It makes much more sense the other way around. In a word, Windows is the primary use only if all you can see is the front end. There will be much, much more going on behind the scenes, managed through ssh.

        1. Anonymous Coward

          Re: For Windows guest - KVM or XEN and which distro for host?

          Tip - don't let the family anywhere near your work machine; kids will always find a way to screw up or break the most securely locked-down system you can imagine. Children's ability to test things to destruction never ceases to amaze me. The family use a Windows box hooked up to the TV - my kids are not allowed in my work den on pain of withdrawal of all electronics privileges and saved-game deletions.

        2. Anonymous Coward

          Re: For Windows guest - KVM or XEN and which distro for host?

          Wouldn't you be better off just building two computers?

          One can run Linux and be headless with a high clock speed CPU*, with an SSD and/or a decent amount of extra RAM for storage so you can compile from/to a RAMDisk.

          The other would be a decent gaming rig. More cores, more RAM, a decent GPU or two, SSD for storage and some spinning rust for bulk storage, that sort of thing. And no risk of the kids turning off the VM to get an extra couple of FPS when they're playing games.

          *AFAIK, multiple cores don't make that much difference to a compiler compared to increasing the clock speed; it's a difficult workload to split up into different bits... This probably varies greatly depending on your compiler, though.

          1. Bronek Kozicki

            Re: For Windows guest - KVM or XEN and which distro for host?

            @AC

            no space for two computers here, really. I could put a second microATX-sized machine somewhere in the corner, but it would be constantly at risk of being tripped over, disconnected from network or power, or damaged in other ways. Even if a place were found away from people (I do not see how - this is a London flat, aka "rabbit hutch"), it would simply not have enough CPU power for the things I want to do under Linux. A headless host, with a guest Windows taking over a fair bit of the peripherals (but not much of the CPU/network/disk/memory) with the help of PCI passthrough, is what I'm aiming for.

            1. dogged

              Re: For Windows guest - KVM or XEN and which distro for host?

              > no space for two computers here, really.

              I don't believe you.

              http://globalscaletechnologies.com/p-57-smileplug.aspx

            2. Anonymous Coward

              Re: For Windows guest - KVM or XEN and which distro for host?

              Two separate computers would probably be easier. Build a mini-ITX Winbox and mount it to the back of the monitor to eliminate the microATX trip hazard in the corner. There are some AMD FM2+ and Intel LGA 1150 mini-ITX mobos, but you haven't specified which type of games you want to play in the Windows environment. Size would restrict the video card you could use.

        3. Nigel Campbell

          Re: For Windows guest - KVM or XEN and which distro for host?

          I agree with the posts suggesting it would be easier to use Windows as the host. What you might do is benchmark a GCC or kernel build on a native Linux install (i.e. Linux installed on the bare metal) and on Linux running in a VM. See what the overhead really is - if it's no more than (say) 20% then the gains from running Linux as the host might not be worth the trouble.
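
          For example (a rough sketch - same source tree, same flags, run once on bare metal and once inside the VM):

            make clean
            time make -j$(nproc)   # compare the wall-clock times of the two runs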

          1. Anonymous Coward

            Re: For Windows guest - KVM or XEN and which distro for host?

            "I agree with the posts suggesting it would be easier to use Windows as the host".

            I know just how you feel, Bronek, and I would want to have Linux as the foundation of my system too. But if you decide, on reflection, that for practical purposes it's necessary to run Windows native, just grit your teeth and keep reminding yourself "This is VMS with a silly GUI on top".

      2. That's it man, game over man, game over!

        Re: For Windows guest - KVM or XEN and which distro for host?

        I really want to like the Linux desktop, and your post made me go and check its status, as it has been a couple of years. I was really excited to see your thoughts on KDE4, but yep, it still looks like a Fisher-Price toy... such a shame.

  2. Manolo

    The other way around?

    Since you'll be using Windows mostly, wouldn't you be better off running Windows with Linux in a VM? The USB bit especially would get complicated to implement, but it's easy when running Linux in a VM.

    1. keithpeter

      Re: The other way around?

      Yes, I was thinking of a couple of boxes: a Windows games machine and session client upstairs, and a big Linux box in the basement, as the OP likes his SSH and remote access.

      1. punga

        Re: The other way around?

        That shouldn't be a problem. If he uses a bridged network interface on his VM, he can just leave the machine minimized and access it via PuTTY.

    2. Roo

      Re: The other way around?

      Agreed, Manolo - as much as it pains me to say it, I think it would be better to have Windows hosting Linux, given the requirements. I would like to add one little tip though: install Windows and Linux on *physically* separate hard drives; it makes upgrading/re-installing risk-free (if you unplug the hard drive you want untouched). Two drives also mean that if you suffer a drive failure, you can carry on using the machine with the remaining OS. ;)

    3. St3n

      Re: The other way around?

      I agree too.

      If you are using Windows as the desktop and Linux in the shell, then it doesn't make any sense to run Windows in a VM.

      I have never had any issues with Linux VM performance on a Windows host under VMware. You can share directories between host & VM from a menu setting too.

      1. h4rm0ny

        Re: The other way around?

        Gah! There are enough people here who've already said this and it's probably doing the OP's head in that everyone is telling him the question is wrong rather than actually giving an answer. But I do the same - Windows 8 as the host and GNU/Linux (Debian in this case) as the guest. For me that gives the best of both - the friendly and well-thought-out design of Windows with the raw capability of GNU/Linux.

        I use VirtualBox and I can throw as many cores and as much RAM at the guest as I like, and it works pretty bloody well. I would guess the OP (understandably enough) has the view that Linux is the solid foundation, and also that there may be a VM tax on the efficiency of the guest. And as they want Windows for goofing around in the GUI and Linux for furious compilation, they think that way round is best. I've found Windows 8 to be a very solid host; I don't think there's any concern there. I can't answer with certainty about the performance hit from the machine being virtual, but I'll say it performs very well for me, and modern chips have special functions to support virtualization which let the virtual machine run quite close to the metal. It's virtualization, not emulation, and the days of a clear distinction between bare-metal hypervisors and software hypervisors are gone.

        I wish I could give the OP more answers to their specific questions; I can for VirtualBox with Windows as host. For example, you can set up a shared disk space, which is (probably) a better way of achieving what the OP wants than Samba. You can set a USB device to be visible only to the guest (though you'll need to leave said device plugged in).

        Honestly, I was going to hold off on joining the "Other way round" bandwagon, but this particular part of the OP's question persuaded me to just add my own opinion that it might be a good idea:

        "I'm fine to put good money into hardware, 32GB RAM and 2x Xeon are not out of question. Also happy to split resources in half - Windows does not need to have all the goodies."

        The reason this persuaded me to join in is that it shows a misconception. If you're willing to buy that sort of hardware then there is no way you need to "split resources in half". Clearly, if the OP wants to do heavy compiling, then all you would need to do is install Windows on that setup, tell VirtualBox to give the guest (Linux) 14 cores and 28GB RAM, and you're going to have a mighty powerful compiling machine. It's not going to be held back by the Windows host doing the odd bit of USB port handling, et al.

        The Intel Xeon E3, E5 and E7 chips integrate the newer virtualization technology, which is worth having. It really lets the host just get out of the way of the guest and the metal. It's available in some Haswell chips as well.

        Anyway, those are my thoughts.

        1. Schultz

          Windows 8 a solid host OS?

          I thought Windows 8 was quite solid until it completely crapped out on me after some half a year of use while travelling. The second, separate installation on another HD did the same thing a day later, and I had to do a quick Linux installation to remain functional.

          I have never had any Linux installation do anything like that, and I would definitely prefer Linux as the host.

        2. Bronek Kozicki

          Re: The other way around?

          Thanks h4rm0ny; I've been running Linux under VMware and I'm unconvinced by this:

          "Clearly if the OP wants to do heavy compiling, then all you would need to do would be to install Windows on that set up, tell VirtualBox to give the guest (Linux) 14 cores and 28GB RAM and you're going to have a mighty powerful compiling machine."

          If the Windows kernel is given all the cores and memory, there is no telling how much it will actually consume, at the expense of the Linux guest. This is not Windows' fault; it is simply how some programs (games) behave. But if Windows is the host, it simply has to be given everything. I want Linux to have primary access to the resources, not the other way around.

          You are right that I will most likely give more resources to Linux than to Windows - I've given it more thought and this seems inevitable. But again, running Windows in a VM gives me the option to simply tweak its configuration and restart Windows with access to an extra 2 cores if something a bit more demanding needs to be done.

          1. h4rm0ny

            Re: The other way around?

            >>"Thanks h4rm0ny; I've been running Linux under VMware and I'm unconvinced by this:"

            I see your point. I hadn't really appreciated that you wanted protection against other people using up resources to be part of this. I actually have a set-up similar to the one I described, with similarly intense requirements (heavy database work in my case, however). But I'm the sole user of the machine, so that aspect was a bit foreign to me. Happily, I think Gordan has given you excellent answers.

    4. Gordan

      Apologies in advance since this is going to be long

      I have been running a setup like this for the past year or so. It is pretty easy if you get the right hardware. It is frustrating in the extreme if you have buggy hardware. When it works, it works fantastically well.

      The setup I use is triple-seat: an EVGA SR-2 with 96GB of RAM, dual 6-core Xeons, 3 GPUs, 3 monitors, 3 mice, 3 keyboards. The EVGA SR-2 was a terrible choice - it uses Nvidia NF200 PCIe bridges, which have broken IOMMU support. I had to write a patch for Xen to work around it, but now it works like a dream. If you are not averse to spending more on hardware, I would strongly advise you to buy one of the (bare-bones) HP or Dell machines certified by Citrix for VGA passthrough use and build it up from there. Having a reasonably bug-free motherboard is essential if you want it to "just work".

      I use EL6, with the Xen and kernel rpm packages from http://xen.crc.id.au/.

      If you get a machine with on-board graphics, use that for your host (dom0). Once you have configured it all, you can simply not have a console plugged into it. Alternatively, invest in whatever cheap GPU you can get your hands on for dom0 - it won't matter much what you use. My advice would be to get something like an Nvidia GeForce 8400GS, since it is passively cooled.

      For your domUs, don't even bother with ATI - they work right up to the point where you need to reboot the domU, and then the whole host needs rebooting.

      Go with Nvidia. The Quadro 2000, 5000, 6000, K2000, K5000 and K6000 work beautifully, but if you are aiming for something more than a Quadro 2000 they are ridiculously expensive. Instead you can modify a GeForce card. My advice would be to pick one of the following three, according to your performance requirements:

      GTX480, modified into a Quadro 6000. This requires a minor firmware patch; no hardware modification required. Details on how to modify it are here.

      GTX680, modified into a Tesla K10. This requires a simple hardware modification: removing one resistor off the back of the PCB. Make sure you get a non-Gainward model (they exhibit a weird limitation in which video modes they can display in a domU - my modified Gainward 680 and 690 only work in SL-DVI modes; my MSI 680 works fine in DL-DVI mode).

      GTX780Ti, modified into a Quadro K6000. This requires a hardware modification: adding one resistor across two specific pins on the EEPROM. This mod is easy to reverse, but requires taking off the heatsink, which on most models means voiding the warranty.

      For details on how to carry out the hardware modifications on the Kepler series cards (680, 780) see the thread on the forum here: http://www.eevblog.com/forum/chat/hacking-nvidia-cards-into-their-professional-counterparts/

      Whether an Nvidia card will work in a domU comes down purely to the whitelist hard-coded into the driver, which specifies the device IDs the driver will initialize if it detects it is running on a virtualized machine. The modifications described above simply change the device ID, which makes the driver agree to initialize the card in a domU.

      Other than that, my setup is exactly like what you describe - pass specific PCI devices (USB, GPU, audio) to the domU and it should all just work. With Nvidia GPUs you can reboot the domU as many times as you like and it will work fine. The only thing you will not get is the VM's BIOS POST and loading splash screen, but as soon as it goes into the GUI you will get output on the monitor, and it will work fine from there. As I said, I run a triple setup, with two modified 780Tis for two virtual gaming rigs, and they work beautifully even at 4K resolutions. The 3rd console is dom0 Linux.
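
      For illustration, the relevant part of a domU config is just a list of PCI devices by address (these BDF addresses are examples; yours will differ - check lspci):

        pci = [ '01:00.0', '01:00.1', '00:1d.0' ]   # GPU, its HDMI audio function, one USB controller

      with each device first made assignable on the host, e.g. xl pci-assignable-add 01:00.0.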

      Good luck.

      1. Gordan

        Re: Apologies in advance since this is going to be long

        Forgot to mention - RH only support KVM, so you are out of luck support-wise with Xen. KVM's support for PCI passthrough is nowhere near as mature as Xen's, so your chances of success with KVM may be diminished. If you really want support with Xen, you can probably get something from Citrix for XenServer (which recently went free/open source, and whose most recent version is based on CentOS 6, i.e. EL6).

        Forget BTRFS - it doesn't matter whose distro you use, it is not going to make a turd into a diamond. If you want a similar FS that works, use ZFS (look at the ZoL port). I wouldn't touch BTRFS with a barge pole. If you use ZFS, you can put your VM's disk block device on a ZVOL and get a lot of advantages, such as snapshots with no performance overhead. Again, this is the setup that I use on my similar system.
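
        The ZVOL setup is a couple of commands (pool name and size here are examples):

          zfs create -V 100G tank/win7            # exposes /dev/zvol/tank/win7 as a block device
          zfs snapshot tank/win7@clean-install    # near-free rollback point

        and the domU then gets disk = [ 'phy:/dev/zvol/tank/win7,hda,w' ].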

        Finally - you would probably be a lot better off asking a question like this on the Xen users mailing list rather than here.

    5. Gordan

      Re: The other way around?

      @Manolo

      There are not one but two options for passing through USB devices in Xen. You can use PCI passthrough to pass the whole USB controller through, or you can use USB passthrough to pass a specific device through. The former is usually a little more efficient, but the latter is more flexible (e.g. if multiple ports are on the same USB hub and you need to pass different USB devices on the same PCI USB controller to different VMs). For example, I have 2 VMs with a mouse/keyboard passed to each one via PCI passthrough, and it works extremely well.
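
      In config terms the two options look roughly like this (device addresses and IDs are examples, and the exact syntax varies a little between toolstack versions):

        # option 1: PCI passthrough of a whole USB controller
        pci = [ '00:1d.0' ]

        # option 2: qemu USB passthrough of a single device, by vendor:product ID
        usb = 1
        usbdevice = 'host:046d:c52b'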

      1. Bronek Kozicki

        Re: The other way around?

        @Gordan

        many, many thanks - your post was the most helpful in the whole thread so far. It is very likely I will go with Xen. However, since I'm not in a hurry, I will spend some more time investigating KVM as well. Apparently there are some success stories with Radeons on PCI passthrough, even though it is fiddly and not really supported. The PCI hot reset feature added in kernel 3.12 is promising, too.

        Also thank you for a link to xen.crc.id.au - this is exactly the kind of resources I was looking for!

        1. Gordan

          Re: The other way around?

          I'm not sure what features KVM has and supports, I use Xen because it is far more mature and performs considerably better.

          Recently the Xen guys have been working on adding an additional reset method - bus reset. This may or may not make it into the Xen 4.5 release, and I'm pretty sure it isn't going to be in the upcoming 4.4 release, so you are looking at at least 6-12 months before the feature is in the release branch and available pre-packaged for your distro. That is a long time to hang on for something that might, but is not proven to, solve the problem. The Nvidia solution works perfectly now.

          It is also not the only issue I have had with Radeons - there are many others. For example, the XP drivers are utterly broken and you cannot glue together multiple monitors into a spanning resolution above 3200x1600, which is completely useless when I need to stitch together two 1920x2400 stripes to get my IBM T221 to work properly. There are other issues as well that I am not going to go into now, since they are off topic, but suffice to say that Nvidia suffers from none of those problems.

          I would strongly advise you to stick with proven hardware and software. Anything that is bleeding edge and unproven is going to put you at a very high risk of running into bugs and regressions in hardware, firmware and software. This is another reason why I strongly recommend you get one of the Citrix-approved workstations for VGA passthrough. In terms of software, something like EL6 + the Xen RPMs from the source I mentioned is a good, stable, proven choice, and since you aren't going to get RH support for anything involving Xen, you might as well go with CentOS or Scientific Linux, or see if you can get a vaguely reasonable support package on XenServer (based on CentOS).

          1. Bronek Kozicki

            Re: The other way around?

            Gordan, many thanks! I checked the article on modding Nvidia cards, then checked the Quadro prices, and my eyes almost popped out. Yes, I would pay this sort of money for a set of CPUs, lots of memory and a server motherboard, but for a GPU no faster than a 780Ti ... greedy b*rds!

            I am not sure I want to buy their hardware, but if AMD is so much harder to get to restart properly without a physical reboot .... :-/

            1. Gordan

              Re: The other way around?

              That is pretty much the size of it. I wasted a number of days trying to get various ATI cards to work fully before I eventually caved in and bought a Quadro 2000 for testing. As if by magic, everything started to "just work". Modifying Nvidia cards isn't too difficult if you just want an Nvidia card that works in a VM.

  3. g e

    DOH

    Just spotted KVM/XEN in the title.

    Deleted the whole Virtualbox post, LOL.

  4. Novex

    Dual Boot

    Personally, I think that to give Windows the level of hardware control desired would require Windows to have direct access to that hardware. So my answer would be either a dual-boot arrangement or two separate machines. If finances allow for that much memory and Xeon processors, then money for more than one machine can't be out of the question.

    1. Anonymous Coward

      Re: Dual Boot

      [Edit] Ah, got the wrong end of the stick entirely, it's not for family guests, but a Windows Guest. Oops! :P [/Edit]

      Family guests can have an £80 Hudl or an ex-office PC, right? They only want email and internet. Why allow them access to the massive Linux server and your vital programming files?!

      PS: the level of the question in the article is way above my braincell count to answer, though.

    2. Anonymous Coward

      Re: Dual Boot

      I agree; though for me, having had exposure to the healthy goodness of VirtualBox (I'm so behind the times, I only recently started playing with VMs), I think exposure to a Type 1 Hypervisor (e.g. Xen) would be an end in itself.

  5. Anonymous Coward

    Sounds like they just need a KVM hypervisor with a Windows guest (and I'd recommend creating various Linux guests to keep the hypervisor "pure"). If KVM is desired then I'd suggest RHEL, if support is required and the money for licence fees is not a problem; however, VMware should also be considered.

    Don't skimp on the back end VM storage.

    1. flokie

      A quick search suggests that Xen would be the better choice. KVM's graphics capabilities are quite limited compared to other hypervisors. I have a RHEL desktop with a Windows KVM VM, connecting via Spice, and there isn't any 3D support to start with.

      Another thing to bear in mind is that RHEL is built for stability, not the latest and greatest.

      eg. libgcc in fully patched RHEL 6.4: 4.4.7 20120313.

      Fedora will have all the latest features.

      As far as I'm aware, the VDI solution that has the best graphics support will be XenDesktop.

      But I think I'm with most people here: it would make more sense to give Windows direct access to the HW and have a Linux VM, or to have two separate physical boxes.

      1. Anonymous Coward

        "Another thing to bear in mind is that RHEL is built for stability, not the latest and greatest."

        Which is precisely what you want in a hypervisor. By all means play with Fedora in a VM, but don't trust it with anything serious.

        OP also said they want a supported OS. Not going to happen with Fedora.

        1. Bronek Kozicki

          I may start with Fedora and then move to RHEL7.

  6. Steve Graham

    I can't work out why the obvious 2-box solution doesn't suit you. It would even allow you to play processor-heavy games and do massive software builds at the same time. :-)

    (I speak as someone having 4 computers in the house which are on all the time, plus a netbook and a tablet for occasional use.)

    1. Gordan

      Maybe he just doesn't have the space (he already said he has no desk space for a 2nd keyboard). Seriously, creating a setup like this is not difficult if you have hardware that isn't buggy. I have a 12-core (24-thread) physical machine with two VMs given 4 cores (8 threads) each, and I can still run big software builds in the Linux dom0 while having two L4D2 or Borderlands 2 gaming sessions on the go on the same physical machine.

    2. Anonymous Coward

      2-box configuration using Synergy and Portable-VirtualBox

      I use a 2-box configuration (a Windows i5 Haswell laptop, and Debian running on an AMD 6-core tower). I have a 3-monitor arrangement (a 23" monitor for Debian, a 23" monitor for Windows, and my laptop screen). I use a program called Synergy (instead of a KVM switch), which works on Linux, Mac and Windows, to share my mouse and keyboard between the two systems:

      http://synergy-foss.org/download/

      I don't like using VMs - I get much better and smoother performance on bare metal, without any configuration hassles. With this setup I'm only having to use one keyboard and one mouse (which was one of my primary goals - separate keyboards and mice are a major hassle), and I've got access to everything.
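
      In case it helps anyone, the server side of Synergy is just a tiny config file, something like this (the screen names are whatever you choose) - start synergys -c synergy.conf on the box with the keyboard, and synergyc <server-address> on the other:

        section: screens
            winlaptop:
            debiantower:
        end
        section: links
            winlaptop:
                left = debiantower
            debiantower:
                right = winlaptop
        end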

      When I travel with my Windows laptop and I need basic VM functionality for a Linux guest, I use Portable-VirtualBox and keep my Linux guest on a USB 3.0 external hard drive:

      http://www.vbox.me/

      Portable-VirtualBox via USB 3.0 gives me adequate performance for a Linux guest compared to running a VirtualBox instance on an internal hard drive. I can run streaming video, some simple games, compile from source in a reasonable amount of time, etc. And it allows me to keep my Windows and Linux hard drives separate.

    3. Nigel 11

      If there's no space for a two-box solution, what about a two- or more-disk solution? Most BIOSes let you press F10 or similar to select the boot device. All but the smallest desktop cases can accommodate two 3.5-inch or four 2.5-inch drives. Most motherboards support at least 4x SATA; many support six or more.

      (I deliberately don't suggest external USB drives even though USB3 is fast enough and most motherboards boot USB these days. My experience suggests that if a drive tests as low quality, it gets sold in a USB box. It's a good way to experiment with different multi-boot configurations, though. )

  7. Skrrp

    Dual box

    I'm basically running what Bronek is trying to achieve here, albeit with a vastly smaller price tag, and want to add another voice to the 2-box setup.

    I use Linux in work and for work at home, and have a gaming rig running Windows. The Win box is ageing but still suitable for my needs: 3GHz quad core, 6GB RAM. I play games on it a lot and also need to do heavy lifting - in my case it's ffmpeg and kdenlive rather than compiling, but the use profile is similar.

    I have a Mint VM running in VirtualBox on the Windows machine, and give it a stingy 1/2 core (1 virtual core) of the processor and 512MB RAM to work with. Its VDI disk image is out on the NAS box, so the VM doesn't impose any disk I/O overhead on the host. It runs like a slug, but all I use it for is to ssh -X into my Linux laptops and get them to do all the video-crunching work I need, all while running games as if nothing was happening. The increased network I/O for the VM doesn't impact the network, and I can run MMOs and multiplayer FPS games with no lag.

    I did try this config using Cygwin, but found its X server to be unstable. The first remote X program I ran was fine, but further programs would refuse to start, complaining about X on Cygwin. I've found the VM way to be far more stable.

    1. Sultitan

      Re: Dual box

      You've been lucky to find Cygwin-X merely 'unstable', when in my experience it exhibits full-blown mania.

  8. Steve Medway

    In the past I've tried "headless" ESXi 5.1 on a Mac mini (Intel GPU) with virtualised Linux, Windows and OSX VMs (with the GPU passed through), as that would appear to be the best solution, but I never got it to work properly :( ESXi 5.5 appears to have the same issues I had, which is a shame:

    https://communities.vmware.com/message/2330282

    So instead I'd recommend Xen; Debian Stable would seem to be the way to go:

    http://linux-bsd-sharing.blogspot.co.uk/2012/10/howto-xen-413-windows-8-hvm-domu-with.html

  9. Phil W

    Games

    When you said you wanted to play games in Windows, you essentially sealed the coffin on a Linux host with a Windows guest.

    I am a long-time lover of Linux, but it simply isn't practical to use it as a virtualisation host and get decent gaming performance.

    As said above, Windows host and Linux guest is a much better option.

    1. pPPPP

      Re: Games

      Along with others, I agree with this. I've got a games PC running Windows 7 because the majority of games I own run on Windows only, and you can't virtualise for gaming, as you need direct access to the graphics hardware.

      You might be interested in the setup I have on my laptop. I installed Windows 7 first and encrypted it using Truecrypt. I then booted off a Linux installation CD and took a copy of the boot loader into a file using dd. I installed Linux (I use Slackware, but any flavour would do) and created a dual-boot setup using lilo (yes, I still use lilo) to boot from the Truecrypt bootloader, allowing me to boot into Windows where needed.

      I can also boot the Windows partition from within Linux using KVM, by pointing it at /dev/sda for the HDD. This might sound frightening to many, but Windows cannot read the Linux partitions and Linux cannot read the Windows partitions. They don't touch each other. It works.
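
      In concrete terms, the two tricks above boil down to something like this (a sketch from memory; device names and sizes are mine - double-check yours and take a backup first):

        # copy the Truecrypt boot loader out of the MBR into a file
        dd if=/dev/sda of=/boot/tc-bootloader.img bs=512 count=1

        # boot the physical Windows install inside KVM (needs rw access to /dev/sda)
        qemu-system-x86_64 -enable-kvm -m 4096 -smp 2 -drive file=/dev/sda,format=raw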

      Then again, if you're not interested in a Linux GUI, you may be better off with Cygwin and sshd.

  10. Matt Bryant

    Cheaper and a lot simpler in the long run - two boxes.

    Seriously, having spent a fair few years trying to accomplish a similar Nirvana build, I realised it was just simpler to keep the two environments separate, especially as the tasks I actually wanted to run on Linux could be accomplished on an old P4, whereas the tasks on Windows (mostly gaming) meant heavy hardware and direct access to it (IME, Windows on KVM with games = serious hardware lag compared to native Windows). And that's before you start the fun of trying to find Linux drivers for all those top-end hardware items. The result - two environments that have zero interaction and therefore zero effect on each other - has proven much simpler to maintain and run. Sometimes the technically simpler option really is the far better one.

  11. Anonymous Coward

    Apart from the above comments..

    First of all, I agree with the previous comments; you're much better off setting up a Windows host and running the VM there. However, since you mentioned the option of commercial support, the use of BTRFS, and the issue of performance-related tasks, I can't help suggesting something else for that part as well.

    Ever heard of FreeBSD?

    Just like Linux it's a Unix-like environment (in my opinion it stands closer to Unix due to the heritage of the once-available BSD Unix), but the whole hierarchy is obviously a little different. Where Linux consists of a kernel and a whole userland around it (all the tools and utilities to make things work), FreeBSD does it differently.

    Instead it basically consists of a (relatively) small base operating system which provides only the very basic means for a Unix-like environment (mail server, DNS resolver, several firewalls, remote access through SSH, NFS, FTP, or a VPN, etc.). Everything else you wish to install gets installed "on top". The (possible) advantage here is control. Updating the "3rd party software" is a task which is completely separate from updating the main OS. Ergo you'll never have to cope with software packages which might influence the way your OS boots and behaves. Not saying that this happens often on Linux - it doesn't - but it is a risk when updating the system.

    Now, I'm not merely spouting FreeBSD propaganda; there is a very specific reason why I do so. I believe there are several points you mentioned which could be filled in by FreeBSD as well, and in my opinion maybe even better than Linux can at this point. But this is purely a matter of opinion. There's nothing between the lines where I claim that one is better or worse than the other; that's not how this works.

    Performance

    You mentioned performance. Although it can be quite daunting at first, everything which you use on FreeBSD can be built from source (but this is not a requirement). I'm not claiming that building your own software will give you spectacular results when it comes to performance, but it will give you more control over tuning and optionally configuring said software. A very simple (but not the best) example: Apache and all its modules. It will take the system time to go over a directory which has 60 modules in it. It'll take (slightly) less time if it only has 20 or 30.

    Depending on the software you're going to use, you can gain performance. For example, there is software out there which has been built with debugging information by default. It doesn't influence performance that much, but it will get you an edge if you rebuild the software without such settings.

    Needless to say, building third-party software on FreeBSD is extremely easy, as is maintaining said software.
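
    For instance, installing something from the ports tree is just (Apache as the example; any port works the same way):

      cd /usr/ports/www/apache24
      make config          # pick exactly the modules and options you want
      make install clean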

    BTRFS

    I'm not going into a comparison here, but I do think it's safe to say that ZFS is more suitable for production work than BTRFS at the moment, especially considering the heavy development which is still going on. Perhaps needless to say, FreeBSD provides full (native) support for ZFS. And this isn't a "simple" port which people were working on; back in the day, programmers from the ZFS division within Sun Microsystems actually helped the FreeBSD project with the implementation. So it's not simply a "wild hack" or something.

    Note that I'm also not saying that BTRFS is unusable. I'm merely referring to the file system still being under heavy development, which brings risks when it is used in a production environment. I'm not making this up; simply check out the official wiki page yourself.

    Support

    At the risk of bordering on spam... let's just say that there are several commercial vendors out there which can indeed provide support for the FreeBSD environment.

    And there you have it. Once again I'd like to stress that I'm not claiming FreeBSD will be the solution to all your problems, world peace, and a sure means of keeping your system completely in shape. But I do think you should give it serious consideration as well.

    I honestly think you might be pleasantly surprised at what you're going to see.

    1. Bronek Kozicki

      Re: Apart from the above comments..

      Thank you for this. I've had some exposure to FreeBSD (or was it OpenBSD?) back when BSDi meant something, so it is not exactly a new operating system to me ;)

      I know ZFS and FreeBSD fit like a glove, but I cannot see any virtual machine monitor on FreeBSD which would allow PCI passthrough for a Windows guest. This feature is absolutely crucial for me; FWIW I actually plan to run the host OS headless.

      1. Bronek Kozicki

        Re: Apart from the above comments..

        Now, this is really interesting. I just found out about FreeBSD bhyve, and on its FAQ page found this gem:

        Q: Does bhyve support VT-d PCI device pass-through?

        A: Yes, on Intel CPUs. See wiki.freebsd.org/bhyve/pci_passthru

        Q: Could bhyve support UEFI/BIOS?

        A: Yes, this is a priority because it would simplify the booting of non-FreeBSD OSs and implementing integrated video support.

        Q: Could bhyve support VGA graphics?

        A: Yes, this is closely related to UEFI/BIOS support.

        I understand that VGA passthrough is also being actively worked on. It is not ready yet, but I will want to try it when enough progress has been made :)
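
        From what I can tell from that wiki page, the setup would look something like this (completely untested by me; the PCI address 2/0/0 is a placeholder):

          # /boot/loader.conf - reserve the device for passthrough at boot
          pptdevs="2/0/0"

          # then hand it to the guest as an emulated PCI slot
          bhyve -c 2 -m 4G -s 0,hostbridge -s 4,virtio-blk,/vm/guest.img \
                -s 5,passthru,2/0/0 -s 31,lpc -l com1,stdio guest0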

  12. dloughlin

    As others have said, you should be running Windows natively and virtualising Linux. I wouldn't go for the two-machine approach either.

    Buy your high-spec machine and install Windows 8.1. Enable Hyper-V and set up a RHEL/SUSE virtual machine. Hyper-V's overhead is about 10%, so with a couple of Xeons your Linux VM is still going to be plenty fast enough!

    1. Roland6

      Re: Hyper-V Server 2012

      I'm a little surprised that this hasn't been mentioned before, particularly as it is a free download from MS.

  13. wbaw

    You're not going to play 3D Windows games in a VM - not very well, anyway.

    Apart from that, KVM works well.

    Buy another PC & use a KVM switch.

    1. Bronek Kozicki

      With PCI passthrough I believe I can play 3D games quite easily and with native performance. Let me explain - Windows is only meant to be a nice façade, but with full access to a few devices, including the graphics card and USB controllers. The fact that PCI passthrough is only available on Xen and KVM is exactly the reason the topic is titled as it is.

      Underneath this façade I intend to run other, more demanding tasks, which will require heavy IO and CPU - and which make best sense to run under Linux. They can also be headless, i.e. driven via an ssh session.

      I am very familiar with dual boot and it is not suitable here - the family might want access to the PC while I'm in the middle of something I simply do not want to interrupt. With the desired setup I can simply migrate my tmux session doing the work over to ssh on my underpowered laptop, while allowing them to use Windows as if no one else was using the computer at the time (cheating, but useful). Or even mess with Windows without stopping that other task. I simply have no space for another box, nor the desire to put the kind of processing power I'm after into a laptop. It would be very expensive, very heavy and not practical at all.

      Oh, and one more thing. I do not actually play games much (my sons do, but my wife and I control both time and content). This is not a request to help me put a "gaming rig" in a VM.

      1. Gordan

        Just FYI - you can do PCI passthrough on ESXi as well, and many people have successfully gotten it to work with modified Nvidia cards.

        This, however, is probably not a particularly suitable solution, because you would need another machine (e.g. a laptop) to run the VM management tools from, whereas with KVM or Xen the management can be done from the local machine.

  14. Len Goddard

    Unanimous, more or less

    I've never seen such agreement in a computer discussion involving Linux & Windows.

    Currently, most worthwhile computer games need a native Windows platform, so load Windows as the host with your hypervisor of choice and Linux guest of choice. With a little care you could put all the userland Linux stuff on a separate hard drive with a native Linux boot/root partition, and allow access to it both via a virtual client and via dual-boot if you want the full power of Linux for some heavyweight task.

    Having said that, I would go 2-box and put Linux on a Haswell NUC. You can mount this on the back of your monitor, thus consuming no valuable desk real estate.

    Finally, I have a particular dislike of PuTTY. You might consider installing Cygwin and OpenSSH. This will give you a Linux-like command shell under Windows, from which you can use command-line ssh to access your Linux, be it virtual or on another box.

    1. PyLETS

      Putty - how best to use

      PuTTY works fine in X forwarding mode. Install Xming with the -multiwindow flag to provide X support on Windows. After starting Xming, log in with PuTTY in that mode, then launch and display a real Linux terminal (of your choice) on your Windows desktop from your PuTTY command line, e.g. like this:

      xfce4-terminal &

      then you don't have to put up with the crummy PuTTY ssh terminal lacking Unicode character support.
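
      The whole recipe, end to end, looks something like this (the Xming invocation is from memory - check its docs):

        # on Windows: start the X server rootless, with clipboard integration
        Xming :0 -clipboard -multiwindow

        # in PuTTY: Connection > SSH > X11 > "Enable X11 forwarding", then log in and run:
        xfce4-terminal &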

    2. Bronek Kozicki

      Re: Unanimous, more or less

      Right, it seems I failed to explain what I'm after.

    I've added a few more posts in the thread with explanations; hope it will start to make sense now.

  15. WibbleMe

    Install any version of Windows and then just download this Ubuntu exe - you can have the best of both worlds.

    http://www.ubuntu.com/download/desktop/windows-installer

  16. Volker Hett

    I recommend Windows as host, too. Add some SSDs to the setup and use iSCSI for your btrfs needs in a Linux guest.

  17. Arctic fox

    I have to say that I am enjoying this thread.

    The good manners and the experience in these posts are something I am learning a great deal from. A big upvote to you all.

    1. Salts

      Re: I have to say that I am enjoying this thread.

      I will second that.

      Also agree with "Windows host, Linux guest" as the best approach.

      VirtualBox (I know the OP said Xen or KVM) with shared folders, bidirectional clipboard and drag & drop enabled is probably enough for this use case, with as many different Linux VMs as needed - or one Linux VM and Docker: http://docs.docker.io/en/latest/installation/windows/#windows
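
      e.g. from the host, for a VM named "linuxdev" (the name and path here are made up):

        VBoxManage sharedfolder add linuxdev --name work --hostpath C:\work --automount
        VBoxManage modifyvm linuxdev --clipboard bidirectional --draganddrop bidirectional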

      1. NotoriousPyro

        Re: I have to say that I am enjoying this thread.

        Yeah, if you wanna have to reboot every single guest OS every time you update the host... Or if you want worse performance for your guests...

        Any serious server admin does not use a GUI. Plastic admins.

        1. Lotaresco

          Re: I have to say that I am enjoying this thread.

          "Any serious server admin does not use a GUI."

          I'm sorry, but that's the sort of thing said by people who have never, ever in their lives administered a server in an Enterprise environment. It's extremely common, and for good reason, to administer large numbers of servers via a GUI. Enterprise-class storage is largely managed via a GUI, and SDN relies on GUIs to make sense of what is going on. At the most basic level, large enterprise networks are administered via Citrix clients, because that's a good way to ensure that administration roles are separated and accountable.

  18. Ivan Fossitivsky

    Waste of time

    If you are shy about using Linux in lieu of Microsoft, then you should stick to Microsoft. If you ever do grow a pair, then you might consider using Linux exclusively.

    1. This post has been deleted by its author

  19. Morten Bjoernsvik

    vmware and centos makes my day

    Hi

    CentOS 6 is the best host. Linux has way better firewall and routing capabilities, with route, OpenVPN and IPsec; I've had all sorts of problems with routing when using Windows as the host. And it is 100% compatible with RHEL, so all commercial Linux software supports it. It also does not have the dreaded systemd (yet).

    I develop commercial software used by banks/processors, requiring me to log into their systems via all kinds of VPN software and Citrix. I mostly use Windows XP as the guest for that. Some of the guests have failed when running on a Windows host, but never on a Linux host.

    Our software is cluster-based, so I have lots of CentOS 5/6 guests with DHCP, along with database servers (DB2/Oracle Express) and WebSphere MQ guests for delivery. I use the Express editions because they are compatible with the Enterprise editions but I do not have to care about the licensing, and they have lower requirements, so copying VM images around has never been easier. I keep separate guest VMs for each customer, emulating their setup. This way I can build rpms so that a production update is just a stop of the application, rpm -Uvh of the new package, and a start (usually 20 sec).

    When it comes to the virtualization, I use VMware Workstation 9; it works fine with the RHEL6 kernel and everything works - even hardware rendering (I use the binary Nvidia drivers on the host) of Windows games like Skyrim in Win7 guests works excellently.

    Years ago I used VMware Server v2, and afterwards I was forced to use Xen and VirtualBox (didn't want to pay for Linux software). They work fine for stuff that does not require performance. Our software is 25%-50% faster on a VMware guest compared to a VirtualBox guest, so it is a very easy decision to spend $179 on a VMware license. VMware handles IO _WAY_ better than Xen and VirtualBox.

    Personally I love OpenSUSE, but their upgrade policy makes me reinstall every time there is a new release, so I have some OpenSUSE 12.3 and 13.1 guests on which I build and run our software to make sure it works. With CentOS it is just yum update (e.g. to upgrade from 6.4 to 6.5) and all your binaries still work.

    I've created startup scripts which start my database and WebSphere MQ guests automatically on reboot. I have a samba share on the host which I share among all the guests.
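
    The startup scripts boil down to vmrun calls, something like this (the paths are mine):

      #!/bin/sh
      # start the backing-service guests headless at boot
      vmrun -T ws start /vm/db2/db2.vmx nogui
      vmrun -T ws start /vm/mq/mq.vmx nogui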

    Here I keep the source build and release directories and a webserver, so I can get hold of all the software no matter whether the guests are down - e.g. when I'm on a customer VPN guest and need to install software built on another guest that is now down.

    Every guest except the database ones I keep under 20GB in size; that makes it very easy to back up and move around to other VMware instances, such as my laptop, home computer, or ESX/vSphere servers. I have base guest images with the base software requirements, so designing for a new scenario is just a clone away (with an SSD, cloning 20GB takes 20 sec).

    Earlier I used to have several noisy boxes under my desk. They are now all in a single, almost silent workstation. I have around 30 different virtual machines, with up to 8 running at the same time, on a 32GB Intel i7-4770 with a 512GB SSD and a 2TB SATA disk for /shared.

    1. Anonymous Coward

      Re: vmware and centos makes my day

      Very much agreed. VirtualBox is great for lab work and anything that doesn't need performance, but for high-performance production systems only VMware, Hyper-V or similar (Xen?) will do. I've used (and quite frankly love) VMware; it replaces a lot of our legacy platform management software.

  20. John Doe 6

    If it is for "Serious server operation"...

    ...you should get ESXi (which is free when you don't use vCenter) and run everything from there. You'll need the vSphere client (which runs only on Windows), but that's all.

    Virtualization on top of some other OS is not for REAL serious operation; only bare-metal hypervisors deliver. I'm running 7 OpenSuSE-based servers and a Win 7 guest (for admin) on an old HP ProLiant DL385 (dual-core Opteron) for fun... oh yes, I run VMware vSphere for profit too. If you only run 1 server you don't need the cool features anyway.

  21. DMon

    It's nice to dream...

    I love virtualization, but the GPU does not pass through like the CPU does. I wish it did, but it doesn't. Citrix has a very nice virtual desktop, but it still suffers performance-wise. It can run video but cannot play high-demand games.

    That all said, if you are looking for a Linux box that you can also run games on, then you still need to look at a two-box solution. If you are willing to get 32GB of memory and a dual Xeon system, you would probably actually spend less money on two Intel i7 boxes. Take one, run KVM on it, build yourself a nice little VM host and spawn the Linux VMs for the tasks you need. Get one of the Intel Haswell i7-4765T chips: its on-chip GPU, the HD 4600, has good performance and drivers for Linux. That, plus the 4765T's power sipping, makes for a good Linux VM host that isn't going to eat a lot of power.

    Since you are mostly using ssh, this should work great. You can ssh into your KVM host and power the VMs on and off. If you have a 1Gb network you can run X across from the KVM host and load a full GUI - or better yet, use MobaXterm and run your GUIs through Moba. (NOTE: Moba essentially does all the work you would do with Cygwin to run X through ssh. You can load your X apps through it. I have not had any instability issues doing this.)
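
    With libvirt's tooling, powering guests on and off remotely is a one-liner each (host and VM names are examples):

      ssh kvmhost 'virsh start linuxvm'       # power the guest on
      ssh kvmhost 'virsh shutdown linuxvm'    # clean ACPI shutdown when done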

    Now, when Steambox gets going more you may have some better gaming options, but you are still in the two-box world that way.

    1. Gordan

      Re: It's nice to dream...

      Actually, you can pass through the GPU better than the CPU. Unlike the CPU, which you virtualize, you can pass the GPU device completely to a VM. Inside the VM the GPU driver loads as it would on bare metal, provided you have a GPU whose drivers work virtualized - ATI technically works but is too broken to be usable in a serious way; Nvidia Quadro cards work without any issues, as do GeForce cards modified into the corresponding Quadros, but unmodified GeForce cards don't work because the driver has a whitelist of PCI device IDs which it will allow to boot up virtualized.

      Trust me on this - I am typing this on a triple-seat virtualized rig, of which 2 seats are for heavy-duty gaming (with modified GTX 780 Tis), and the 3rd is my Linux workstation.
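      Not my exact setup, but a hedged sketch of what handing a whole GPU to a VM looks like with KVM and vfio (the PCI address and vendor:device IDs below are placeholders - read yours off lspci):

          # find the GPU and its [vendor:device] IDs
          lspci -nn | grep -i vga

          # bind it to vfio-pci at boot via kernel parameters, e.g.
          #   intel_iommu=on vfio-pci.ids=10de:1004,10de:0e1a

          # then give both GPU functions (video + HDMI audio) to the guest
          qemu-system-x86_64 -enable-kvm -m 8192 \
            -device vfio-pci,host=01:00.0 \
            -device vfio-pci,host=01:00.1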

      1. DMon

        Re: It's nice to dream...

        I stand corrected. VT-d/IOMMU equipped systems are a good option. Xen HDX and VMware vDGA seem good for Nvidia-card systems. At present Xen seems to be one of the higher-performing options.

        However, that aside, if you are willing to shell out for a couple of Xeon processors, then my original post is still probably the cheaper and more power-efficient option.

        So yes, it looks like a rather doable dream now. I may have to rethink my own system. Do note that you may want to spring for some good cooling if you are going to be running two Xeons and a pair of high-end graphics cards.

      2. Lotaresco

        Re: It's nice to dream...

        "Trust me on this"

        I do - we have just specified something similar to support CAD workstations in a VMware environment.

  22. Anonymous Coward
    Anonymous Coward

    JEOS host

    ShelLuser's comments on BSD are interesting, but I fear are irrelevant. My experience of people who use "Linux for work" (I'm one of them) is that their choice of distro or kernel has already been made for them. This guy doesn't sound like a tinkerer, but someone who has an end-point in sight (patching gcc FFS - it sounds like he cross-compiles to embedded processors and will already have many thousands of man-hours invested in existing dev).

    But BSD is a nice hairy-arsed environment, and not a toy - just doesn't sound like the right tool for this job.

    However, there was one comment Bronek made which caught my eye - he wants some devices to be visible to Windows, but not to Linux. This seems unlikely in an environment with a Linux host and a Windows guest.

    Which leads me onto an alternative idea. Use a Linux host but within that host use two guests - a Linux guest and a Windows guest. It's what we use at work, and it's not a toy. The host is JEOS Oracle VM manager (broadly built on RHEL 6 and Xen). I'm no expert, but you may be able to slice up your visible hardware between your two guests in just the way you like (USB).

    I have to say, though, that I can't vouch for Windows performance, but our corporate Exchange server is virtualised (twice) in this way, so this isn't a trivial VM environment. The Windows VMs I have access to are quite sluggish, but the memory and processor allocation to these is quite stingy. We use the same to host Oracle databases.

    Just an idea.

    1. Bronek Kozicki

      Re: JEOS host

      @AC thanks, I will investigate JEOS. It definitely seems like a promising direction. The name "Oracle" is a little off-putting, though (but hey, it is still FOSS so I'm not going to complain).

      And yes, I am definitely not going to mess with Linux hypervisor, any actual work or experimentation is going to be in a "sibling" virtual machine. This hypervisor is actually the only Linux OS I want to have an option for support.

      1. Eddy Ito

        Re: JEOS host

        Don't fret too much about it. JEOS is short for Just Enough Operating System, which means that while it's an Oracle VM product, it's been stripped of the extra bits it doesn't need - and you could easily do the same with CentOS or anything else.

        While I'm here, I'll also say I've been looking into SmartOS, which has its roots in OpenSolaris and comes with neat features like ZFS, DTrace, Zones and KVM. I don't know what it supports for hardware passthrough, as I've literally only just started toying with it, but it might be worth a look.

  23. gc1

    It's easy to give a client VM access to a whole physical disk or disk partition. So you can have the best of both worlds - able to run Windows as a VM client, and able to boot native Windows when you need facilities that require it.

    1. Gordan

      Dual Booting Same Instance of Windows native and Virtualized

      This generally doesn't work particularly well. All the underlying drivers will be different, and it is akin to replacing the motherboard with a completely different one - Windows doesn't handle this gracefully at all, and you will more often than not find that instead of greeting you with the login screen it will greet you with a BSOD, when it finds that the paravirtualized SCSI controller it was expecting to find its C: partition on doesn't exist on bare metal.

      1. pPPPP

        Re: Dual Booting Same Instance of Windows native and Virtualized

        Actually, it's pretty easy if you know what you're doing. If you change from an AMD to an Intel CPU you'll usually get a stop 7E, and the workaround is pretty simple: install CPU drivers for both.

        The other BSOD you'll likely get is a stop 7B for the disk controller. Again, you need to install the right driver. You can often do this beforehand, but if you use KVM it will use a standard IDE driver by default. If you want to change to the virtio driver, which you should, start the guest with the option -drive file=/path/to/any/old/file,if=virtio and put the virtio ISO in the guest's virtual CD drive. Windows will find a new drive and install the driver, and you will then be able to boot the OS disk using virtio.
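        For anyone following along, a hedged sketch of that two-step switch (the disk paths are placeholders):

            # step 1: OS disk still on IDE, plus a dummy virtio disk so
            # Windows detects new hardware and installs the driver from the ISO
            qemu-img create -f qcow2 /tmp/dummy.qcow2 1G
            qemu-system-x86_64 -enable-kvm -m 4096 \
              -drive file=/var/lib/libvirt/images/win7.img,if=ide \
              -drive file=/tmp/dummy.qcow2,if=virtio \
              -cdrom virtio-win.iso

            # step 2: once the driver is installed, boot the OS disk on virtio
            qemu-system-x86_64 -enable-kvm -m 4096 \
              -drive file=/var/lib/libvirt/images/win7.img,if=virtio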

        Sometimes Windows is pretty straightforward. Granted, my grandmother probably couldn't do this, but this should make sense to the average techie. It would be nice if Windows let you install drivers from CD or USB during boot by pressing F8, but that would be way too sensible.

  24. fpmurphy

    btrfs with confidence? Good luck! btrfs on Linux is still at the early-adopter stage. PCI passthrough is supported in KVM, as is assigning a host USB device to a guest - see the sketch below.
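    For illustration, a hedged sketch of USB assignment with QEMU/KVM (the vendor/product IDs are placeholders - read them off lsusb):

        # find the device to hand over
        lsusb

        # attach it to the guest by vendor/product ID
        qemu-system-x86_64 -enable-kvm ... \
          -device usb-ehci \
          -device usb-host,vendorid=0x046d,productid=0xc52b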

  25. Robert E A Harvey

    Dual Boot

    My solution has been to dual-boot between Linux and Windows, and use user-space filesystem access to read my music collection in either system. You can also set up Thunderbird to read the same mbox file structure from both, and the Pan usenet reader ditto. By syncing Firefox via the web, the same stored passwords and favourites can be used in both as well.

  26. Richard Lloyd

    I multi-boot *and* use VMs :-)

    I've tended to always multi-boot (mainly between Linux and Windows) and then use VMs (in Linux using VirtualBox) to try out new OS releases. My primary desktop is CentOS 6.5 and I boot into Windows purely to play games, which is currently the only real advantage Windows has compared to Linux (though SteamOS/Steam boxes might make inroads into this).

    Some obvious hardware tips: max out your motherboard RAM (usually 32GB is the limit), get a fast CPU (i7 or equiv) with at least 4 cores, get one or more large/fast SSDs (as well as fast 3TB or 4TB hard drives - I like Seagate's 3TB model myself) and one or more large monitors (I just went to 27" 2560x1440 for my main monitor).

    Although you have to register for it, I can recommend the free Paragon Extfs for Windows at:

    http://www.paragon-software.com/home/extfs-windows/

    It's just about the only free software on Windows I've found that will handle ext2, ext3 and ext4 - very handy if you use any of those filesystems on Linux.

    The next "exciting" OS release for me will be CentOS 7, although I've some trepidation about this because I don't really consider systemd, GRUB 2 or GNOME 3 as improvements compared to their predecessors.

    1. Bronek Kozicki

      Re: I multi-boot *and* use VMs :-)

      Thanks for the link to Paragon, it will come in handy :)

  27. The Original Steve

    Controversial but...

    ...why not Win 2012 R2 Server as the host? Use NTFS Storage Spaces, pass through disks as raw/RDMs, and run Linux as a guest.

    Sounds like most of your requirements are on the Windows side. You can SSH into the Linux VM, and you can snapshot it too.

  28. Sokolik

    Anecdotal Report

    First, I concede freely I am in *'way* over my head here. A mere glance at your criteria makes my head swim. However, I offer you these anecdotes: VMware worked seamlessly with Debian years ago on what would now be an ancient Dell desktop whose specs I no longer remember. Also, as an alternative to virtualization, of course - though I realize in my ignorance I am probably missing the point - I had dual-boot. For that, I used GRUB. I had been a Linux (read, "Debian") fanboi for years. Best regards for success.

  29. This post has been deleted by its author

  30. William Higinbotham

    Have a good backup plan.

    Backup, backup. HDs are cheap these days. Mirror your drives to external ones. Why rebuild, when it is so much easier to restore (unless your MB goes bad and you do not have another of the same - get another board for insurance)? I replace my computer about every 5 years - but I buy cheap low-end kit ($200 Lenovo desktop). Dual boot. I had a VM with OS/2 on it, with the OS/2 Lotus Suite. Was fun. Have fun with your new system. Wish I could afford such a nice unit.

    Billy

  31. Tank boy
    Facepalm

    I got it!

    Buy another computer.

  32. jake Silver badge

    Two boxes is the obvious option.

    Especially at home, where you have the space you might not have at work.

    Me, I run Slackware on the desktops & BSD on the servers and internet-facing systems. The one Windows system is Win2K, which I only use for ACad2K; she's air-gapped.

    Games? Not my cuppa. Follow your bliss.

  33. Anonymous Coward
    Anonymous Coward

    Lateral approach

    You have stressed that you do not want to access the Linux system other than by SSH etc.

    That said, *why* does it need to be on the same physical machine as the Windows setup?

    Build a headless Linux machine and chuck it in a cupboard or your loft.

    As long as you have a directory or volume shared via Samba, you have shared access from the Windows system.

    I've been using a setup like this for around ten years and it works fine - presently Windows 7 on my main desktop machine and CentOS on Linux boxes in the loft: one Atom (Intel ITX board) for a firewall / router / network monitoring unit, and a more conventional board with several TB of storage attached by iSCSI, available to the network using Samba (a sketch of such a share follows below).

    Another one has a six core AMD CPU and 8GB of RAM for software development.

    As of a couple of years ago, they all have SSDs for boot / root drives, with a pair of 2TB HDDs in software raid on the development machine.
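    For flavour, a minimal sketch of the Samba side of such a setup (the share name, path and user are placeholders, not my actual config):

        # /etc/samba/smb.conf
        # directory exported to the Windows box; path and user are placeholders
        [shared]
            path = /srv/shared
            read only = no
            valid users = bronek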

    1. Bronek Kozicki

      Re: Lateral approach

      There is no loft and there is no space in the cupboard. There might be space for a microATX machine or a NAS but that's it - not the kind of power I want and can fit into my existing large E-ATX tower.

      1. JEDIDIAH
        Linux

        Re: Lateral approach

        > There is no loft and there is no space in the cupboard.

        Virtualization is no silver bullet. It doesn't matter what tech you are using. This applies even to high-end Unix server systems. If you have high performance and responsiveness requirements, decoupling two competing operating systems is a better way to go. There are plenty of viable options here. The Intel NUC has been all over the "blogosphere" lately. There's also the Brix. That has plenty of power.

        It's pretty easy to have a "CPU-in-a-can" setup similar to the new Mac Pro. Just put a suitable CPU on a mITX board and put it in as small an enclosure as you want. If you don't care about the GPU in a Linux box you can make a LOT of compromises.

        I would tend to avoid virtualization for anything compute heavy or IO heavy that's bound to come in contention with other VMs or the host OS.

  34. John Doe 6

    Sorry, but I do not understand.

    Bronek wants to virtualize "for serious" and the machine needs to run "Windows on top for others to use"?

    Frankly, Bronek, that's not serious... It is not a setup you will ever see outside the home; there is no market for this except in the spy business.

    Why do you want passthrough? Because you want to hide the fact that the system is virtualized.

    Why do you want to hide it? Probably because you want to spy on someone.

    Now, mister wannabe Snowden, prove me wrong.

    1. silent_count

      Re: Sorry, but I do not understand.

      Tone it down a little on the paranoia, John Doe 6. You boot the machine to look like a Windows box to allow the kids/visitors to use the machine while ensuring that the important stuff, in the host OS, doesn't get trashed.

    2. Bronek Kozicki

      Re: Sorry, but I do not understand.

      @John Doe 6

      I do not quite see where you got the spy aspect from. Serious use can happen at home; that's what my children call "work" when I'm doing such things as trying to write a nice but tricky lock-free data structure, or writing another gcc patch. Both tasks require extremely robust testing with access to as many cores as possible; the latter also needs all the disk IO I can get. Both are best done under Linux. I understand it is arguable how serious a use this is, but when one gets involved in C++ standardization in his private time, the lines get blurred.

      Since I only have space for one good PC at home and my family needs to access it too (I have a family here!), I want to make it suitable for the tasks they usually need it for (which all run under Windows), but without sacrificing the power I want for Linux-based tasks.

      1. Roland6 Silver badge

        Re: Sorry, but I do not understand.

        "Serious use can happen at home; that's what my children call "work" when I'm doing such things as trying to write a nice but tricky lock-free data structure, or writing another gcc patch."

        Finally we're starting to get some of the real needs and requirements out. So you have a family and live in a tiny London flat that doesn't have space for a large book. Whilst there is nothing we can offer about the space constraint other than to suggest you move, we can offer some guidance on the IT.

        Firstly, sharing a PC for (paid) work and family (ie. children) doesn't work long-term. Save yourself a major headache and determine how to get yourself the equivalent of a machine per person, so that ALL of you can do your favourite thing (or important stuff like homework) at the same time and be able to keep working (albeit at a reduced level) when something goes wrong. This obviously means that you will need to also address the keyboard/display issue, which comes back to your space problem.

  35. Tyrion
    Linux

    Dual Boot?

    Why not just dual boot? Use a pure Linux environment when you need it, and a pure Windows when you need that.

    Personally though, I'd ditch Windows altogether and go pure GNU/Linux. If you want to run Windows games, use Wine or something, or find a Linux port. Steam has a lot of games now on Linux.

    Running a VM is incredibly wasteful and slow...

  36. Ilsa Loving

    Absurdly complicated Rube Goldberg "solution"

    If you want a powerful work rig, but your family needs a general use windows machine, then get two machines and use a KVM to switch between them.

    While technically it is possible to shoehorn together two competing uses like you are proposing, you are ignoring the practicality of such a setup.

    The solution you are proposing is absolutely begging for a huge variety of problems, the biggest one being that someone in your family (or a friend) hits the key combination to minimize the Windows VM (which is a trivially easy thing to do by accident) and will then start futzing with your host.

    You can buy a perfectly functional PC in a micro case that you can sit on top of your main box. Infinitely simpler to set up. Infinitely simpler to manage. Infinitely simpler to maintain. Unless of course you have no problem abandoning whatever real work you are doing at the drop of a hat, and then spending an hour or two trying to figure out what part of this jigsaw puzzle broke.

    I used to do similar things to this. Then reality happened and I realized that I just don't have the time or patience for dealing with the inevitable nonsense. And trust me, with a setup like you are proposing, nonsense IS inevitable. And it will very likely happen at a time most inconvenient to you.

    1. Bronek Kozicki

      Re: Absurdly complicated Rube Goldberg "solution"

      If the host is running headless and Windows is given PCI passthrough to the GPU (and some USB controllers), I do not quite see that much potential for problems. Yes, a lot can go wrong, but not as much as you seem to think.

      On the other hand, keeping one box on top of another when the children are fidgeting just next to this stack of boxes, or better yet trying to sit on it... that sounds like a lot of fun ;) I simply do not have the space for two boxes, and dual boot is also out of the question, period.

  37. rleigh
    Headmaster

    Data storage for shared systems

    I've used Btrfs a fair amount as a Debian developer, in order to take advantage of some of its features such as snapshotting and RAID. It has some nice features, but to be completely realistic, it's not anywhere near ready for production use, and isn't likely to be for several years at least. I've had unrecoverable data loss and multiple kernel oopses/panics (though on experimental stuff, so not serious for me). You can't trust the RAID code; a SATA cable glitch made it toast both the transiently failing drive *and* the mirror, turning both into unusable messes which panicked the kernel when you tried to mount them. Coupled with the lack of a usable fsck tool, it would be foolish in the extreme to trust this filesystem with important data. Depending upon the usage patterns, performance can also be awful, though it can be very good. Now, I use it intensively for snapshotted source build environments (schroot), but that's transient stuff I can recreate in minutes should it get blown away. It may become trustable with time, but at present I don't consider it anywhere near that. I think SUSE may regret making it the default; if their users want to be guinea pigs finding all the remaining bugs, good luck to them!

    For Linux, there's plain and boring ext4, or XFS, or ZFS with the appropriate kernel patches. And others as well, but if reliability is the goal, one of those is a good choice.

    One of the most annoying things I've found with both virtualisation and multi-boot systems is data storage. You inevitably end up with data spread over multiple systems, duplicating stuff and wasting tons of space due to disk partitioning, with neither system able to safely or efficiently access the other's filesystems - especially when using VMs, where they may already be mounted. My suggestion for this is to move all the user data off to another system. I got a small HP ProLiant MicroServer and put FreeBSD 10/ZFS on it. Now all the data is available to any system via NFSv4/CIFS, which works nicely for all of: single native OSes, dual-/multi-boot systems, and VMs. ZFS also eliminates the inefficiencies of partitioning wasting space: all the data is in a big pool split into datasets (which can have quota/size limits) - see the sketch at the end of this post. The choice of OS for this was really down to native ZFS support, including in the installer, and a desire for something new to play with. Debian GNU/kFreeBSD can also do this.

    With the above setup I have EFI/GRUB2 booting of Debian GNU/Linux, Gentoo, Windows 7 and BIOS booting of Debian GNU/kFreeBSD and FreeBSD 10 on an amd64 system, and OpenFirmware booting of Debian GNU/Linux and FreeBSD 10 on a PowerPC system, all using the same user filesystems. There are also a fair number of KVM and VirtualBox images, all of which can also take advantage of the shared storage. The local storage can then be small: it's just the basic OS install and temporary scratch space.
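    To make the pooling point concrete, a hedged sketch with placeholder pool and dataset names (ZFS syntax, as on FreeBSD):

        # one pool, many datasets; quotas instead of fixed partitions
        zfs create -o quota=500G tank/home
        zfs create -o quota=200G tank/media

        # export directly via ZFS properties (NFS here; CIFS goes via Samba)
        zfs set sharenfs=on tank/home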

    1. Bronek Kozicki

      Re: Data storage for shared systems

      I have a MegaRAID 9265-8i and can buy some expanders for it - that gives me a ridiculous amount of storage (and the single case I have here is large enough for at least 8 HDDs). So this is probably not going to be a problem.

      You gave me a good reason to consider ZFS again, thanks for that. Perhaps with enough RAM given to the host it will work fine.

      1. Gordan

        Re: Data storage for shared systems

        Indeed, ZFS is very much the way forward.

  38. Daniel von Asmuth
    Holmes

    How about VMware?

    With VMware vSphere you can run several virtual machines side by side, and you get a nice graphical GUI with a thin client on your desktop. Memory and CPU power will be allocated to the OS that needs it. The alternative would be to run Linux in a VM on Windows 2012, but I am unsure what that will do to your performance. The members of your family can have a personal VM each.

    The best performance is when each VM gets its own disk volumes as needed. For shared data, you can use a separate Linux FS and use Samba, NFS, or AFS to export it to the other VMs.

  39. Woenk

    two boxes

    I would never, ever use one box for working AND games, especially if somebody else is fiddling with it.

    Also, I am not convinced that a virtual machine can run a game smoothly enough, no matter what hardware is used.

    I think I would use a dedicated Linux box with the needed hardware, store it safely where no one can get to it, and access it cheap and dirty via VNC...

  40. Mad Chaz

    I would suggest staying away from KVM and going for Xen instead. KVM has poor USB support, as far as my experience is concerned. KVM is the most resource-efficient, but Xen is a lot more stable and mature. It also works a lot better for USB and PCI devices.

    Be very careful about what hardware you get, and make sure you research its support for PCI passthrough, as this will be the most difficult bit and is what will make or break your rig, so to speak.

    ATI has the lousiest support for virtualization in the world, so I'd suggest sticking to Nvidia cards for graphics.

  41. Kiwi
    Linux

    Have to say thanks..

    I love what you're trying to do and would love to see the result, or at least see a good write-up on the finished product - and not just a how-to or basic list of configs like many are. Sounds like an interesting project.

    I've learnt a bit more about what's out there VM-wise myself. I've used them a fair bit over the last few years, but never really pushed them - only using them to run test/safely-infectable installs of Windows and to test/learn software and so on. But now I am inspired to take a deeper look into what's there :)

    Have to thank you for your questions, and for putting up with the people who haven't read your message - like the important bits where you say you don't have room for another machine - and who go on to tell you why you really only need a second box when you've made it clear that's not an option :)

    That said, if you're letting others (especially the kids) get on, you really should think of ways to give them a totally separate machine, even a laptop or cheap tablet. I've seen too many cases where kids manage to screw up machines quite badly - or at least get blamed for it. For now you may be quite secure, but they'll be watching and learning every little bit they can; they'll grow older and bolder/stupider, and given a chance they will think they know everything, try something, and you end up in a world of hurt.

    Don't get me wrong, I really want you to succeed with your plan and I want to see how it's done in the end - but back up everything, and back it up well. External drives in another location (in case said kids screw up and then trash the backup while trying to recover), and when you can manage it, give the brats their own machine so yours survives :)

    Good luck with it :)

    1. Woenk

      Re: Have to say thanks..

      Not having room for a second box is really not an argument....

      The Linux box will probably be a bit bigger with the stuff he wants to do, but a Windows box fits anywhere.

      And with a remote connection to the Linux box it does not matter where it is - it could be in the cellar or in a cloud.

  42. Anonymous Coward
    Anonymous Coward

    Sounds to me like Robolinux may be your answer

    I don't know much about http://robolinux.org/, but I have installed it with a view to testing it out.

  43. Remus4271e

    Solution to the Space problems

    I think you should opt for the two-machine solution.

    Having kids with access to a production machine is suboptimal (to say the least).

    Regarding your no-space-for-a-second-machine problem: just strap it under the ceiling. Two U- or Z-shaped brackets should do the trick.

    That way you don't need to worry. Your kids can toy with your Windows machine; your Linux machine is safe and sound, out of their reach.

  44. BinkyTheMagicPaperclip Silver badge

    KVM is your best option, if you insist on your requirements

    Whilst I'd suggest Linux on Windows is your best option overall, if you insist on your requirements KVM is the best option.

    Xen is a great piece of software, but in the region of VGA passthrough it is decidedly inferior to KVM. You'll not get any support if it does not work.

    Xen supports only Quadro devices for reliable passthrough. KVM supports AMD and Intel, but you will need a very recent Linux kernel (3.12+) and patches, plus a recent, patched Qemu. Google 'vfio VGA reset'.

    Be very careful with your hardware choices and read around the subject first. Also, read the motherboard manual cover to cover before purchase. My motherboard, for instance, supports graphics cards at 4x PCIe speed in only one slot - all others are limited to 1x by the chipset.

    Also be careful with USB passthrough. In Linux/Windows it mostly works. In BSD it does not - you have to do a VT-d passthrough rather than a single-device passthrough in Qemu. You cannot usually pass through a single port, due to IOMMU groups. USB works by allocating a pool of resources to a certain number of ports. Again, on my motherboard I have five USB ports. These can be passed through in two IOMMU groups - so 2/3 or similar? Wrong! 1/4, or 4/1!

    I'm doing this because I enjoy fiddling with low-level Linux and virtualisation, and I'm stubborn. It's still leading-edge stuff and you can expect to encounter pain. Be very familiar with IOMMU groups, PCIe bridges, FLR, VT-d and whether your cards support the various types of reset before you buy any hardware. A small sketch for checking IOMMU groups follows below.
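    As a starting point, a small sketch for inspecting IOMMU groups before committing to hardware (assumes a kernel booted with intel_iommu=on; the paths are standard sysfs):

        #!/bin/sh
        # list each IOMMU group and the devices sharing it;
        # devices in one group can only be passed through together
        for g in /sys/kernel/iommu_groups/*; do
            echo "IOMMU group ${g##*/}:"
            for d in "$g"/devices/*; do
                lspci -nns "${d##*/}"
            done
        done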

    1. Bronek Kozicki

      Re: KVM is your best option, if you insist on your requirements

      Many thanks! Kernel 3.12 is what I have in mind. Could start with Fedora 20 (for initial setup/learning) and later migrate to RHEL7 when it's ready (and my setup is ready too).

  45. BinkyTheMagicPaperclip Silver badge

    It does work, incidentally!

    Just beware you'll have issues. On Windows, Catalyst must be installed manually by selecting the driver, then installing the CCC MSI. Running the installer directly always results in a blue screen.

    It is fast enough to run games. You'll have to fiddle to get the best disk performance - use the virtio drivers. If anyone is trying to run an older OS, be aware that KVM/Qemu creates a VM which is quite similar to a Q35 chipset, but with differences. With an ancient OS you may need to use the Qemu 'pc' architecture (i440FX). It may also be necessary to use a CPU type of qemu64 or qemu32 in some cases, rather than 'host' or enabling KVM.
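    To illustrate those knobs, a hedged pair of QEMU invocations (the disk images are placeholders):

        # ancient guest: i440FX machine type, generic CPU, IDE disk
        qemu-system-x86_64 -machine pc -cpu qemu64 \
          -m 1024 -drive file=os2.img,if=ide

        # modern guest: KVM acceleration, host CPU passthrough, virtio disk
        qemu-system-x86_64 -enable-kvm -machine q35 -cpu host \
          -m 8192 -drive file=win7.img,if=virtio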

    Remember that the VFIO or pci-stub driver is separate from KVM. Passthrough works without it; KVM only provides acceleration, which is usually (but not always) faster.

    The virtual PC that KVM, Qemu and Xen create is similar to a real PC, but it is not the same. OS/2, for instance, does Weird Shit(TM) on install (to be precise, a non-mainstream OS may tickle the VM in a way that's entirely valid on real hardware but freaks out the VM).

    If I were doing this professionally, I'd use Xen and stick to a released version, ideally without VGA passthrough. XenServer is now free and a nice piece of kit.

    What I really should have done is buy a dual-Xeon system with a Quadro and run Xen. What can I say - I'm waiting for Haswell-E before upgrading and spending lots of money. In the meantime I'm running an unusual Core2Quad system with a 6950 (pre-Nehalem VT-d works, but has no interrupt remapping).

  46. Luke McCarthy

    Avoid btrfs

    It's not really ready yet. I tried it out for a while but went back to ext4 after an unrecoverable filesystem corruption (and there's still no fsck tool).

  47. Trollslayer

    No objective

    If you aren't clear about what it is to do how can you make a choice?

    It seems to be about trying this or trying that.

  48. Andrew Barr

    Maybe a simpler solution,

    If you have limited desk space, but are happy to spend on hardware, then the solution is to have a box under the desk running a headless Linux configuration, and then replace the keyboard, monitor and mouse with a laptop running Windows - making sure that the laptop is good enough to run your games. If desk space is at too much of a premium, move the laptop to somewhere else in the house with more space?

    Always go with as simple a solution as possible; if there is a problem with the hardware you could lose both systems!

  49. dansan

    By the way, PCI passthrough works fine with ESXi. I'm using ESXi as my host machine, and passing through the video card to a Windows VM. I set up the VM to auto-start when ESXi starts, so as soon as it loads it automatically starts the VM, and on screen you see the Windows VM. I'm also passing through USB ports to plug in a keyboard, mouse, and an external hard drive. All has been working perfectly for the last year or so.

  50. Fenton

    Cloud?

    How about building yourself a small gaming rig? Mid-range CPU and a fast graphics card.

    Get yourself an account with your Cloud vendor of choice with a nice Linux build on there.

  51. Anonymous Coward
    Anonymous Coward

    2 Box solution

    A late entry, but I'd support the two-box solution. You say you don't have space for two boxes, but you could easily fix a bracket under the back of your desk to mount a netbook running Ubuntu Server; it'll just sit there out of the way. Even wall-mount it - it's only the space of a book. I've been running such a setup for quite a few years. Your family get a full-blown Windows machine and you get a decent Linux server. Quite honestly, I don't understand why you would even consider all the faffing about trying to do this with one box.

  52. Robert E A Harvey

    Space

    If space for a 2-box solution seems problematic, how about something that will hang on the VESA mount on the back of the monitor:

    http://www.fit-pc.com/web/products/mintbox/

    http://www.solid-run.com/

    etc etc etc

    1. Bronek Kozicki

      Re: Space

      The size is the problem here. I want a "no compromise" solution where Linux has lots of power (a two-socket, E-ATX-sized motherboard) for serious stuff like building gcc very often. For Windows I also want a "no compromise" solution with a strong (and by necessity large) GPU card and some interesting peripherals. This would normally require two large boxes, for which I do not have space. But I do not need much processing power for Windows, and I do not need any interesting peripherals, not even a GPU, for Linux. Ideally Linux will run headless and will provide network services for the Windows guest (mostly sshd and a filesystem).

      These systems are meant to complement each other, so why not use the parts in the same way and actually put them in one box? If this works, I will also gain something no two-box solution can give: the flexibility to move resources between systems as I see fit, simply by configuration tweaks. And there is convincing evidence that such systems have been built and are known to work, also in this thread.

      1. JEDIDIAH

        Re: Space

        "Building gcc very often" is a batch job, not something to drive the spec of a box. Even so, there are plenty of single cores that will be quite respectable for that kind of workload. Even some older hardware (like my Hex core) would be respectable for that sort of thing.

        1. Bronek Kozicki

          Re: Space

          "Building gcc very often" is a batch job

          For me, making a change in a complex program with very little type safety in its design is a continuous process of making changes, building it and then running tests. And waiting, while build and tests are running.

          If you have better workflow for making gcc changes, I'd love to learn about it.

          1. Roland6 Silver badge

            Re: Space

            >If you have better workflow for making gcc changes

            If this is your main requirement then, as it is a batch job, it would seem an ideal task to off-load to the cloud, i.e. off-site...

            Whilst this might at first examination seem expensive, remember the size of system you are intending to build won't be cheap, and that is before we consider the heating and ventilation requirements and the noise of fans.

            Personally, if you have space for a tower system that can support 8 HDDs plus all the other stuff, you've got space for a small cluster of blades, which might be a better solution to your family's computing needs.

  53. Slim

    Specifics

    When you say you want to be able to play games on the Windows side of things, is there any chance you could give an example of the sorts of games being played, and how well (graphically) you want to run them?

    Also how big of a space have you got for the computer? If you do decide to go down the two computer route, how about this case? http://www.mountainmods.com/u2-ufo-duality-mirror-black-powder-coat-solid-top-p-390.html

    Would running Windows as the host with VMware being able to allocate physical memory and processors negate the issue you have with Windows taking resources away from the Linux install?

  54. Old Guy

    Are you looking at the problem the right way?

    Correct me if I'm wrong, (I know I probably am ;o)) but I have been intrigued by this discussion since I first read it 12 hours ago.

    What you basically want is:

    1. A PC which is capable of running some Windows software and allowing the OS to see "decent hardware".

    2. You normally prefer to use Linux as a working platform but are happy to access it via SSH.

    3. You have a serious space limitation which apparently precludes multiple computers.

    4. Your budget for a perfect solution is potentially quite high.

    Looking at the replies you've received, there are a lot of brighter people than me out there with good ideas about how you can do what you want with the technology available, but many are basically saying "virtualisation isn't there yet". Also some of them are really saying: you need 2 computers.

    I know this sounds strange but have you considered putting 2 complete computers in the same case? Even building them into your desk like the Power Desk concept from the early 90s might work.

    Think about it: get a couple of cheap cases, cut them up and fit them into the space available. Build computers in those cases, including appropriate hardware, clad the exterior in metal or even wood, paying particular attention to really good airflow. Get a KVM switch for those rare times when you actually want to log onto the Linux box directly.

    Screw it to the desk to make it harder to nick, problem sorted.

    Or am I talking b#@*@~ks?

    1. Bronek Kozicki

      Re: Are you looking at the problem the right way?

      I think you captured my requirements almost perfectly, and thank you for this summary. There is one more which I assumed was implied (from the subject, perhaps?). I do not quite believe that "virtualisation isn't there yet" unless my own experience tells me so. So the implied requirement is: make the virtualisation do the work. I will share here later whether it worked (or it didn't).

      PS Really, one average-width tower case is as much as I can fit in here. And perhaps some tiny NAS or microATX box in the corner (but this goes against Linux having the CPU power I want it to have).

      1. Anonymous Coward
        Paris Hilton

        Re: Are you looking at the problem the right way?

        >one average width tower case is as much as I can fit in here

        The mind boggles - do you type standing up? Have you considered buying a bigger place, or renting a lock-up? I'm imagining something out of Extreme Hoarders here, or the Four Yorkshiremen. Ee, you wer lucky, ah used to dream a typing standing up. Well, when ah say standing up it wer really... Where do you put your coffee mug? Can you hang a second box out of a window? Someone suggested hanging one from the ceiling; don't tell us that space is all taken as well.

        Seriously, do you have a printer? If so, and it's not wireless and already in another room, get a wireless one and put it somewhere else. What about two boxes side by side, with a bit of audio insulation on top and then the printer?

        And I can't believe I've used the Paris icon, but it's as close to being puzzled as there is.

        1. Bronek Kozicki

          Re: Are you looking at the problem the right way?

          If I were a lone wolf then I would just put a server anywhere. As things are, I must consider my wife and children. And yes, moving out is definitely part of the plan, but if you haven't noticed, the property market is behaving rather strangely, especially in London. So this will take some more time and preparation. The printer is on a stack of drawers which sits on top of a desk next to the 30" monitor, under which are a large document shredder, a subwoofer, my legs, spare toners and lots of cables. YES IT IS F*G CROWDED HERE. You have to come and see what builders call a "flat" in this part of the world.

    2. Roland6 Silver badge

      Re: Are you looking at the problem the right way?

      >I know this sounds strange but have you considered putting 2 complete computers in the same case

      This was an option I've been considering. Back in the '80s there was a UK company that sold PC motherboards that were expansion cards - they sat on the EISA/ISA bus with some software running on the motherboard to co-ordinate disk and network access. Obviously each PC required a keyboard, mouse and monitor (not forgetting licensed software).

      Looking around the web there are companies that offer multi-seat solutions that provide additional hardware so each user has a dedicated graphics adaptor but shares the host motherboard and OS (e.g. Buddy B-680 Premium/Lite, NComputing, SoftXpand).

      Obviously with Windows MultiPoint Server, MS also have an offering in this space. This planning guide might be useful: https://www.microsoft.com/en-us/download/details.aspx?id=18482

      But I've not been able to find any recent products, although perhaps someone sells a small blade enclosure into which you could slot server blades...

  55. lunatik96

    There is an alternative.

    You could have an AIO - All In One (built into the screen, so no more space required) - computer running Windows, connected via the network (wireless if desired) to the Linux computer, running ssh in a window from the Windows box. This way, you would have your Linux stack running natively and your Windows machine running natively. If required, you can connect the Linux video via HDMI to your AIO monitor and use your wireless keyboard and mouse. However, it does still seem the long way home (more expensive).

    Again, the specifications seem arbitrary. It would help if you could state specific objectives (e.g. compile a whole Linux distro and build it into packages overnight, or in 4 hours).

    The more I read this thread, the clearer it is that the OP is hell-bent on having Linux run as close to bare metal as possible. This solution achieves that desire. What I would do - and I agree with most posts saying this - is run Windows (2000 to 8.1, depending on the task) and then Linux in a VM such as VirtualBox. That is the easiest and most usable route. I like having an XP VM just in case. Since it doesn't get any more updates, it is very stable for a private network.

    The reason so many advocate this solution is that Windows graphics run best on a Windows machine. The VM video interface just works better for that combination, with Windows as the host. Most video cards/devices are designed to run DX* (the Windows API). Using DX* to emulate OpenGL is easier, as more info is available. Since the GPL conflicts with the IP ownership of most graphics companies, they are not real keen on supplying chip-level commands/access/documentation. Without access to the hardware documentation, Linux drivers have trouble supporting the Windows DX* graphics API. Yes, I heard Nvidia say they are going to play nicer - wake me up when you see a finished combination. Yes, they support OpenGL, which is great.

  56. swampdog

    You've got two mutually conflicting issues. By far the simplest way to solve it is to have two machines. A bit of lateral thinking... literally. Bolt the Linux box to a wall shelf, side on, out of the way.

  57. Conall

    http://www.overclock.net/t/1205216/guide-create-a-gaming-virtual-machine

    Personally, CPU-wise the Core i7-3960X seems best [supports VT-d (C2 stepping only)], which is probably what I'd go for.

    6 cores / 12 threads at 3.3GHz (3.9GHz boost).

    Whatever you decide, Sandy Bridge-E/EN or EP is the way to go IMO. If you opt for Ivy Bridge, read up on the overclocking problems associated with the cheap TIM paste they used!

  58. Bronek Kozicki

    Update

    So, the hardware is ordered.

    All should work well with VGA passthrough, and even if virtualisation turns out to be too difficult, it's going to make some nice Linux server (and a GPU for my old PC).

    * SuperMicro X9DA7

    * 2x Xeon E5-2630V2

    * Kingston DDR3 PC-1600, Registered ECC

    * Sapphire Tri-X R9 290X

    I guess if I really have to make this Linux box a separate server, I can always put it into a 2U rack case and slide it under a bed ;D

  59. Lennart Sorensen

    I have no experience with Xen, but at least the KVM information says that while device passthrough is supported, video card passthrough is NOT. A few people have managed to get it to work with some patching.

    I do, however, see some documentation of it having been done successfully with Xen.

    I agree with other people on btrfs. The developers say it isn't ready for production use. Of course, if your machine is just to run games and do some hobby work, then that might be good enough.

    I too would avoid AMD graphics cards. I also personally am a Debian fan, and have no interest in or appreciation for the commercial Linux distributions.

  60. Anonymous Coward
    Anonymous Coward

    what solution have you chosen in the end?

    Bronek Kozicki - I have registered on this forum only to ask this: I am very curious, what solution have you chosen in the end? Googling 'xen vs kvm for windows' gives this thread as the first result! :)

    Back in 2007 when I bought my current PC, having exactly your needs (except that I don't patch gcc, but run some programs on Linux - proxy, firewall, other VMs - and want a single box for energy costs), I wanted PCI/VGA passthrough, but it was in its infancy.

    Now that PCI passthrough seems more mature, I'd like to try it again. From the thread I've collected mostly opinions for a Xen-based solution (XenDesktop, XenServer, Xen HDX, Oracle JEOS, CentOS, Debian), one for ESXi and one for KVM. I'm quite interested in the final choice.

  61. MustyMusgrave
    Stop

    All I hear

    All I hear from this comments thread is:

    Linux - Linux - Linux....

    The difference between Linux & BSD?

    Linux is what happens when you get a load of PC hackers that want to port Unix to a PC.

    BSD is what happens when you get a load of Unix hackers that want to port Unix to a PC.

    You do the math!

  62. Bronek Kozicki

    Lots of time has passed

    ...for those who wonder what I chose in the end: I've been successfully running the following stack for nearly a year:

    • Arch Linux running as a headless hypervisor, where I configure, build & sign my own packages for the software stack mentioned below, when and as I feel like upgrading
    • a kernel build closely following the current version from www.kernel.org, only a little behind for the sake of ZFS on Linux (currently 4.0.9, waiting for the ZOL 0.6.5 release before upgrading to 4.1)
    • ZFS on Linux, current release + occasionally a patch or two (currently 0.6.4.2 with a single patch from pull request 3344)
    • KVM with VFIO GPU passthrough: AMD GPUs passed to Windows 7 (two GPUs, two Windows 7 VMs, plus some more VMs without a GPU; all have the qemu guest agent installed). The Linux console is on a serial port only (and of course there is ssh access). The Linux radeon drivers are blacklisted
    • qemu, currently version 2.3.0; will upgrade soon to 2.4.0 (or perhaps 2.3.1, if I do not like it)
    • libvirt, with libvirt-guests to start and shut down the VMs at the right moments. I patched libvirt-guests a little to use --mode=agent when shutting guests down
    • VM disks are set up as ZVOLs on ZFS, and all VMs are snapshotted every now and then (alongside the user files, below)
    • a filesystem on the same ZFS pool is shared under Samba as a fileserver for user files
    • ZFS is also used for the Linux root, home and build directories (see the top point)
    • Samba 4.2.3 running on a separate pocket-size PC ("next unit of computing", as Intel calls the format) as an AD controller, to which both the Samba running on the host and the Windows 7 guests are attached as members. A second AD controller (also Samba 4.2.3) runs in a VM, just in case
    • zfs send | zfs receive, run occasionally to a separate ZFS pool as backup (offline when not doing backups)

    There are small quirks, and one has to be careful with upgrades, but overall it works pretty well. A rough sketch of the ZFS bits follows below.
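    Roughly, and with placeholder pool/dataset names rather than my real ones, the ZVOL and backup bullets above look like this:

        # a ZVOL as a VM disk
        zfs create -V 40G tank/vm/win7-disk0

        # periodic recursive snapshots of VMs and user files
        zfs snapshot -r tank/vm@2015-08-30
        zfs snapshot -r tank/users@2015-08-30

        # incremental zfs send | zfs receive to the (normally offline) backup pool
        zfs send -R -i tank/vm@2015-08-01 tank/vm@2015-08-30 | \
            zfs receive -F backup/vm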

    1. Roland6 Silver badge

      Re: Lots of time has passed

      Bronek, thanks for reporting back in such detail.

      Don't know why you got a downvote, other than to suspect you probably didn't go with some fanatic's 'ideal' solution.

  63. auden12

    Right, it seems I failed to explain what I'm after. I've added a few more posts in the thread with explanations; hope it will start to make sense now.

  64. ikomrad

    Neither, for your use case. Things that you can do that will make sense: dual boot, or run a hypervisor on a separate box (which is what I do). I bought an Intel NUC PC which fits in the palm of your hand, and I installed VMware ESXi (the free version) on it, though you can install another hypervisor. This gives you an almost invisible box to run several VMs on at a pretty low cost. It's also very flexible, so that you can erase the SSD and load Windows/Linux on it if you no longer need VMs.

  65. adrienne224

    I was thinking of a couple of boxes: a Windows games machine and session client upstairs, and a big Linux box in the basement, as the OP likes his SSH and remote access.
