Best setup ever:
1)install Linux on one drive.
2)install Windows on a second drive.
3)boot from GRUB on the first drive and add an entry to boot Windows (a sketch of that entry is just after this list).
4)format a 3rd drive ext3 (or optionally dos). Mount this puppy at /home or even /home/user.
5)don’t let windows touch your Linux home drive ever. Fuck windows and Microsoft. Both can suck my entire ass. If you ever need to share files between these systems use a pen drive. Microsoft doesn’t deserve you. Just use it as a last resort, do your thing and GTFO ASAP.
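In case it helps anyone, here’s roughly what the step 3 entry looks like. This is just a sketch assuming a UEFI install; the UUID is a placeholder for the EFI system partition on the Windows drive (get the real one from blkid):

    # /etc/grub.d/40_custom -- gets appended to grub.cfg when you regenerate it
    menuentry "Windows" {
        insmod part_gpt
        insmod fat
        insmod chain
        # placeholder UUID of the Windows drive's EFI system partition
        search --no-floppy --fs-uuid --set=root XXXX-XXXX
        chainloader /EFI/Microsoft/Boot/bootmgfw.efi
    }

Then regenerate the config (update-grub on Debian/Ubuntu, grub-mkconfig -o /boot/grub/grub.cfg elsewhere). Enabling os-prober and letting GRUB find Windows on its own also works.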
I’ve got this setup, but optimized slightly:
LOL exactly!
That space at the end of 1) is doing some heavy lifting.
Time to install to OneDrive.
Just a heads up to anyone reading this: Don’t format your home folder as FAT32/NTFS. Some stuff in there needs Linux-specific permission bits, and you might be limited in terms of maximum file size (FAT32 caps individual files at 4 GiB).
Consider mounting at /home/username/shared or something instead if you want a shared drive.
I used to run Windows on an eSATA drive that I would only power up occasionally in order to game, and it still somehow – and I don’t remember how – managed to ruin my computer.
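If you do go the shared-drive route, the fstab line looks something like this. Just a sketch: the UUID is made up (get the real one from blkid), the path assumes your username, and it assumes an NTFS partition mounted through ntfs-3g:

    # /etc/fstab -- shared data partition both OSes can read and write
    # placeholder UUID and username
    UUID=0123456789ABCDEF  /home/username/shared  ntfs-3g  defaults,uid=1000,gid=1000,umask=022,nofail  0  0

The uid/gid options make the files show up as owned by your user, since NTFS can’t store Linux ownership, and nofail keeps boot from hanging if the drive isn’t plugged in.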
Yeah, isolated home drive is the way to go. You just nuke Linux and windows and restart but your stuff is safe.
Does this work to prevent Windows from fucking your bootloader in all cases? Also I don’t quite get the importance of step 4?
Step 4 is in my opinion the most clever and important part… Basically if you remove your home drive and boot, you get a vanilla computer. If you put it back, you get your computer back… i.e., if you fuck up your Linux or Windows install, you just remove your home, reinstall blind and put your home back in… like you never left!!! Plus if your drive for the OS dies, you can just make another! Or you can even take your home folder with you from one Linux box to a new one in the blink of an eye… a very slow blink… Hold on, I’m still pulling the drive… open slowly… Done! See? Easy!
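For what it’s worth, the removable-home trick works best if the drive is mounted by UUID and marked nofail, so the machine still comes up “vanilla” when the drive is pulled instead of hanging at boot. A sketch of the fstab line, with a placeholder UUID and ext4 standing in for ext3:

    # /etc/fstab -- home lives on its own drive
    # nofail = boot continues normally if this drive isn't present
    UUID=11111111-2222-3333-4444-555555555555  /home  ext4  defaults,nofail  0  2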
What’s wrong with a VM? I set up a Win10 instance in VMM right after I switched to Linux full time 10 months ago, but I had to use it exactly once to configure the RGB on my keyboard, and haven’t had a reason to boot it up since.
From what I understood, it runs on ‘Bare Metal’ which means that it theoretically should perform just as well as if you booted into it, with the only overhead being the *nix which is minimal.
I’m not saying it’s better, I’m honestly asking because I have very little experience with it.
I used to dual boot back in the day, but that was when I was still on HDDs and the long ass boot times meant I usually just stayed in Windows if I was planning on gaming that day.
That’s not how that works. I think you’re confusing bare metal with a bare-metal hypervisor. The latter is meant to mean a Type-1 hypervisor, which KVM isn’t anyway, but that’s another story.
Without GPU passthrough you aren’t going to get anywhere near native graphics performance for something like gaming. I’ve also had issues with KVM and libvirt breaking during sleep. It’s a lot more janky than you make out.
Well it does seem to be a somewhat confusing subject, so forgive me for getting it wrong. I must have misunderstood or misremembered the information I read when setting up the VM 10 months ago. As I said, I have very little experience with them and was honestly just asking if it’s not almost as good. I wasn’t trying to ‘make it out’ to be ‘not janky’.
According to Wiki, KVM "is a … virtualization module in the Linux kernel that allows the kernel to function as a hypervisor."
I wasn’t aware that there was a distinction between a Hypervisor and a ‘Type-1’ Hypervisor, but now I know so thank you for clearing that up for me.
According to this wiki, it seems like GPU passthrough is possible with KVM if your system supports IOMMU, which mine does. But it looks like you also need a separate GPU to do that, so that answers my question about whether it’s nearly as good as dual booting.
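For anyone else checking whether their machine can do passthrough: once IOMMU is enabled in the firmware and on the kernel command line (intel_iommu=on on Intel; AMD usually has it on already), the usual read-only check is to list the IOMMU groups and see whether the GPU sits in a group by itself, or shares one only with its own audio function:

    #!/bin/bash
    # list every IOMMU group and the PCI devices inside it
    shopt -s nullglob
    for group in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${group##*/}:"
        for device in "$group"/devices/*; do
            echo -e "\t$(lspci -nns "${device##*/}")"
        done
    done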
Every game I have attempted to run has just worked and they seem to run just as well as they did in Windows, so I guess I’m lucky I don’t need to really worry about dual booting or VMs. I was just kind of wondering if it would work if I did need it, since that seemed like it would be a lot simpler than booting into a different operating system.
Yes I know GPU passthrough is possible. Almost no one does it as consumer GPUs don’t normally support the virtualization technologies that allow multiple OSes to use one GPU. It’s an enterprise feature mostly. There are projects like VirGL that work with KVM and QEMU, but they don’t support Windows last I checked, and are imperfect even on Linux guests. I think only Apple Silicon and Intel integrated graphics support the right technologies you would need. Buying a second GPU is a good option, although that has its own complexities and is obviously more expensive. Most modern consumer platforms don’t have enough PCIe lanes to give two GPUs full x16 bandwidth. There is a technology in Windows called GPU paravirtualization to make this happen with Hyper-V, but you have to be using a Hyper-V host, not a Linux based one. It’s also quite finicky to make that work.
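For the second-GPU route, the usual approach is to bind the guest card to vfio-pci at boot so the host driver never claims it. A rough sketch, with made-up PCI IDs (the real ones come from lspci -nn, and you want both the GPU and its HDMI audio function):

    # /etc/modprobe.d/vfio.conf -- placeholder IDs for the passthrough GPU and its audio device
    options vfio-pci ids=10de:1c03,10de:10f1
    # make sure vfio-pci loads before the normal GPU drivers
    softdep nvidia pre: vfio-pci
    softdep amdgpu pre: vfio-pci

After that you rebuild the initramfs and hand the device to the VM through libvirt. It works, but it’s exactly the kind of extra complexity I mean.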
Out of interest what games are you running that don’t need GPU performance? Basically any modern 3D game needs a GPU to run well. Obviously 2D games might not, though even that varies.
All of the above is far more complex than setting up a dual boot. A dual boot can be as simple as having two different drives and picking which to boot from in the UEFI or BIOS firmware. I don’t understand why you’d think a high-tech solution like virtualization would be less complicated than that.
There are basically three types of virtualization in classical thinking: Type 1, Type 2, and Type 3. KVM is none of these. With Type 1 there is no operating system running bare metal; only the hypervisor itself runs on the bare metal. Everything else, including the management tools for the hypervisor, runs in guest OSes. Hyper-V, ESXi, and anything using Xen are great examples. Type 2 is where you have virtualization software running inside a normal OS.

KVM is special because it’s a hypervisor running in the same CPU ring and privilege level as the full Linux kernel. It’s like a Type-1 hypervisor running at the same time as a normal OS in the same space. This means it behaves somewhat like a Type 1 and somewhat like a Type 2: it’s bare metal just like a Type 1 would be, but it has to share resources with Linux processes and other parts of the Linux kernel. You could kind of say it’s a Type 1.5.

It’s not the only hypervisor these days to use that approach, and the Type 1/2/3 terminology kind of breaks down in modern usage anyway. Modern virtualization has gotten a bit too complex for simplifications like that to always apply. Type 3 had to be added to account for containers, for example. This ends up getting weird when you have modern Linux systems that get to be a Type-1.5 hypervisor while also being a Type 3 at the same time.
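The “module in the kernel” part is easy to see on any Linux box; none of this needs root, and it just shows that the hypervisor really is living inside the running kernel:

    # CPU virtualization extensions present? (vmx = Intel VT-x, svm = AMD-V)
    grep -E -c '(vmx|svm)' /proc/cpuinfo
    # KVM modules loaded: kvm plus kvm_intel or kvm_amd
    lsmod | grep kvm
    # the device node userspace (QEMU, libvirt) talks to
    ls -l /dev/kvm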
I think you misunderstood me. I said “Every game I have attempted to run has just worked and they seem to run just as well as they did in Windows, so I guess I’m lucky I don’t need to really worry about dual booting or VMs.”
The games I play do need GPU performance. Cyberpunk 2077, Red Dead Redemption 2, No Man’s Sky, The Outer Worlds, etc. I’m not running them in a VM, I’m running them through Steam or Heroic Games Launcher.
Because once you have everything set up properly, all you would need to do to play a game that you couldn’t play in Linux is fire up the VM and play it. In a dual boot situation you would have to reboot your computer into a whole different OS and then play the game. It wouldn’t be a massive difference, but it would be more convenient. Plus it would be contained, so there would be no way for it to mess with your bootloader or whatever. Clearly it’s more complicated than I had originally thought.
Ok, now you got me curious. What is the distinction between that and how I originally described it?
From my admittedly layman understanding, it kinda seems like what you said and how I described it are pretty much the same thing.
An OS or a hypervisor can run on bare metal. If I have Windows running in KVM, KVM is running bare metal but Windows isn’t. Ditto with ESXi or Hyper-V. In the case of your setup, Linux and KVM are both bare metal, but Windows isn’t. KVM, ESXi, and Xen are always running at a privilege level above their guests. Does this make sense?
The difference between KVM and the more conventional Type 1 hypervisors is that a conventional Type 1 can’t run alongside a normal kernel. So with Linux and KVM, both Linux and KVM are bare metal. With Linux and Xen, only Xen is bare metal, and Linux is a guest. Likewise, if you have something like Hyper-V or WSL2 on Windows, then Windows is actually running as a guest OS, as is Linux or any other guests you have; only Hyper-V is running natively. Some people still consider KVM a Type 1, since it is running bare metal itself, but you can see how it’s different from the model other Type 1 hypervisors use. It’s a naming issue in that regard.
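A quick way to see that distinction from inside whatever OS you happen to be in: systemd-detect-virt reports “none” when the kernel is running on bare metal and names the hypervisor when it’s a guest.

    # on the Linux host (bare metal, even with the KVM modules loaded)
    systemd-detect-virt    # prints: none
    # inside a Linux guest running under that same KVM
    systemd-detect-virt    # prints: kvm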
It might help to read up more on virtualization technology. I am sure someone can explain this stuff better than me.