[#] Tue Nov 24 2020 20:59:18 UTC from ParanoidDelusions


 

Tue Nov 24 2020 14:29:53 EST from Nurb432 @ Uncensored

Only gotcha I ran into was I didn't read the specs on a batch of two I bought, so I got the lower-end i5. Still an OK CPU and nothing wrong with them, but now they're mismatched to the four others in my farm. I ended up with a 4570T on that second batch instead of a 4590T. Totally my fault: I went back and read the description of what I ordered, and sure enough, it was correct; I was just careless.

Also be sure they include power packs; that series uses a proprietary plug on the computer side. They're common as dirt (even on Amazon), but it means you can't scrounge one out of the parts box if you forget.

Not all models have WiFi/BT (but they all have the socket). So again, if you need it, be sure to read closely or have a spare card somewhere.

I bought with minimal RAM and added two 16 GB modules, and bought them diskless so I could add a 2 TB SSD. I did not go for the ones with the CD caddy; pointless for me.

For such cheap machines, I have been quite pleased with them.

 

I found someone locally selling off a bunch of NUCs they had hooked up to LCDs in their offices. $125 for an i5. Not sure what generation, how big the storage, what amount of RAM. Doesn't really matter. It'll be more powerful than the Pi without a much bigger physical footprint. 




[#] Wed Nov 25 2020 14:08:40 UTC from IGnatius T Foobar


And any suggestions on a very inexpensive NUC that would be a good
replacement? 

If all you're running is Citadel you can use almost any of them. I have an old Celeron-based NUC that is absolutely atrocious as a desktop in this era, but it runs server workloads like Citadel just fine.

Also keep an eye out for Jetway fanless computers. They have a more "industrial control" vibe to them than the NUC but they are good computers too.

Or you could bring your game up to the state-of-the-2010s and run virtual machines :)

[#] Wed Nov 25 2020 15:19:43 UTC from ParanoidDelusions


 

Wed Nov 25 2020 09:08:40 EST from IGnatius T Foobar @ Uncensored
And any suggestions on a very inexpensive NUC that would be a good
replacement? 

If all you're running is Citadel you can use almost any of them. I have an old Celeron-based NUC that is absolutely atrocious as a desktop in this era, but it runs server workloads like Citadel just fine.

Also keep an eye out for Jetway fanless computers. They have a more "industrial control" vibe to them than the NUC but they are good computers too.

Or you could bring your game up to the state-of-the-2010s and run virtual machines :)

OoOooh. 

With the i5 I set up, I could totally run a virtual machine running Linux and run Citadel within that. Which would make backups and moving the machine around dead simple. 

That isn't a terrible idea. 




[#] Wed Nov 25 2020 17:18:41 UTC from Nurb432


For what it's worth, I run Proxmox on my cluster. The overhead is minimal, and it's got a great enterprise-ready web GUI (and other features, like integrated Ceph).



[#] Thu Nov 26 2020 04:42:06 UTC from ParanoidDelusions


 

Wed Nov 25 2020 12:18:41 EST from Nurb432 @ Uncensored

For what it's worth, I run Proxmox on my cluster. The overhead is minimal, and it's got a great enterprise-ready web GUI (and other features, like integrated Ceph).



I checked out their webpage. I'm going back and forth with the idea of running it in a VM. Lots of advantages... 

But there's a learning curve for me if I put it on a Linux box. If I put it on Windows, then if there's some sort of power outage or system crash/BSOD, I'm not sure how easy it is to get everything automatically up and running again.

The more robust I make it, the more administration overhead I add. The problem with running your own servers is that you become your own IT department.

One thing I can say about the Pi - it has been rock solid. If Citadel crashes, it just restarts - the OS is super stable - and it is *simple*. Not a lot of maintenance or administration necessary. 

 



[#] Thu Nov 26 2020 05:01:22 UTC from IGnatius T Foobar


Proxmox VE was awesome when I used it, and that was about nine years ago. I ran a five-node cluster with shared storage. I can only assume it's gotten better since then.

If you want to dedicate a host computer to running Internet-facing servers on virtual machines, the setup is easy: just plug the VPN router into the computer and go. If you need it on both the Internet and the local network, you can do that with two Ethernet interfaces; just make sure to disable IPv6 on the bridge, or it *will* pick up an address and use it when you probably intended your client-side web surfing to use the local network.

As previously mentioned, I run the Internet side on a separate VLAN instead of a separate interface. It works great but it requires enough knowledge of Cisco IOS to know how to configure subinterfaces and move L2TP tunnel endpoints around ... oh, and also how to reset the administrator password without wiping the config or bricking the router :)

If you do go the virtual route, make sure /var/lib/libvirt/images is on a filesystem formatted with btrfs. That way you get snapshots for free and you can backup your virtual machines at block level without shutting them down.
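For reference, pinning down that IPv6 behavior comes down to a couple of sysctl settings. This is a sketch only: it assumes the bridge is named vmbr0 (names vary by setup), and the file path is just a conventional choice.

```
# /etc/sysctl.d/90-bridge-no-ipv6.conf -- hypothetical file name
# Keep the host bridge from autoconfiguring a routable IPv6 address
net.ipv6.conf.vmbr0.disable_ipv6 = 1
net.ipv6.conf.vmbr0.accept_ra = 0
```

Apply with `sysctl --system` (or reboot); the VMs behind the bridge are unaffected, since the setting applies only to the host's own interface.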

[#] Thu Nov 26 2020 09:40:42 UTC from ParanoidDelusions


 

Thu Nov 26 2020 00:01:22 EST from IGnatius T Foobar @ Uncensored
ProxMox VE was awesome when I used it, and that was about nine years ago.
I ran a five node cluster with shared storage. I can only assume it's gotten better since then.

If you want to dedicate a host computer to running Internet-facing servers on virtual machines, the setup is easy, just plug the VPN router into the computer and go. If you need to have it on both the Internet and the local network, you can do that with two Ethernet interfaces; just make sure to disable IPv6 on the bridge or it *will* pick up an address and use it, when you probably intended for your client-side web surfing to use the local network.

As previously mentioned, I run the Internet side on a separate VLAN instead of a separate interface. It works great but it requires enough knowledge of Cisco IOS to know how to configure subinterfaces and move L2TP tunnel endpoints around ... oh, and also how to reset the administrator password without wiping the config or bricking the router :)

If you do go the virtual route, make sure /var/lib/libvirt/images is on a filesystem formatted with btrfs. That way you get snapshots for free and you can backup your virtual machines at block level without shutting them down.

Yeah... CCSE/CCSA was something I skipped. ;) 

But I've already got the Pi multihomed. The NUC has built-in Ethernet and WiFi... so that is simple to set up.

I'll play around with it. I do like the idea of having things in a VM and having them take snapshots and having live backups. 



[#] Thu Nov 26 2020 19:01:29 UTC from Nurb432


Yes, PVE has advanced a lot over the years. I moved to it a few years ago, for both myself and clients, when Citrix started yanking features out of 'free' XenServer. A third-party company that sells management tools did take all the bits and pieces and put the features back in, but as I liked PVE more anyway once I got into it, I didn't go back. (Built-in web management, both fat VMs and containers, and native Ceph support won the day. AND it's basically Debian under the hood.)

They now have an integrated backup solution for PVE as well; it was just released this summer.

 

Thu Nov 26 2020 00:01:22 EST from IGnatius T Foobar @ Uncensored
ProxMox VE was awesome when I used it, and that was about nine years ago.
I ran a five node cluster with shared storage. I can only assume it's gotten better since then.

If you want to dedicate a host computer to running Internet-facing servers on virtual machines, the setup is easy, just plug the VPN router into the computer and go. If you need to have it on both the Internet and the local network, you can do that with two Ethernet interfaces; just make sure to disable IPv6 on the bridge or it *will* pick up an address and use it, when you probably intended for your client-side web surfing to use the local network.

As previously mentioned, I run the Internet side on a separate VLAN instead of a separate interface. It works great but it requires enough knowledge of Cisco IOS to know how to configure subinterfaces and move L2TP tunnel endpoints around ... oh, and also how to reset the administrator password without wiping the config or bricking the router :)

If you do go the virtual route, make sure /var/lib/libvirt/images is on a filesystem formatted with btrfs. That way you get snapshots for free and you can backup your virtual machines at block level without shutting them down.

 



[#] Fri Nov 27 2020 12:41:45 UTC from ParanoidDelusions


 

Thu Nov 26 2020 14:01:29 EST from Nurb432 @ Uncensored

Yes, PVE has advanced a lot over the years. I moved to it a few years ago, for both myself and clients, when Citrix started yanking features out of 'free' XenServer. A third-party company that sells management tools did take all the bits and pieces and put the features back in, but as I liked PVE more anyway once I got into it, I didn't go back. (Built-in web management, both fat VMs and containers, and native Ceph support won the day. AND it's basically Debian under the hood.)

They now have an integrated backup solution for PVE as well; it was just released this summer.

 

Thu Nov 26 2020 00:01:22 EST from IGnatius T Foobar @ Uncensored
ProxMox VE was awesome when I used it, and that was about nine years ago.
I ran a five node cluster with shared storage. I can only assume it's gotten better since then.

If you want to dedicate a host computer to running Internet-facing servers on virtual machines, the setup is easy, just plug the VPN router into the computer and go. If you need to have it on both the Internet and the local network, you can do that with two Ethernet interfaces; just make sure to disable IPv6 on the bridge or it *will* pick up an address and use it, when you probably intended for your client-side web surfing to use the local network.

As previously mentioned, I run the Internet side on a separate VLAN instead of a separate interface. It works great but it requires enough knowledge of Cisco IOS to know how to configure subinterfaces and move L2TP tunnel endpoints around ... oh, and also how to reset the administrator password without wiping the config or bricking the router :)

If you do go the virtual route, make sure /var/lib/libvirt/images is on a filesystem formatted with btrfs. That way you get snapshots for free and you can backup your virtual machines at block level without shutting them down.

 



So, does it install as a standalone VM manager on the bare metal, or do you install Debian and then install it on top?

I'm at an impasse right now. I've got to take my Cit down to back it up; then, if the backup works, I have to upgrade it so that I can ctdlmigrate it from the Pi to the i5.

Plus, I've got to get Linux installed - or this - and configured before I do all that.

While relearning all the *nix things I've forgotten as I go. 

It is a bit overwhelming. I've got a lot of pre-reqs ahead of me that mean a lot of reading before I get to the doing. 

I imagine this might take a few attempts before I'm actually ready to go live on the i5 with the results. 

 



[#] Fri Nov 27 2020 12:55:13 UTC from Nurb432


It's a bare-metal installer. It's just that they based their core OS on Debian, not that you have to take a Debian install and stick PVE on top of it. Most people don't care what the OS is, but I like that it's based on something open, and something I'm familiar with if things go sideways. (In the Linux world I prefer Debian, for both technical and political reasons, even though they did finally cave to the abomination of systemd... grrr.)

With PVE, you don't know it's Linux unless you care to know. Nearly everything is done via a GUI, so it's really just an 'appliance' and the core OS is totally hidden. There might be a few obscure things that still need the CLI, like dropping servers out of a cluster, adding certbot, or changing repos if you don't buy a subscription, but daily life does not. Dropping hosts was a slight pain; I had to run four commands. :) Changing repos is a simple edit of a text file, and they tell you what to change.

Really, don't let the fact that it runs on Linux put you off. Most likely you will never have to worry about it.
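The repo change really is just a couple of lines in two files. A sketch, assuming PVE 6.x, which is based on Debian buster (the release name changes with each major version):

```
# /etc/apt/sources.list.d/pve-enterprise.list
# Comment out the subscription-only repo:
# deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise

# /etc/apt/sources.list.d/pve-no-subscription.list
# ...and add the free community repo in its place:
deb http://download.proxmox.com/debian/pve buster pve-no-subscription
```

After that, `apt update` pulls package lists from the no-subscription repo instead of erroring out on the enterprise one.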



[#] Fri Nov 27 2020 16:25:46 UTC from ParanoidDelusions


 

Fri Nov 27 2020 07:55:13 EST from Nurb432 @ Uncensored

It's a bare-metal installer. It's just that they based their core OS on Debian, not that you have to take a Debian install and stick PVE on top of it. Most people don't care what the OS is, but I like that it's based on something open, and something I'm familiar with if things go sideways. (In the Linux world I prefer Debian, for both technical and political reasons, even though they did finally cave to the abomination of systemd... grrr.)

With PVE, you don't know it's Linux unless you care to know. Nearly everything is done via a GUI, so it's really just an 'appliance' and the core OS is totally hidden. There might be a few obscure things that still need the CLI, like dropping servers out of a cluster, adding certbot, or changing repos if you don't buy a subscription, but daily life does not. Dropping hosts was a slight pain; I had to run four commands. :) Changing repos is a simple edit of a text file, and they tell you what to change.

Really, don't let the fact that it runs on Linux put you off. Most likely you will never have to worry about it.



Actually, I'm a Debian guy. I was at my peak back with Sarge and Potato - compiling and making my own kernels to add WiFi and sound support for non-free laptops, and doing other things that got me pretty far into the internals of Linux.

Here is the thing for me: most Linux users are also programmers and scripters, and they love all the advanced programming-oriented features of the various shells. But for someone more on the engineering side, like myself, those don't matter a lot, and there aren't many use cases for me to learn them. So a lot of the time, as more of a system engineer/architect/admin kind of guy, I don't know where to even start looking in Linux to get what I need done, done. The other thing is that the kind of people who love Linux write the WORST technical documentation. They assume a ton of prerequisite knowledge and tend to use very technical formats and language. This has improved as Ubuntu has made Linux a more end-user-accessible platform - and now it is easier to find YouTube videos and step-by-step "HowTo" guides with screenshots that don't explain WHY, but just tell you HOW.

But I don't hate Linux. I just don't look forward to how much you have to research on Linux before you *implement*. A lot of time is spent learning how to do the thing, and not as much time doing it - and usually something else it was ASSUMED you knew about throws a monkey wrench into it, so it's back to the online searches to figure out how to get over THAT hump you were supposed to know about, but didn't.

I've already set up Debian on the i5 I picked up... but it sounds like I need to download and burn an image of this and install it instead. Backing up the Pi was a simple thing, and I realize that running Citadel on Linux on bare metal is going to make backups far more complicated. But if I'm running in a VM, that will resolve itself. I assume that with two of these i5 NUCs, I could even set up a second machine and make the BBS highly available with failover. That would be pretty cool.

Thanks for pointing me in this direction. At this point I've got the Pi burned as an image, and I've tested the image on a different SD card, so I've got good redundancy if something happens to what I'm running right now. I'm going to take the upgrade and migration slowly and methodically, and will probably try a few different test environments before I push one to production.

 



[#] Fri Nov 27 2020 17:38:28 UTC from Nurb432


There is a way to install PVE onto an existing Debian, but honestly, it's not worth the trouble. Far easier to just do the supported thing and follow the bouncing ball from the installer ISO.

 

I suppose if you had some sort of setup you didn't want to lose, so you *had* to retain your original OS, it's an option - but in that case I personally don't feel you should be installing anything more than simple KVM on it. My desktop is set up like that, just 'raw KVM'. The reason is that I *have* to run Windows for work, so I run it in a VM, and only have it on when I need it. And since it's my daily-driver desktop, I really don't want to create a server out of it at the same time. And if I ever go back to the office again, the VM just gets copied over to a laptop. (Been home since March... management's last wild-ass guess is perhaps a late-spring return, on a 50% rotating schedule.)



[#] Fri Nov 27 2020 18:05:39 UTC from Nurb432


Oh, and really I'm a *BSD guy, and have been, but I will always choose the best tool for the job. Debian was/is still one of the more traditional Linux distributions out there, so when I need a penguin, that is the one I choose. However, since they decided to go down the systemd rabbit hole, I'm looking seriously into Devuan.

 

(And sorry for all the typos today; migraines do that to me.)



[#] Fri Nov 27 2020 22:29:31 UTC from ParanoidDelusions


 

Fri Nov 27 2020 12:38:28 EST from Nurb432 @ Uncensored

There is a way to install PVE onto an existing Debian, but honestly, it's not worth the trouble. Far easier to just do the supported thing and follow the bouncing ball from the installer ISO.

 

I suppose if you had some sort of setup you didn't want to lose, so you *had* to retain your original OS, it's an option - but in that case I personally don't feel you should be installing anything more than simple KVM on it. My desktop is set up like that, just 'raw KVM'. The reason is that I *have* to run Windows for work, so I run it in a VM, and only have it on when I need it. And since it's my daily-driver desktop, I really don't want to create a server out of it at the same time. And if I ever go back to the office again, the VM just gets copied over to a laptop. (Been home since March... management's last wild-ass guess is perhaps a late-spring return, on a 50% rotating schedule.)



I'm just playing around right now trying to see what fits. So far, this is going fairly swimmingly. 



[#] Fri Nov 27 2020 23:10:24 UTC from ParanoidDelusions


 

Fri Nov 27 2020 17:29:31 EST from ParanoidDelusions @ Uncensored

 

Fri Nov 27 2020 12:38:28 EST from Nurb432 @ Uncensored

I'm just playing around right now trying to see what fits. So far, this is going fairly swimmingly. 





The only problem I seem to have is that Proxmox doesn't want to see the non-free Intel WiFi in my Optiplex 3020. I added the non-free Debian repos and attempted an install with apt update && apt install firmware-iwlwifi, but that errors out - it seems to have to do with the way Proxmox manages the NIC. Installing iwlwifi wants to uninstall the managed NIC, and Proxmox warns that this is probably a really stupid idea and I shouldn't continue.
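(For reference, the non-free lines I added look like this - a sketch, assuming PVE 6.x on Debian buster; adjust the release name to match:)

```
# /etc/apt/sources.list
# Enable contrib and non-free so firmware-iwlwifi becomes visible to apt
deb http://deb.debian.org/debian buster main contrib non-free
deb http://security.debian.org/debian-security buster/updates main contrib non-free
```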

 

 



[#] Fri Nov 27 2020 23:55:24 UTC from ParanoidDelusions


 

Fri Nov 27 2020 18:10:24 EST from ParanoidDelusions @ Uncensored
 

The only problem I seem to have is that Proxmox doesn't want to see the non-free Intel WiFi in my Optiplex 3020. I added the non-free Debian repos and attempted an install with apt update && apt install firmware-iwlwifi, but that errors out - it seems to have to do with the way Proxmox manages the NIC. Installing iwlwifi wants to uninstall the managed NIC, and Proxmox warns that this is probably a really stupid idea and I shouldn't continue.

 

 



I see lots of people having trouble with this - but everyone is approaching it from wanting to bridge the WiFi for use as VM interfaces, and evidently, Proxmox can't do this.

But I just want it available to Proxmox for the management console. The VM itself can connect solely to my public network, as long as I can remote in to the Proxmox console over WiFi on my internal network. Does that make sense?



 



[#] Sat Nov 28 2020 02:22:13 UTC from Nurb432


If you are trying to have multiple systems using the same WiFi interface as a bridge, it's going to be problematic. It requires a router that can support that sort of connection, for starters, and a bit of funny business on the other side too. Most commodity routers don't like promiscuous mode on incoming wireless, I have found. Security risks and all, I guess.

I was in a similar boat when I was doing some testing. (I don't have wire running from where my lab is back to my 'network closet'. Long story.) After being frustrated for an afternoon, I ended up just digging a WiFi USB dongle out of the bin and attaching that to the VM. It was just a short-term experiment anyway, so I didn't need a 'full bridge', and didn't care that it was a slower connection, being an older dongle that pretty much everyone has drivers for out of the box.

Later I broke down and bought a couple of those Ethernet-over-powerline adapters for when I was doing bench testing and WiFi wasn't available for some reason. (But I figure your situation is different.)



[#] Sat Nov 28 2020 05:27:06 UTC from ParanoidDelusions


 

Fri Nov 27 2020 21:22:13 EST from Nurb432 @ Uncensored

If you are trying to have multiple systems using the same WiFi interface as a bridge, it's going to be problematic. It requires a router that can support that sort of connection, for starters, and a bit of funny business on the other side too. Most commodity routers don't like promiscuous mode on incoming wireless, I have found. Security risks and all, I guess.

I was in a similar boat when I was doing some testing. (I don't have wire running from where my lab is back to my 'network closet'. Long story.) After being frustrated for an afternoon, I ended up just digging a WiFi USB dongle out of the bin and attaching that to the VM. It was just a short-term experiment anyway, so I didn't need a 'full bridge', and didn't care that it was a slower connection, being an older dongle that pretty much everyone has drivers for out of the box.

Later I broke down and bought a couple of those Ethernet-over-powerline adapters for when I was doing bench testing and WiFi wasn't available for some reason. (But I figure your situation is different.)



Ok... so... that isn't what I'm trying to do - it seems like what most people are trying to do, though. 

I'm trying to assign the management console for Proxmox to the Wireless NIC. The Ethernet will host the VM's public IP address. 

I might be unclear on how this would work. 

So... to understand this... I've got a regular ISP-provided router. From one of its Ethernet ports, it goes to a Cisco router. That Cisco router tunnels by VPN to my public IP provider (which is different from my regular ISP). An Ethernet cable hooks the router up to my Raspberry Pi, and the Pi is configured with a public IP address.

The Raspberry Pi also connects to the WiFi on the REGULAR ISP-provided router, and gets an internal DHCP lease there.

In this way, I can connect to the Raspberry Pi internally through the WiFi, or come in to it externally from the public IP address, and each IP is hosted on a separate NIC. 

So it isn't actually a bridge. The external IP address doesn't see the internal IP address, the WiFi doesn't see the wired. 

What I want to do is be able to connect to the Proxmox console on the Wireless, internal IP address... but have the VM visible on the wired, external IP address. 

Is this even possible? 
I've managed to get the wireless NIC up, and it is getting an internal DHCP lease from the regular ISP router - but if I try to ping that gateway, I get "network is unreachable" 

ip addr returns:

3: wlp3s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue state DOWN group default qlen 1000

    link/ether hex

 

    inet ***.***.0.6/24 brd ***.***.0.255 scope global dynamic wlp3s0

With the internal IP address assigned - but state is DOWN, and ifup returns....

 ifup wlp3s0

 

ifup: interface wlp3s0 already configured



Actually, after an ifdown and then ifup - now I'm able to ping the gateway, but the vmbr0 bridge isn't able to see the wlp3s0 interface.  

Trying to remove it generates an error, and it can't be edited, either. I added wlp3s0 to Ports/Slaves in vmbr0 manually.
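For comparison, the way I'd expect the WiFi stanza to look in /etc/network/interfaces is roughly this - a sketch only: it assumes the wpasupplicant package is installed, and the SSID and passphrase are placeholders:

```
# /etc/network/interfaces -- manage wlp3s0 with ifupdown + wpa_supplicant
auto wlp3s0
iface wlp3s0 inet dhcp
        wpa-ssid HomeNetwork
        wpa-psk changeme
```

With `auto` set, the interface comes up at boot instead of needing a manual ifdown/ifup cycle; note that ifupdown won't bring the link up just because an address was assigned by other means.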

Keep in mind, I know just enough to be dangerous here - as if you hadn't already figured that much out. :) 

 



[#] Sat Nov 28 2020 05:45:21 UTC from ParanoidDelusions


Maybe I'm overcomplicating this. 
vmbr0 is currently... 

I'm too tired to think this through tonight. But basically... right now I'm building the Proxmox machine on an internal, non-routable subnet. 

I've assigned vmbr0 an IP address, gateway, subnet, and DNS on that internal network.

And what I want to do is put it on a different internal, non routable subnet where the Pi is currently connected, so I can do ctdlmigrate from the Pi to the VM on Proxmox. 

If it were bare metal, it would be a piece of cake. I'd set it up just like the Pi: the external wired network would have a public IP address, and the WiFi would be connected to the same internal non-routable network the Pi is connected to wirelessly.

Is this what I need to be doing? 

3.3.5. Routed Configuration

Most hosting providers do not support the above setup. For security reasons, they disable networking as soon as they detect multiple MAC addresses on a single interface.

Tip: Some providers allow you to register additional MACs through their management interface. This avoids the problem, but can be clumsy to configure because you need to register a MAC for each of your VMs.

You can avoid the problem by “routing” all traffic via a single interface. This makes sure that all network packets use the same MAC address.

[diagram: default routed network setup]

A common scenario is that you have a public IP (assume 198.51.100.5 for this example), and an additional IP block for your VMs (203.0.113.16/29). We recommend the following setup for such situations:

 

auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
        address  198.51.100.5
        netmask  255.255.255.0
        gateway  198.51.100.1
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp


auto vmbr0
iface vmbr0 inet static
        address  203.0.113.17
        netmask  255.255.255.248
        bridge-ports none
        bridge-stp off
        bridge-fd 0


[#] Sat Nov 28 2020 06:00:48 UTC from ParanoidDelusions


I mean, ultimately what I want is a dual-homed machine - with the wired NIC hooked up to the Cisco (acting as a switch), and the VM assigned a public IP address on that network...

The wireless NIC hooked up to my internal network, for management from the console. 

I don't really want the wired NIC bridged to the wireless one, and I don't care if the VM is accessible from the wireless, internal NIC. 

I'd prefer that the Proxmox console not be reachable over the public, wired network, either. 

Just the Proxmox console via the internal WiFi. If I can connect to the Proxmox console over the internal network - then I can open a shell and telnet to the VM, or open a console on the VM and just connect to localhost. 


Would this work better if I just didn't use the WiFi, and instead plugged a USB Ethernet adapter into the regular household ISP's router and plugged the built-in Ethernet into the Cisco?
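In /etc/network/interfaces terms, what I'm picturing is roughly this - a sketch only: the interface names, SSID, and passphrase are assumptions, and the WiFi stanza needs the wpasupplicant package:

```
# Wired NIC: no address of its own, just a port of the VM bridge
auto eno1
iface eno1 inet manual

# Bridge for the VM's public side; the host takes no IP here,
# so the Proxmox console is not reachable from the wired network
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

# WiFi NIC: internal DHCP lease, carries the management console
auto wlp3s0
iface wlp3s0 inet dhcp
        wpa-ssid HomeNetwork
        wpa-psk changeme
```

The two networks never touch: the VM attaches to vmbr0 and gets its public address, while the web console binds to whatever address wlp3s0 picks up internally.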

 


