An Ubuntu server running VirtualBox has globally-visible, routable static IPs N through N+5 available to it. It is using IP N itself, and has lovely internet access via IP Q (on a completely different subnet, on the same ethernet card).
A virtual machine is created with Bridged networking to eth0, using IP N+1.
It is accessible from the internet via RDP to the *host server*, because that's how VirtualBox's RDP works - which means I'm connecting to the host, not the VM.
It is accessible via ping and SSH *from* the host server, via IP.
It can ping and SSH *to* the host server, via IP.
It can't get any further than the host server, and nothing from the outside can reach it.
"route -n" on the host and on the VM produces sensible-looking results that perfectly match a working, identical configuration (with different IPs, of course).
This is a fresh install, on a brand new machine.
Telling the host server that *it* is IP N+1 (or N+2, or N+3, or whatever) results in perfectly good network access to and from those IPs.
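(That test was just a temporary alias on the host's bridged interface - a sketch with placeholder addresses and an assumed /29 prefix; substitute the real values:)

    # On the host: temporarily claim N+1 on the bridged interface
    sudo ip addr add <N+1>/29 dev eth0
    ping -c 3 <something on the outside>   # works fine when the host owns the IP
    sudo ip addr del <N+1>/29 dev eth0     # hand it back before the VM wants it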
Any ideas?
EDIT: The problem is definitely routing of some sort. The host happily bounces out via IP Q, and traceroute from the outside to N comes in to N via Q no problem - but the virtual machine can't ping Q. So when the VM connects to the outside world, it goes to N (its gateway), and then the host doesn't pass that along to Q and out to the intertubes.
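(With hindsight, the quick check on the host is whether it's willing to forward packets at all - a sketch, assuming eth0 is the bridged interface carrying the N block:)

    # On the host: is IPv4 forwarding enabled? 0 means the kernel quietly
    # drops anything that arrives for an address that isn't its own.
    cat /proc/sys/net/ipv4/ip_forward

    # Watch the VM's pings arrive on the bridged interface; with forwarding
    # off they show up here but never get passed along toward Q.
    sudo tcpdump -ni eth0 icmp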
EDIT2: IPv4 Packet Forwarding in /etc/sysctl.conf FTW.
Fixed it myself. I love you guys, sometimes just ASKING the question is enough to jog me through figuring out where to look.
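For anyone who hits the same wall later, the change boils down to roughly this (standard sysctl stuff, shown as a sketch):

    # /etc/sysctl.conf on the host: uncomment or add this line
    net.ipv4.ip_forward=1

    # Apply it without rebooting, then confirm it took
    sudo sysctl -p
    cat /proc/sys/net/ipv4/ip_forward   # should now print 1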
Date: 2011-06-29 05:14 pm (UTC)
More to the point, trying to tell the virtual machine at N+1 that it needs to route via Q is... actually slightly awkward. I'm not sure what combination of network settings I would use.
It's definitely something to look into for the future (if only so I can get IP N back from the server and use it for another VM), but RIGHT NOW the challenge was "duplicate this other setup that works".
And in the other setup, the host server acts as a gateway to the VMs.
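(One combination that might do it - strictly a sketch, untested here, assuming the VM's interface is eth0 and that Q will answer ARP on the same wire: give the VM an explicit on-link host route to Q, then default via it.)

    # On the VM (placeholder addresses): Q is reachable directly on this
    # wire even though it's in a different subnet...
    sudo ip route add <Q>/32 dev eth0
    # ...so it can now be used as the default gateway.
    sudo ip route replace default via <Q> dev eth0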
Date: 2011-06-29 05:17 pm (UTC)
(Again, a hazard of copying a working setup on a deadline: you wind up copying the original guy's half-assed hacks.)
Date: 2011-06-29 09:13 pm (UTC)
http://en.wikipedia.org/wiki/Rubber_duck_debugging
Date: 2011-06-30 12:19 am (UTC)
Which is a better result than the duck.