Sunday, November 3, 2013

Creating your networks

The goal is to create a Tenant Network to which the instances will be connected, an External Network that is connected to the Internet, and a Router that connects these two networks to give your instances access to the Internet.



In the Admin tab, create a Network for the project/tenant. Do not check External Network, since this is a Tenant Network.


Create a Subnet for the Tenant Network. Since this is a private address space, you can use any network address you choose.


In the Subnet Detail tab, enable DHCP so your VMs get their IP addresses automatically, and also set up the DNS servers.
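If you prefer the command line over the Dashboard, a roughly equivalent Quantum CLI sequence is sketched below. The network and subnet names are examples, the tenant CIDR is the 192.168.1.0/24 network used later in this post, and the DNS servers are the ones from my setup; exact flag spellings can vary with your client version.

# quantum net-create private-net
# quantum subnet-create --name private-subnet private-net 192.168.1.0/24 \
    --dns_nameservers list=true 10.112.116.138 10.112.116.139

DHCP is enabled by default when a subnet is created, so no extra flag is needed for it.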


Create an External Network that will provide Internet access to your VM instances. Check External Network here, since this is an External Network.


Create an External Subnet for the External Network. My External Subnet is 10.112.252.0/24. Although this is a private address, it is controlled by my Enterprise Networking Team and is routable internally within the Enterprise Intranet; it gets NAT'ed at my Enterprise Network edge when going to the Internet. This is the network address I am using on my eth0 interfaces. The external bridge br-ex that we created in the previous post is going to bridge the OpenStack traffic to the External Network. The Gateway is whichever gateway you use in your network.



Configure the Subnet Detail for your External Subnet. Disable DHCP for your External Network: most likely another DHCP server is already running on this network, and your eth0 interfaces are getting their IP addresses from it; in any case, you don't need to run a DHCP server here. You will also need to set up an IP allocation pool for this External Subnet. If you have some static IPs requested from your Network Administrator, you can use those for your pool.
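A hedged CLI equivalent, with a hypothetical allocation pool of 10.112.252.200-10.112.252.220 carved out of my static range (substitute the addresses your Network Administrator gave you):

# quantum net-create ext-net --router:external=True
# quantum subnet-create --name ext-subnet --disable-dhcp \
    --allocation-pool start=10.112.252.200,end=10.112.252.220 \
    --gateway 10.112.252.253 ext-net 10.112.252.0/24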


Create a Router for the Tenant.


Set the Gateway for the Router to the External Network that was created before. Also, add an interface to the router that is connected to the Tenant Private Network created earlier.
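The CLI version, reusing the example names from the sketches above:

# quantum router-create tenant-router
# quantum router-gateway-set tenant-router ext-net
# quantum router-interface-add tenant-router private-subnet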


Now, your network topology will look like this.


Your instance is connected to the Tenant Network. The router connects the Tenant Network to the External Network, and it does the NAT'ing that gives your instances access to the Internet. In this case, the 192.168.1.0/24 addresses are NAT'ed to the 10.112.252.0/24 address space. This traffic is NAT'ed again to a real public IP at the Enterprise Intranet edge before it goes to the Internet, so in my setup a double NAT is happening. If your external network has Internet-routable addresses, then there is only one NAT.

Add rules to the Default Security Group to allow ICMP, TCP and UDP traffic to the instances.
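From the CLI, rules roughly like the following open ICMP and all TCP/UDP ports for ingress; when no remote prefix is given the rule applies from any source (0.0.0.0/0), so tighten this outside of a lab:

# quantum security-group-rule-create --protocol icmp --direction ingress default
# quantum security-group-rule-create --protocol tcp --direction ingress \
    --port-range-min 1 --port-range-max 65535 default
# quantum security-group-rule-create --protocol udp --direction ingress \
    --port-range-min 1 --port-range-max 65535 default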



Check whether your instances can ping addresses on the Internet.
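From the instance console, a quick test could look like this (the first address is my external gateway; the second ping also exercises the DNS servers configured on the subnet):

$ ping -c 3 10.112.252.253
$ ping -c 3 openstack.org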


Now your instances can access the Internet. If your instances need to be reachable from the Internet, for example because you are running a web server, then you need to associate a floating IP with the instance. Floating IPs are allocated from the static IP pool that was configured with the External Subnet. Allocate a floating IP from the pool, and then associate it with the instance. The instance will then be accessible from the Internet.
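The CLI equivalent, where the floating IP ID comes from the output of the first command and <port-id> is the instance's Quantum port (both placeholders here; the port can be found with quantum port-list):

# quantum floatingip-create ext-net
# quantum floatingip-associate <floating-ip-id> <port-id>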


Now SSH to your instance from outside. In my case I am still within the Enterprise Intranet. If your External Network is part of your DMZ, and you have addresses within that DMZ range, then your instances will be accessible from outside the Enterprise Firewall.
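For example, assuming a CirrOS image and a floating IP of 10.112.252.201 from the hypothetical pool above:

$ ssh cirros@10.112.252.201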




Setting up the L3 Agent and getting your Virtual Machine Instances to talk to the Internet

The Rackspace Private Cloud cookbooks set up the OpenStack deployment without installing the L3 Agent, which is required to route network traffic to the Internet. By default, the OpenStack Neutron Networking deployment is set up with the Virtual Machine instances in isolated networks, in all three modes: Flat, VLAN and GRE. You can create multiple private networks for projects and create routers to route traffic between these private networks, but the VMs cannot reach the Internet. The Rackspace Private Cloud team did this because the L3 Agent is not fully supported in High Availability (HA) deployment modes.

I am not setting up HA. GRE mode is quite easy to set up because the configuration required in the physical network is minimal compared to VLAN networking.

This is my setup:



To setup the L3 Agent follow the steps below:

Step 1: On the Network Node, install the L3 Agent:

# apt-get -y install quantum-l3-agent

Step 2: Enable IP forwarding on the Compute and Network Nodes as shown below:

# sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf 
# sysctl net.ipv4.ip_forward=1
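You can confirm that the setting took effect:

# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1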

Step 3: On the Network Node, create the external bridge br-ex that will be used to access the Internet:

# ovs-vsctl add-br br-ex

Step 4: For this step you need console access to the Network Node; SSH will not do, because you will be reconfiguring the eth0 interface that carries the SSH and management traffic. Doing this over SSH is like sawing off the branch you are sitting on.

Modify your eth0 configuration in the /etc/network/interfaces file to look like this:

auto eth0
iface eth0 inet manual
# eth0 carries no IP address of its own; br-ex will hold the IP
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down


Move the IP configuration you had on eth0 to the br-ex interface:

auto br-ex
iface br-ex inet static
address 10.112.252.245
netmask 255.255.255.0
gateway 10.112.252.253
dns-nameservers 10.112.116.138 10.112.116.139 10.112.64.1


Add the eth0 interface to the br-ex bridge:

# ovs-vsctl add-port br-ex eth0
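Before rebooting, you can check that eth0 is now a port on the bridge:

# ovs-vsctl list-ports br-ex
eth0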

Reboot the Network Node. After the reboot you should get your SSH connectivity to the Network Node back.

Step 5: As a troubleshooting step, restart all the Quantum services on the Network Node:

# cd /etc/init.d/; for i in $( ls quantum-* ); do sudo service $i restart; done

Using GRE based Neutron Networking with Rackspace Private Cloud

If you used the environment file given in the previous post, then we are setting up Neutron/Quantum Networking. By default, the Rackspace Private Cloud cookbooks set up VLAN-based networks. To set up a GRE network, you must also add a network_type attribute to your environment file.

"override_attributes": {
    "nova": {
        "network": {
            "provider": "quantum"
        }
    },
    "quantum": {
        "ovs": {
            "network_type": "gre"
        }
    }

}

The environment file will now look like this:

root@chef-workstation:~/chef-cookbooks/environments# cat grizzly-quantum-1.json 
{
  "name": "grizzly-quantum-1",
  "description": "Chef-server Grizzly Quantum Multinode environment",
  "json_class": "Chef::Environment",
  "chef_type": "environment",
  "override_attributes": {
    "nova": {
      "libvirt": { "virt_type": "qemu" },
      "network": {
        "provider": "quantum"
      }
    },
    "quantum": {
         "ovs": {
              "network_type": "gre"
         }
    },
 

    "mysql": {
      "allow_remote_root": true,
      "root_network_acl": "%",
      "server_root_password": "fr3sca",
      "server_debian_password": "fr3sca"
    },
    "osops_networks": {
      "nova": "10.112.252.0/24",
      "public": "10.112.252.0/24",
      "management": "10.112.252.0/24"
    }
  }
}


After making this change to the environment file, upload the environment file to the Chef-Server:

# knife environment from file grizzly-quantum-1.json

Now run the chef-client on the nodes: Controller, Network and Compute. Once the chef-client has run on all the nodes, Neutron Networking will be set up to use GRE instead of VLAN.

Wednesday, October 30, 2013

Log into the Horizon Dashboard

If you had any errors and the chef-client terminated, just try re-running the chef-client on the nodes. Sometimes the chef-client encounters errors due to network latency or other transient issues while all the packages are being installed.

Once the chef-client has completed successfully on all the nodes, run the following command on the Network and Compute Nodes to connect them to the physical network ph-eth1 that is created by the Rackspace cookbooks:

# ovs-vsctl add-port br-eth1 eth1

Connect to the Horizon Dashboard by pointing your browser at the eth0 IP address of the Controller Node. You should see the Horizon Dashboard. The default administrator account is:

Username: admin
Password: secrete

Sunday, October 27, 2013

Run chef-client on the nodes

Check the configuration of the nodes:


# knife node show setup1-controller
Node Name:   setup1-controller
Environment: grizzly-quantum-1
FQDN:        setup1-controller
IP:          10.112.252.244
Run List:    role[single-controller]
Roles:
Recipes:
Platform:    ubuntu 12.04
Tags:



# knife node show setup1-network
Node Name:   setup1-network
Environment: grizzly-quantum-1
FQDN:
IP:          10.112.252.245
Run List:    role[single-network-node]
Roles:
Recipes:
Platform:    ubuntu 12.04
Tags:



# knife node show setup1-compute
Node Name:   setup1-compute
Environment: grizzly-quantum-1
FQDN:
IP:          10.112.252.246
Run List:    role[single-compute]
Roles:
Recipes:
Platform:    ubuntu 12.04
Tags:



Now we are ready to run the "chef-client" command on the nodes. Run it on the Controller, Network and Compute Nodes one by one, in that order.
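For example, SSH to each node and run:

# chef-client

first on setup1-controller, then on setup1-network, and finally on setup1-compute.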

Add the roles to the Chef nodes runlist and assign the nodes to the environment

Use the "knife node run_list add" command to add the role to each of the Chef nodes:

# knife node run_list add setup1-controller 'role[single-controller]'
# knife node run_list add setup1-network 'role[single-network-node]'
# knife node run_list add setup1-compute 'role[single-compute]'



Use the "knife node show" command to see the node configuration:

# knife node show setup1-controller
Node Name:   setup1-controller
Environment: _default
FQDN:        setup1-controller
IP:          10.112.252.244
Run List:    role[single-controller]
Roles:
Recipes:
Platform:    ubuntu 12.04
Tags:



We see that the Environment is "_default". We need to change it to the "grizzly-quantum-1" environment we created before.

Use the "knife node edit" command to edit the node configuration and change the environment from "_default" to "grizzly-quantum-1".

The node configuration in the "_default" environment looks like this:

{
  "name": "setup1-controller",
  "chef_environment": "_default",
  "normal": {
    "tags": [

    ]
  },
  "run_list": [
    "role[single-controller]"
  ]
}



We need to replace "_default" with "grizzly-quantum-1". As here:

{
  "name": "setup1-controller",
  "chef_environment": "grizzly-quantum-1",
  "normal": {
    "tags": [

    ]
  },
  "run_list": [
    "role[single-controller]"
  ]
}



The same needs to be done for the other two nodes: setup1-network and setup1-compute.

Setup the Chef Environment

In the chef-cookbooks/environments directory, set up the environment JSON file. For example, create a grizzly-quantum-1.json file with the following contents:

{
  "name": "grizzly-quantum-1",
  "description": "Chef-server Grizzly Quantum Multinode environment",
  "json_class": "Chef::Environment",
  "chef_type": "environment",
  "override_attributes": {
    "nova": {
      "libvirt": { "virt_type": "qemu" },
      "network": {
        "provider": "quantum"
      }
    },
    "mysql": {
      "allow_remote_root": true,
      "root_network_acl": "%",
      "server_root_password": "fr3sca",
      "server_debian_password": "fr3sca"
    },
    "osops_networks": {
      "nova": "10.112.252.0/24",
      "public": "10.112.252.0/24",
      "management": "10.112.252.0/24"
    }
  }
}



You need to specify the network you are using on your eth0 interfaces in the "nova", "public" and "management" attributes. Change these attributes to the network address you are using:

"osops_networks": {
  "nova": "10.112.252.0/24",
  "public": "10.112.252.0/24",
  "management": "10.112.252.0/24"
}


Upload the environment to the Chef Server as below:

# knife environment from file grizzly-quantum-1.json


You can verify that the environment file has been uploaded to the Chef Server using the following command:

# knife environment list
_default
grizzly-quantum-1