I thought I’d do a short write-up on my experience with Solaris 10 Zones. So here goes.
Solaris Zones, or Containers, are a type of virtual machine created on top of a Solaris 10 host operating system. That’s right, it needs a host operating system, so it’s not something like ESX that installs on bare metal with VMs on top of that. Zones are one of the two types of virtual machines you can do with Solaris; the second being Logical Domains. The difference between the two is that LDOMs can only be installed on SPARC hardware, and can utilize the hardware in a different way (such as the built-in crypto units of the newer T1 and T2 chips). The specifics are out there, so I won’t get into them here.
Of the two, I like Zones because of their ease of setup. Both are easy, don’t get me wrong, but Zones just feel right.
The basic steps for setting up a zone are as follows:
- Install a machine with Solaris 10
- See that you have enough disk space locally, or map appropriate network disks
- Get network settings down before you start (hostname and IP, gateway, DNS, etc.)
- Create Zone, configure it, boot it, and configure some more
- Install any additional software
So let’s get started with the cheat sheet, shorthand, whatever you want to call it:
Choosing Zone Type
You’ve got your Solaris 10 installed on a machine, right? It’s time to choose what kind of zone to set up. There are two basic types: a whole-root zone and a sparse-root zone. The main difference is that a whole-root zone does not share any of the directories of the Global Zone. This means you can configure the system fully, like any physical machine with Solaris 10 on it. If you’ll be running a lot of services that rely on stuff happening in /usr/local, need to configure host-specific files that can’t be shared with other systems, or just want the system isolated for other reasons, go with the whole-root zone. It’ll take up more space (around 3-4 GB for a fully configured Solaris 10 system, plus anything you install on top).
A sparse-root zone shares directories with the host OS, meaning if you change something on the host, the change is immediately reflected in the sparse-root zones. This can be handy if you are running a system that needs constant changes, or are short on disk space. But it’s not good if you need the zone to work on its own, with its own configuration. It’s all shared, or well, most of it.
So back to your host OS! Make sure it’s patched up and so on. Install any applications you think you’ll need in every zone you plan to install. This would be stuff like favorite editors, wget, sudo, whatever you fancy. Don’t make it anything massive like Postfix, because it probably won’t translate too well into your zone, and will require extensive reconfiguration anyway.
Start out by deciding on a zone name, and initializing it:
zonecfg -z myzone
zonecfg is used whenever you want to play around with the settings of a zone, or do something fundamental with a zone, like delete it. The -z switch is followed by a zone name, and is used almost every time you want to specify which zone you are fiddling with.
After running this command, you’ll be thrown into a new prompt, the zonecfg prompt. It’ll look something like zonecfg:myzone>. Here, you can run commands that set the specifications of the zone, such as devices passed through from the host OS (also known as the Global Zone), filesystems you want to mount, network configuration and much more. I’ll go through the most important ones here.
Next, we’ll tell it to create a base configuration. For a whole-root zone, use:

create -b

For a sparse-root zone, use:

create

This creates a baseline configuration, which you can look at with the view command. The main difference is that plain create makes sure the correct directories are inherited from the Global Zone, while create -b makes sure nothing is inherited.
After this, we’ll add the zone path. This is where the operating system of the Zone will reside. Make sure you have space, so you don’t run out later. If you want redundancy, make sure you have all that sorted out at this point. If you notice something is missing or not right, you can always quit from zonecfg, and return to it using the same commands as above.
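With myzone as the zone name, and /zones as an example directory on the host (put it wherever you actually have the space), the setting looks like this:

set zonepath=/zones/myzone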
This places the myzone zone under the /zones directory on your host OS. This step is performed in both whole-root and sparse-root installations.
Next, we choose whether we want the zone to boot when the host OS does. This is usually a good idea, unless you have special requirements that I don’t know of. If the server goes down and your friendly neighborhood sysadmin boots it, the zones will come up with the host. This is good, and saves your Solaris guys a bunch of manual work.
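The property that controls this is autoboot:

set autoboot=true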
Then we add a network interface, since you probably want the zone to talk to other hosts, and not just sit there.
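From the zonecfg prompt:

add net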
This gets you into the network configuration. The prompt will say something like zonecfg:myzone:net. While in here, any commands you enter are directed toward the network configuration. Let’s add an address, a network card type, and a default route.
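The addresses here are placeholders; substitute your own, and your own interface type:

set address=192.168.1.50
set physical=bge0
set defrouter=192.168.1.1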
Self-explanatory, but make sure the physical card type matches that of your host system. In Solaris, interface names follow the driver, so use ifconfig -a on the host OS if you are unsure. Zone network adapters are created as virtual interfaces under your host interface; the first zone in my example will get an interface of bge0:1, and so forth. defrouter is not mandatory, but you can specify a default route here if you so choose. End the network configuration with the end command. You are now back at the main level of zonecfg.
After this, you may want to add an extra filesystem. It’s easy:
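As an example, with made-up paths (/export/zonedata on the host, showing up as /data inside the zone):

add fs
set dir=/data
set special=/export/zonedata
set type=lofs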
Okay, so first we enter the filesystem configuration with add fs. Then, we set the directory under which the mount is visible inside the zone. This can be confusing because of the next command, set special, which sets where to point on the host operating system. Why it’s called special eludes me; there’s nothing special about it. The last command sets the type to a loopback filesystem (lofs). Others are available; look them up using help inside zonecfg if necessary. Once again, end to exit back to the main level.
We should now be good to go. If you want to pass anything else through to the zone form the host, you can do so. For instance, adding a device is as simple as:
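Something like this, with /dev/device/ standing in for whatever device path you actually need:

add device
set match=/dev/device/*
end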
This would give you access to everything under /dev/device/ on the host machine. You can get freaky with wildcards here.
We can now use the view command to look at what we’ve done. After that, it’s time to nut up and shut up:
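The three commands in question:

verify
commit
exit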
Verify that the settings are sane, commit them (save them), and exit. We now have a zone in the configured state. You can view the states of your zones using:
zoneadm list -civ
A very useful command, but for now we want to install the system. This means copying the OS and packages over to your zone. Whether you did create or create -b significantly affects the amount of stuff to be copied, but it shouldn’t take more than ten minutes on any modern system.
zoneadm -z myzone install
And then some waiting… Once it’s done, you can do a listing again and see that the status has changed from configured to installed. Now all that’s left is to give the system a swift boot, and an identity.
zoneadm -z myzone boot
This boots the zone. Notice that it takes all of two seconds, which is neat compared to many other solutions. After this, we have a system without its final configuration and identity. Fix that by issuing:
zlogin -C myzone
Capital C for console, since that’s all we have in the zone right now. It’ll ask you a bunch of questions, and want the answers immediately! Nothing you can’t handle if you’ve made it this far; mostly hostnames, DNS, languages, time zones and such. When you are done, it’ll save and reboot, then drop you at a console login prompt, where you can type the root login and password you just set.
Shazam. Your zone is up and running. Configure SSH as you see fit, and access your system that way from now on. Create some users to avoid using root, etc. This isn’t a best-practices post, so I’ll skip that stuff for now. Pretty simple, and once you have the routine down, it’ll take you 15 minutes to set up a new zone that is ready to use. Not bad, eh?
Oh yeah, and the cheat sheet, a quick reference for zones:
zonecfg -z zonename
create (for sparse-root) or create -b (for whole-root)
set zonepath=/zones/zonename
add net; set address; set physical=bge0 (or hme0, whatever you have); end
verify; commit; exit
zoneadm -z zonename install
zoneadm -z zonename boot
zlogin -C zonename