OpenIndiana vs Solaris 11 zones

CougTek

Serial computer killer
Joined
Jan 21, 2002
Messages
8,724
Location
Québec, Québec
I plan to move all our native Solaris setups to zones. They will sit on a computer that will be isolated on the network, without direct Internet access. Our company doesn't plan to pay Oracle for operating system support. I know Chewy told me to run from this situation, but I want to learn these things, so I accepted the job anyway.

Since OpenSolaris is dead, I'm weighing whether to move to Oracle's Solaris 11 (without patches and unsupported) or to OpenIndiana (which is fully supported by a UNIX community with a lot of free time on their hands). Both OSes support the hardware I plan to use for the host system. I'm leaning towards OpenIndiana because of the community support, but the biggest factor will be whether zones are as efficient in OpenIndiana as they are in Solaris 11. From what I read, a zone only adds a 1-2% performance hit over a standard installation. That's a lot better than what VMware can claim.

Does anyone know if zones in OpenIndiana are as good as they are in Solaris 11?
 

CougTek

Serial computer killer
Joined
Jan 21, 2002
Messages
8,724
Location
Québec, Québec
I'd also like to use LXDE instead of GNOME, but it looks complicated. Compiling stuff on Unix is not my forte.
 

Chewy509

Wotty wot wot.
Joined
Nov 8, 2006
Messages
3,327
Location
Gold Coast Hinterland, Australia
They are both built from the same source code, except OI is slightly older than the S11 version (OI is built on code base version 151a and S11 is built on code base version 175a). So as far as zone support and performance go, I would expect them to be the same in regards to hosting S10 zones. However, I'm unsure of the state of Crossbow support in OI, which is a very nice feature when dealing with zones. (http://hub.opensolaris.org/bin/view/Project+crossbow/ )
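If Crossbow is in there, giving a zone its own virtual NIC is basically a one-liner from the global zone. Something like this should do it (the link and VNIC names are just examples):
Code:
# create a virtual NIC on top of a physical link, then list the VNICs
dladm create-vnic -l e1000g0 vnic1
dladm show-vnic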

AFAIK, LXDE doesn't build on S11 out of the box, but it may only require a few patches to build/work. (Note: a lot of the admin tools that come with LXDE are designed for Linux rather than Solaris, so they won't work, and you may end up bringing in quite a few GNOME apps anyway.) There is an SFE package for LXDE on OI, but it requires a full-blown CBE (Common Build Environment). BTW, why are you worried about desktop environments on a server? It's all done via the command line, or a remote X session (via SSH) if you need a GUI. Most Solaris servers don't run X at all... there's simply no need for it.

My only concern with OI is patches. Since the latest OI is based on Illumos, and with Oracle having cut off access to the source code, I don't know how quickly patches are produced for OI in the event of a bug or security issue being found (despite there being a very good community around OI and Illumos).
 

Chewy509

Wotty wot wot.
Joined
Nov 8, 2006
Messages
3,327
Location
Gold Coast Hinterland, Australia
PS. From the Solaris 11 License agreement.

You may not:
- use the Programs for your own internal business purposes (other than developing, testing, prototyping and demonstrating your applications) or for any commercial or production purposes;

So that might make your decision very easy... ;)
 

CougTek

Serial computer killer
Joined
Jan 21, 2002
Messages
8,724
Location
Québec, Québec
BTW, why are you worried about desktop environments on a server? It's all done via the command line, or a remote X session (via SSH) if you need a GUI. Most Solaris servers don't run X at all... there's simply no need for it.

The command line and I are not friends. While I do almost everything on the servers in a terminal, I like having a windowed environment so I can open several terminal windows simultaneously.

I'll look into Crossbow, even though it looks like I'll have no choice but to opt for OI. Thanks for the feedback.
 

CougTek

Serial computer killer
Joined
Jan 21, 2002
Messages
8,724
Location
Québec, Québec
BTW, do you have an idea how much it would cost to get the right to use Solaris 11 on two different 4-socket systems, each socket having 8 real cores and 16 threads? I assume it would be tremendously expensive.
 

CougTek

Serial computer killer
Joined
Jan 21, 2002
Messages
8,724
Location
Québec, Québec
From what I see on their website, it would cost us ~$6,700 for a 3-year support contract (that we don't need) for the two servers we plan to deploy. No thanks.
 

Chewy509

Wotty wot wot.
Joined
Nov 8, 2006
Messages
3,327
Location
Gold Coast Hinterland, Australia
BTW, Coug, you may want to check the current licenses you have for the existing S10 installations. Be aware that most legacy support licenses are tied to the hardware (if the license came with the hardware); otherwise you will need to pay for your existing S10 installations if they are moved away from the original hardware (e.g. migrated into VMs or zones on another machine).

S10 U9 has the same clause as S11: free for non-commercial use, $$$$ for the rest. S10 U5-U8 licensing was different, due to it being defined by Sun and not Oracle, but double-check what you have and the licenses associated with it.
 

CougTek

Serial computer killer
Joined
Jan 21, 2002
Messages
8,724
Location
Québec, Québec
After an entire day of swearing and cursing, I think I can safely say that I'm fairly knowledgeable about Solaris zones: how to configure them, how to run applications within them and how to resolve issues preventing them from starting. I've finally been able to set up my test server to replicate several of our old systems, and tomorrow we begin testing how many requests it can process before failing. Damn, I feel good right now!
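For the record, the basic lifecycle boils down to something like this (the zone name and path are just my test values):
Code:
# from the global zone: define, install and boot a basic zone
zonecfg -z zone1 'create; set zonepath=/export/home/zone1; set autoboot=true; verify; commit'
zoneadm -z zone1 install
zoneadm -z zone1 boot
zlogin -C zone1    # console login to answer the first-boot questions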
 

CougTek

Serial computer killer
Joined
Jan 21, 2002
Messages
8,724
Location
Québec, Québec
Is it possible to assign two VNICs to a single zone? I've tried and so far I've not found a way (it's only been 10 minutes). In the zone configuration mode, you can "add net" and then set the physical NIC (or VNIC), but when you try to add a second NIC, it replaces the first one instead of being added alongside it.

Ex :

Code:
#global> zonecfg -z zone1
#zone1> add net
#zone1/net> set physical=vnic1
#zone1/net> set physical=vnic2
#zone1/net> end
#zone1> verify
#zone1> commit
#zone1> exit
#global> zonecfg -z zone1 info

zonename : zone1
zonepath : /export/home/zone1
[...]
net :
        address not specified
        allowed-address not specified
        physical: vnic2
[...]
Damn!

I'm thinking about trying to make a VNIC on top of the VNIC from inside the zone, but I doubt it'll work.
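Or maybe I just need to end the first net resource before adding the second one, i.e. two separate "add net" blocks. Something like this, I'm guessing (haven't tested it yet):
Code:
#global> zonecfg -z zone1
#zone1> add net
#zone1/net> set physical=vnic1
#zone1/net> end
#zone1> add net
#zone1/net> set physical=vnic2
#zone1/net> end
#zone1> verify
#zone1> commit
#zone1> exit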

This is on OpenIndiana, in case it wasn't clear.
 

CougTek

Serial computer killer
Joined
Jan 21, 2002
Messages
8,724
Location
Québec, Québec
One last issue to make my config work. Within a zone, I need to open port 443 to allow our program to communicate, but apparently ports below 1000 are blocked by default. How can I open just that one? I haven't configured a firewall on that machine. Anything that filters ports is at its default settings.
 

Chewy509

Wotty wot wot.
Joined
Nov 8, 2006
Messages
3,327
Location
Gold Coast Hinterland, Australia
I wasn't aware ports were filtered by default in zone setups, unless you've enabled the firewall? Also, have you set up any flows within the virtual switch?

But regarding what you're describing about ports under 1024 not being accessible: by default, only root (or root-equivalent) users or applications running as root are able to open ports below 1024. To confirm this is the problem, just set up the service above 1024; if that works, then it's most likely a permissions issue, not an IP filtering issue.
 

CougTek

Serial computer killer
Joined
Jan 21, 2002
Messages
8,724
Location
Québec, Québec
Also, have you set up any flows within the virtual switch?
I haven't set up any virtual switch. I've only created VNICs and associated them with the different zones. Is it bad to do so? I have 6 VNICs and two physical NICs. 5 of the VNICs need to be associated with the same physical NIC.

But regarding what you're describing about ports under 1024 not being accessible: by default, only root (or root-equivalent) users or applications running as root are able to open ports below 1024. To confirm this is the problem, just set up the service above 1024; if that works, then it's most likely a permissions issue, not an IP filtering issue.
So maybe the problem is related to the fact that my program needs to be run by a non-root user. Otherwise, it won't work, and I don't have the programming skills to modify it. We also don't have the source code. I must use port 443 as non-root. If I can't, I'm screwed. I use ports 31xx and 600x on all the other zones without issue.
 

CougTek

Serial computer killer
Joined
Jan 21, 2002
Messages
8,724
Location
Québec, Québec
It won't help me if this is a permissions issue. I'll look into it a little after 8 AM when I'm at work (in ~2h11m).
 

Chewy509

Wotty wot wot.
Joined
Nov 8, 2006
Messages
3,327
Location
Gold Coast Hinterland, Australia
I haven't set up any virtual switch. I've only created VNICs and associated them with the different zones. Is it bad to do so? I have 6 VNICs and two physical NICs. 5 of the VNICs need to be associated with the same physical NIC.
Not at all; the simple solutions are often the best when dealing with zones.
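If you ever want to double-check which VNIC sits over which physical link, dladm should show the mapping, e.g.:
Code:
# from the global zone: list the VNICs, the links they sit over, and the physical NICs
dladm show-vnic
dladm show-phys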

So maybe the problem is related to the fact that my program needs to be run by a non-root user. Otherwise, it won't work, and I don't have the programming skills to modify it. We also don't have the source code. I must use port 443 as non-root. If I can't, I'm screwed. I use ports 31xx and 600x on all the other zones without issue.
RBAC to the rescue...

# usermod -K defaultpriv=basic,net_privaddr <username>

where <username> is the user that the process runs under.

This will give the user access to any port below 1024, not just 443. I assume you don't need that much fine-grained control over port security? (Otherwise, see the manual on RBAC and the "net_privaddr" privilege.)
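After the user logs back in, you can double-check it took effect by looking at the privilege sets of their shell, e.g.:
Code:
# run as that user; net_privaddr should now appear in the privilege sets
ppriv $$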

PS. Some light reading: http://www.c0t0d0s0.org/archives/40...es-RBAC-and-Privileges-Part-3-Privileges.html
 

CougTek

Serial computer killer
Joined
Jan 21, 2002
Messages
8,724
Location
Québec, Québec

Solved! Even after I ran your line within the zone, it still blocked access to ports lower than 1024. All pissed off, I followed the link you provided. The command sequence that did the trick was (from the global zone, with the zone halted):
Code:
#>zonecfg -z zonename
#zonename>set limitpriv=default,net_rawaccess
#zonename>verify
#zonename>commit
#zonename>exit
Once I restarted the zone, everything worked flawlessly. Well, in appearance. I still have to test how well it copes with load, but at idle, it works.

Next time I drop by Brisbane, I'll buy you a beer.
 

CougTek

Serial computer killer
Joined
Jan 21, 2002
Messages
8,724
Location
Québec, Québec
Well, I'm highly disappointed now. Network communication is choppy. "top" tells me the RAM and CPU are twiddling their thumbs most of the time, but my clients connected to the server are receiving data in chunks and then nothing for a second or two. The data should normally flow smoothly.

Maybe it's because I've put four VNICs on the same physical link. I don't know. The only thing I know now is that I'm not getting the performance I was hoping for from this setup.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,737
Location
USA
Well, I'm highly disappointed now. Network communication is choppy. "top" tells me the RAM and CPU are twiddling their thumbs most of the time, but my clients connected to the server are receiving data in chunks and then nothing for a second or two. The data should normally flow smoothly.

Maybe it's because I've put four VNICs on the same physical link. I don't know. The only thing I know now is that I'm not getting the performance I was hoping for from this setup.

If the CPU is idle, can you check the I/O wait times? I've seen numerous virtualized environments where an over-worked disk subsystem was enough to cause a nightmare. It's usually that, or overcommitment of RAM causing paging on the hypervisor side. Rarely have I seen CPU contention or network contention cause dramatic impacts in virtualized environments. This may not apply to your setup, but I figure mentioning it might help you troubleshoot different areas.
 

CougTek

Serial computer killer
Joined
Jan 21, 2002
Messages
8,724
Location
Québec, Québec
If the CPU is idle, can you check the I/O wait times? I've seen numerous virtualized environments where an over-worked disk subsystem was enough to cause a nightmare. It's usually that, or overcommitment of RAM causing paging on the hypervisor side. Rarely have I seen CPU contention or network contention cause dramatic impacts in virtualized environments. This may not apply to your setup, but I figure mentioning it might help you troubleshoot different areas.
I'll check tomorrow, but the system has 16GB of RAM for only three zones plus the global zone. When I checked the RAM usage using "top", 14GB out of the 16GB total were free. How can I/O wait times be viewed in OpenIndiana? netstat, maybe?
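If it works anything like on Linux, I'm guessing iostat is what I want. Something like this, maybe:
Code:
# extended per-device stats every 5 seconds; the %w and %b columns should show wait/busy time
iostat -xn 5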
 

CougTek

Serial computer killer
Joined
Jan 21, 2002
Messages
8,724
Location
Québec, Québec
It finally worked, and even better than I expected. I ran the tests on my toaster, a $600 mini-ITX box with a single, non-overclocked Core i7-2600. With the zones fully configured and a bit of tweaking, it is almost able to match the overall performance of our production group. The thing is, our production group is composed of 23 servers from 2 to 5 years old, and they cost ~$40,000 when they were new. I'm pretty sure two machines like my $600 toaster would match or exceed our 23 production servers, while three would almost certainly wipe the floor with our current production group.

The other thing I've found is that seven of our 23 servers are quad-cores and the version of Solaris running on them supports the creation of zones, so we could easily add 50% to the performance of our current setup simply by using containers. Our main program is more than ten years old and is single-threaded, so in order to gain more performance, we need to run additional instances on separate machines. That's the main reason for needing containers. What I cannot understand is that my colleague, who has a lot more knowledge of Solaris than I have, never thought about using zones to boost the entire setup's performance. He's the architect of our current configuration... and the one before it too. Since my second week there, I've told him we need to find a way to run this thing in Solaris zones, but it never rang a bell. I've been there for 6 months, having never configured a Solaris system before... and I end up configuring a $600 box that can keep up with his $40,000 half-rack. The comparison is pretty ridiculous when you view both side by side (picture the Silverstone Sugo SG-05W with the busted front panel that I've shown here before versus 23 1U servers). I guess the explanation is that he doesn't have a Chewy to help him... (or a timwhit and a Handruin)

Our boss has been pressuring us to find solutions to increase the production group's capacity and asked us how many servers we would need to buy and how much it would cost. Wait until I tell him we don't need to spend a dime.
 

Chewy509

Wotty wot wot.
Joined
Nov 8, 2006
Messages
3,327
Location
Gold Coast Hinterland, Australia
Congratulations!

Well, you could replace the 23 servers with 2-4 dual-socket 6- or 8-core Xeon-based servers, absolutely blow away the performance (even of the single i7 setup), and save $$ on power and cooling, if they are intent on spending money. It certainly sounds like they also need a migration plan to move off the older 3-year-plus servers onto something new anyway. (I'm typically shy of running out-of-warranty server hardware for a business-critical function.)
 

CougTek

Serial computer killer
Joined
Jan 21, 2002
Messages
8,724
Location
Québec, Québec
I thought they needed all those servers to impress potential customers?
Well, they can always laminate those that are no longer used and hang them on the walls... Honestly, I don't consider the sales pitch to be my problem.

Oh and I did some more calculations and I realized that I will be able to double the overall capacity of the production group instead of simply adding 50% to it. This is getting better and better.
 

Chewy509

Wotty wot wot.
Joined
Nov 8, 2006
Messages
3,327
Location
Gold Coast Hinterland, Australia
Out of interest, has the other guy done any of the Solaris 10 training courses? I know when I did my Solaris 10 System Administration track, even the basic-level certification included a whole module on Zones/Containers. Whilst Solaris 10 doesn't have Crossbow (which was introduced with OpenSolaris), zones were a very powerful concept even without the extra VNIC support that Crossbow brings to the table.

FYI, for those that don't know, Crossbow support basically lets you build entire virtual LAN and even WAN setups (including complete routing/firewalls) within the virtualised environment. I think it was on Joerg's** blog: he mimicked an entire WAN setup for distributed cloud computing, including backend servers (IIRC each server was running GlassFish and Oracle DB), all on a single T2-based server, and was able to show what would happen when servers fell over in such a setup! (Crossbow also lets you set throughput limits on VNICs, so you can simulate WAN connections quite well.) The virtualisation options that Solaris 11 brings to the table are very impressive.

** Joerg is a senior Solaris specialist at Oracle Germany.
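For example, capping a VNIC's bandwidth is just a link property. From memory, it's something along these lines:
Code:
# cap a VNIC at ~10 Mbps to mimic a slow WAN link (the VNIC name is just an example)
dladm set-linkprop -p maxbw=10M vnic1
dladm show-linkprop -p maxbw vnic1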
 

Chewy509

Wotty wot wot.
Joined
Nov 8, 2006
Messages
3,327
Location
Gold Coast Hinterland, Australia
What CPU do these servers have? I hope they aren't T2s.

Well, if they are running purely single threaded applications on a T2, someone didn't do their homework!

PS. Oracle announced that the T4-4 server can now be configured with up to 2TB of RAM in 4-socket configurations (4 sockets = 4 CPUs with 8 cores and 8 threads per core = 32 cores or 256 hardware threads). A fully decked-out SPARC T4-4 server (4x T4s @ 3GHz, 2TB RAM, 3.2TB of 10K SAS drives, quad-port 1Gb LAN + the quad-port 10Gb LAN option) will set you back just under US$300K before discounts.
 

CougTek

Serial computer killer
Joined
Jan 21, 2002
Messages
8,724
Location
Québec, Québec
No, not T2s. Most are either Dell or HP single-socket 1U servers with dual- or quad-core x86 CPUs, mainly Core 2 Duo era. There are 4 with Xeon X3460s too. I don't know if the other guy followed a seminar or not. All I know is that he's worked on Solaris for nine years, and it took me 6 months from scratch to find a way to double the efficiency of his setup, which in my book is pretty lame.

So you think the 2006 version of Solaris 10 doesn't support the use of VNICs in zones? That's unfortunate. We'll have to install OpenIndiana on them then. Still a lot cheaper than replacing the whole server group (an entire $0 expense!). The company is low on cash these days, so being able to delay a major expense will be welcome. Of course, I'd really like to replace all 23 servers with a single 1U Supermicro 6017TR-TF (for instance), but that will probably have to wait.

BTW, $300K for 256 hardware threads doesn't impress me all that much. Filling a Supermicro 720E-R75 chassis with 10x SBI-7227R-T2 blades, each with 4x Xeon E5-2680, 8x 8GB DDR3 ECC Reg low-profile 1600MHz RAM modules and 4x Intel 520 Series 240GB SSDs, plus a 20-port switch and a KVM module, will only set you back ~$133,000. You get 640 hardware threads (10 blades x 4 CPUs x 8 cores x 2 threads), 2.56TB of RAM, ~9.5TB of SSD storage and 20 Gigabit Ethernet ports, all fitting in 7U of rack space.

You can also beat the latest Sun server's thread count per dollar with Supermicro's FatTwin servers or a bunch of Intel H2216JFJR systems, although neither is as compact as the blade solution above.
 