Saturday, May 29, 2010

LDOM 2.0 and next Solaris update

It looks like LDOM 2.0 support will be included in the next Solaris update, update 9, probably planned for release around September. In practice this means it will finally support dynamic reconfiguration of memory. It will also include cooperative domain migration support, which is a foundation for live migration, which at least early this year was scheduled for LDOM 2.1. This will be built on the next LDOM firmware, 7.2.10.
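For the memory part, this should mean being able to resize a running domain with the usual ldm subcommands, roughly like this (the domain name and sizes are only examples, and this of course assumes the final 2.0 bits behave as announced):

ldm set-memory 8G ldom1      # set the memory of a running domain to 8 GB
ldm add-memory 2G ldom1      # or grow it incrementally while the domain stays up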

LDOM 2.0 will also support the next Niagara chip, the UltraSPARC T3 with 16 cores. Put together with the announcement that the LDOM team is hiring, it sure looks like Oracle is investing in SPARC and LDOM technology.

Solaris Virtualization for Voice of Customer Tour (PDF)
FWARC/2009/452 HV APIs for cooperative guest migration

Friday, May 28, 2010

Putback and a new build for 2010.05

The DTrace TCP/UDP providers discussed in this post have now been integrated into the OpenSolaris source. Another useful enhancement also made its way into the source, PSARC/2010/181 PRIV_SYS_RES_BIND privilege. This will make it possible to delegate the permission to bind processes to specific processor sets from within a zone.
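As a rough illustration of the new tcp provider, a one-liner like this should count established inbound connections per remote address (probe and argument names as described in the earlier post; treat it as a sketch):

dtrace -n 'tcp:::accept-established { @[args[2]->ip_saddr] = count(); }'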

A new build of what is to become the next release of OpenSolaris is probably also finished, or at least very close to it: the second respin of build 134, 134b:

Author: david.comay@oracle.com
Repository: /hg/pkg/gate
Latest revision: 48706bcc893fc2c3ed76528eb4bc4b5dcb940f95
Total changesets: 1
Log message:
16087 resync repository to snv-134b

Wednesday, May 19, 2010

What's new for OpenSolaris 2010.05

Finally some more information on the upcoming OpenSolaris release, which also seems to have changed name from 2010.03 to 2010.05.

There is a new draft of the "What's New" document for the OpenSolaris release available online. It lists the major new features that are available in the upcoming release. Hopefully it will be released in the next few weeks.

Update: The document was removed, but it's now available again. It is basically the same document as the old 2010.03 version, with some minor modifications.

Thursday, May 13, 2010

Oracle RDBMS on ZFS white paper

Storage stop enlightened me that Oracle has released a white paper with recommendations for deploying Oracle RDBMS on top of ZFS in Solaris. It has some details regarding the version of Solaris (the latest, of course), the Oracle RDBMS (any version) and some basic settings for both the database and the datasets. It also has some recommendations and suggestions on kernel tuning. This is of course a good document to start with if you are looking at deploying Oracle databases on ZFS.
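To give a flavour of what this kind of dataset-level tuning usually looks like, something along these lines (the pool and dataset names are made up and the values are only illustrative, so check the paper for the actual recommendations):

zfs set recordsize=8k dbpool/oradata       # match the Oracle db_block_size
zfs set logbias=throughput dbpool/oradata  # favour throughput for datafile writes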

A direct link to the document: wp-oraclezfsconfig-0510_ds_ac2.pdf

Monday, May 10, 2010

Solaris 10 and "upgrade on attach"

The next update of Solaris 10, probably due this fall, will include support for the new option for the update on attach function for zones discussed earlier. All packages that would be installed in a new zone can be updated at attach, instead of only the bare minimum needed to get a supported zone running. The alternative today is to include all zones in the initial upgrade, which can take a very long time with many zones, even with the recent enhancements to the packaging system in Solaris 10 10/09. Using this feature, the global zone can first be upgraded and then the zones can be attached one after another, possibly moved from another global zone, to minimize the downtime.
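The workflow would then look roughly like this, assuming the new behaviour is exposed as an extra attach flag (shown here as -U, alongside today's -u; the zone name is made up and the final syntax may differ):

zoneadm -z webzone detach        # detach on the old host and move the zonepath
zoneadm -z webzone attach -U     # attach on the upgraded host, updating all packages
                                 # (today's -u only updates the bare minimum)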

Read my previous posts regarding the other updates in update 9, or Solaris 10 9/10 which is the planned name and release for the update. In short: a large ZFS update, JDK 1.6, iSER and Firefox/Thunderbird updates.

Solaris 10 9/10 and second ZFS refresh
First hints of Solaris 10 update 9
Desktop update for Solaris 10 update 9

Sunday, May 9, 2010

zil synchronicity and zpool status updates

A few interesting ZFS related enhancements have made their way into the OpenSolaris source over the last week. First, PSARC/2010/108 zil synchronicity was integrated, which makes it possible to control how writes are handled per ZFS dataset. Previously, if you wanted to disable the ZIL, and in practice disable synchronous writes in favor of speed, you had to do it for the whole system. It can now be set per dataset, and you also have the ability to force all writes to be synchronous. This can be particularly useful for speeding up writes to temporary NFS shares, since NFS writes are synchronous. It should however only be used if speed is more important than a guarantee that the data is safe on disk after a failure; even though ZFS always stays consistent on disk, the last writes might not have reached the disk before the failure.
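The new behaviour is exposed as a per-dataset property, so once the bits reach your system it should be as simple as this (dataset names are just examples):

zfs set sync=disabled tank/nfs-scratch   # speed over durability for a scratch NFS share
zfs set sync=always tank/critical        # force every write to be synchronous
zfs set sync=standard tank/normal        # back to the default behaviour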

The second enhancement was to the scrub code: among other things, it should now be more accurate when estimating how long a scrub or resilver will take, show resilver progress per disk, and zpool should now remember when the last scrub was performed even after a reboot. Here is a more complete list of these changes.
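Checking this out is just the usual commands (the pool name below is made up):

zpool scrub tank        # start a scrub
zpool status -v tank    # should now show a better time estimate while running,
                        # per-disk resilver progress, and the last completed scrub
                        # even after the machine has been rebooted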

The ZIL synchronicity enhancements were made possible thanks to fellow blogger Robert Milkowski, good work!