Cool thing happened on Twitter today…

A neat thing happened today on Twitter. While I admit that I don’t necessarily “get it” as quickly as some of my “web 2.0” friends do, I haven’t seen anything like this in the several months I’ve been following Twitter. I’m sure it happens all the time to cool people, but I was lucky enough to cross over for a few minutes, and that’s notable.

Basically, the “thing” was that someone needed help understanding how to get started with an OID (Oracle Internet Directory) installation for managing TNS connect descriptors. He wanted (and needed) to use an existing database since he was resource-constrained, and he wasn’t sure what the installation process looks like for that configuration.

Here’s the combined thread between @fuadar, @topperge and me (@dannorris) just a few minutes ago:

fuadar: looking for someone or some document to install oid in an existing 10.2 database need only names service resolution
dannorris: @fudar It’s much easier to just have it install its own DB. If you use existing DB, you must run metadata repos creation asst first.
fuadar: @dnanorris out of space already have a database out there for other functions. trying to setup oid to solve our tnsnames issues
dannorris: @fudar Issues? Honestly, OID usually introduces more issues than it solves when it comes to TNS. It’s a lot more complex than a text file.
fuadar: @dannorris true but i’m trying to come up with some way to manage acouple of hundred servers and a couple of thousand clients
dannorris: @fudar It’s definitely the right direction to head–just need realistic expectations about complexity and manageability–not easier!
fuadar: @dannorris agree just looking for better documentation
topperge: @fuadar fudar, all you need is RepCA and install the identity repos, http://tinyurl.com/yweyr8
dannorris: @fuadar Better free up some space first–you’ll need a gig or two I’d expect. (ps sorry for misspelling your handle)
fuadar: @topperge so what you are saying is just go thru the oid software install process and then so the repca manually
fuadar: @topperge i am using the Oracle Identity management dvd’s 10.1.4.0.1
dannorris: @fuadar Install RepCA first, run it, then install OID from IdM and tell it to use the repos you created.
dannorris: @fuadar be sure to check DB prereqs (version, pkgs, options, etc.). Follow section here http://snurl.com/1xzda
fuadar: @dannorris thanks reinstalling the software now
topperge: @fuadar There is a 10.1.4 MRCA with the DVDs, install from that first , then install from the OIM Infrastructure CD second
topperge: @fuadar then make sure you patch to 10.1.4.2 which is patch 5983637 on metalink (doing the same install right now)

Even patch numbers! Posting that same question to a forum would likely have taken several hours to get responses, and the responses here were precise as well. Now, I don’t want everyone to believe that @topperge (Matt Topper) and I sit around all day looking for questions we can answer on Twitter. However, I am on Twitter most of the time (even though I don’t tweet that often) and occasionally throw in a response or post when I think of it. Matt is usually there and seems to behave similarly most of the time.

The bottom line: today, Twitter helped someone solve a real technical problem much faster than they were likely to solve it via other means (web 2 dot oh or otherwise). I don’t know that it happens every day, but we can only save one life at a time :).

You can follow me (@dannorris) on twitter, but as I don’t say much, you won’t likely be impressed. After all, I’m no Jake Kuramoto.

Interesting Metalink findings

I generally don’t spend a lot of time surfing around Metalink. Normally, I get in, find the bug or patch or whatever I need, and get out. However, my current project involves a database upgrade for a very performance-sensitive application (they have an SLA that they actually have to honor, or honour for my UK friends :)), so I’ve been doing a bit of research. Coincidentally, a recent posting to Oracle-L allowed me to mention one of my research findings, and several subscribers (one publicly) responded that they had never seen the document and that it was great. Well, it really is great, and Oracle Support, or whoever supplied the content, deserves a great round of applause for putting it together.

The document of which I speak is the Oracle 10g Upgrade Companion. It contains more than just the upgrade steps: it starts with a list of recommended patches, then goes on to include sections on Behavior Changes (especially valuable and absent from most other upgrade plans), Best Practices for the upgrade process, and Documentation references. While I am reporting that this is a great resource, I do have two general suggestions for improvement. Continue reading “Interesting Metalink findings”

Oracle 11g dbhome broken…oh, wait, nevermind.

I’ve been doing a lot of testing with Oracle Database 11g lately, and I’m a big fan of using oraenv to set the environment. For many releases, it seemed that Oracle had completely ignored oraenv and dbhome, but they’ve made some changes in 11g that, it seems, aren’t quite so helpful. I’ll probably file an SR on this stuff soon, but it’s easy to fix.

The issue I encountered was that the dbhome script (which is called by oraenv to determine the ORACLE_HOME for a given ORACLE_SID) failed to return the proper ORACLE_HOME in some cases. After reading dbhome (it’s less than 100 lines long), I realized that the issue was…

Oh, nevermind. I started writing this from memory of one of the beta versions and when I went to check (right where I left off typing in the previous paragraph), I found that the issue had been fixed in the production release. So, apparently that bug did get fixed.

To summarize, the bug in dbhome in beta 5 was particularly interesting because it only surfaced when the first character of your ORACLE_SID formed a special metacharacter when preceded by a backslash (\). So, everything was going along fine until I created an instance named “rac11g1”, and then dbhome failed to work, which also caused oraenv to become ineffective. All fixed now, nevermind. Kudos to Oracle for improving the oraenv and dbhome scripts in 11g to also look for the ORACLE_BASE setting. As many of you have noted or will find out, ORACLE_BASE is becoming increasingly important to Oracle installations.
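For the curious, here is a minimal sketch of how that class of bug bites. The actual internals of the beta dbhome script are my assumption from memory; the point is just that if a lookup pattern gets built as a backslash followed by the SID and then handed to a tool that interprets escape sequences, any SID starting with a letter like r, n, or t collapses into a control character:

```shell
# Hypothetical reconstruction, NOT the actual dbhome code: build a pattern
# as "\$ORACLE_SID" and let printf interpret it, the way an escape-aware
# tool inside a script might.
ORACLE_SID=rac11g1
mangled=$(printf "\\${ORACLE_SID}")   # printf sees \rac11g1: \r becomes a carriage return
if [ "$mangled" != "$ORACLE_SID" ]; then
  echo "lookup pattern for $ORACLE_SID was mangled; the oratab match would fail"
fi
```

An SID like “orcl” sails through because “\o” isn’t a recognized escape, which is exactly why the bug hid until an SID beginning with “r” came along.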

Oracle Clusterware & Fencing

I was just catching up on my reading and found an excellent post on Kirk McGowan’s blog discussing Oracle Clusterware’s fencing mechanisms. As Kirk details, there are many theories regarding the effectiveness and safety of Oracle’s fencing approach and he provides his usual no-nonsense responses to those theories.

In case you are lost, a little background may be helpful. Fencing (generally speaking) is a mechanism employed by clusterware software to force one or more nodes out of a cluster in the event of a problem. The problems can be, and usually are, serious ones; if fencing algorithms weren’t included, most clusters would likely implode and be very unstable. There are many different approaches to fencing. Some vendors provide I/O fencing, which works with the storage to stop any I/O from the node being evicted from the cluster and therefore prevents corruption of the cluster filesystem and/or database files residing in non-filesystem storage (like ASM or raw devices). Oracle performs fencing at the node level, using a variation of the algorithm known as STONITH (Shoot The Other Node In The Head). As Kirk explains, since there are no easily accessible APIs for remotely powering off other cluster nodes, Oracle Clusterware instead uses node suicide: rather than kicking the other node out of the cluster, a node removes itself by rebooting. Presumably, when the node restarts, if there is some persistent failure, the node won’t be able to rejoin the cluster and administrator intervention will be required to resolve the problem.

Anyway, Kirk’s treatment of the topic is great and I learned a lot (as I often do when listening to Kirk). Thanks for a great article (and your usual wit) Kirk!

The Best Oracle Database 11g New Features

Oracle Database 11g was officially launched today. As a beta tester for the product, I can say that this product has some very interesting new features that really make me want to recommend the upgrade to Oracle Database 11g.

Here are my thoughts on a few of the new features in Oracle Database 11g.

  • Database Replay (Real Application Testing): This feature allows you to capture the actual workload on one system and then “play back” that workload on another database. It acts sort of like a load testing tool, but better because it actually uses the real workload from a live system to generate the load on the secondary system. The capture will include all queries, DDL, DML, and all other activity in the database. It also includes the actual timing for each event so that concurrency is also kept the same. For me, this is the most compelling new feature in Oracle Database 11g and I think it will ultimately have the most impact. If the capture can be gathered on a 10g or 9i database, the feature will be even more compelling. Rumors abound, but I’ve heard that a 10g capture may be coming in the future. Just imagine–what if you could actually test your real application workload on a new database release before doing the upgrade…awesome!
  • SecureFiles: I didn’t put this new feature through any performance tests, but from the technical descriptions I’ve received, it will certainly have a positive impact. Basically, SecureFiles are the next generation of LOBs. Syntactically, you can almost miss the STORE AS SECUREFILE in the CREATE TABLE syntax. However, you won’t likely miss the performance impact of using SecureFiles–some testing has shown performance comparable to filesystem access.
  • Invisible Indexes: Ever have one query that could use an extra index, but that index causes severe problems for the other queries accessing that object? If so, then an invisible index may be the answer. Basically, an invisible index is one that the optimizer ignores unless you explicitly tell it to consider invisible indexes. In all other situations, it is ignored (because it is “invisible”).
  • Partitioning Enhancements: You can use just about any combination of partitioning and subpartitioning schemes together in 11g. The restrictions from previous versions are lifted. The SQL Access Advisor now also includes the ability to recommend partitioning for an object if you’re not sure whether or not it will help.
  • PL/SQL Enhancements:
    • PL/SQL Fine-Grained Dependency Checking: This allows dependent PL/SQL stored code to remain valid when a change to a referenced object doesn’t actually affect it. For example, if you add a column to a table, a PL/SQL package that depends on that table shouldn’t become invalid in most cases.
    • PL/SQL Automatic Native Compilation: Native Compilation has been available for the past several releases, but it had significant prerequisites including a C compiler. This new feature includes the necessary compiler and automates the steps involved so that PL/SQL can be natively compiled automatically.
  • Results Caching: I’m a skeptic, but if this really does work well and gives current, non-stale data, it will be a very, very big deal.
  • Flashback Data Archive (“Total Recall”): If you liked the flashback table and flashback transaction features, you’ll love this. It basically takes the undo information that is used to provide the flashback table feature and archives that data so that flashback table can be performed for as long as you have disk space to support it.
  • Segregation of Ownership: One of the important features for larger organizations is the ability to segregate ownership of the Oracle software. For Oracle RAC clusters, there are typically three separate installations: Clusterware, ASM, and DBMS. With Oracle Database 11g, the beginning of support for separation of duties is visible. Oracle has acknowledged that some customers have system administrators that care for the Clusterware, but don’t know (or really care about) the database. The storage administrators are very interested in ASM and how it works so they can configure and support database storage better, but they don’t really know much about the database. And finally, while some DBAs are fluent in Clusterware and ASM, many know a little about Clusterware, a little more about ASM, but mainly focus on the database. Oracle’s new release will include documentation arranged in a manner that supports this segregation of duties.
  • Rolling Upgrades: This new feature is what you think it is, but it won’t apply to upgrades to 11g. It will, however, apply to many of the patches that will be released on top of the 11g database. That’s another big motivating factor to upgrade–so that future patches and upgrades will incur less downtime.
  • Automatic Partition Adds (Interval Partitioning): This is the automation that many people have built via custom processes for years. Basically, this is for you if you have a partitioned object that regularly requires new partitions (commonly, when a date is in the partition key). With this new feature, Oracle will automatically add the new partition on the first insert that belongs in it. Obviously, you can still create new partitions by your own methods, but you might consider doing that by just running an insert and a rollback instead of through a custom process, as many customers do today.
  • Managed Recovery Physical Standby: Finally! You can apply logs to a physical standby database while it is open read-only. There’s some black magic that makes this possible that I’m sure will be the source of much speculation until its guts are exposed.
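To show how easy the SecureFiles syntax is to miss, here’s a sketch of the documented CREATE TABLE form (the table and tablespace names are made up for illustration):

```sql
-- Hypothetical table; SECUREFILE is specified per LOB column.
CREATE TABLE documents (
  doc_id NUMBER PRIMARY KEY,
  body   CLOB
)
LOB (body) STORE AS SECUREFILE (
  TABLESPACE users   -- assumed tablespace name
);
```

Swap SECUREFILE for BASICFILE and you get the old-style LOB, which is how subtle the difference looks on the page.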
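The invisible index idea can be sketched with the documented 11g DDL (object names are hypothetical); note that visibility is an attribute you can toggle after creation:

```sql
-- Create the index invisible so existing queries are unaffected:
CREATE INDEX emp_hire_ix ON employees (hire_date) INVISIBLE;

-- Ask the optimizer to consider invisible indexes in this session only:
ALTER SESSION SET optimizer_use_invisible_indexes = TRUE;

-- Once satisfied, make it visible to everyone (or drop it quietly):
ALTER INDEX emp_hire_ix VISIBLE;
```

This makes it practical to test a candidate index against one problem query without risking plan changes for every other statement touching the table.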
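Interval partitioning can be sketched like this (hypothetical table; Oracle adds a new monthly partition automatically on the first insert past the current high value):

```sql
CREATE TABLE sales (
  sale_id   NUMBER,
  sale_date DATE
)
PARTITION BY RANGE (sale_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))   -- one new partition per month, on demand
(
  -- Only the first range partition must be defined up front:
  PARTITION p_initial VALUES LESS THAN (DATE '2008-01-01')
);
```

Inserting a row dated, say, March 2008 would create the missing partitions without any DBA action, which is exactly the custom nightly “add next month’s partition” job this feature replaces.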

Besides these highlights, there are many other features that deserve mention. Many of those features are related to lifecycle management. Some very interesting advancements related to query tuning, testing those tuned queries, and rolling the new execution plans into production in a controlled, straightforward manner are among some of the most interesting to me. In another area of lifecycle management, managing less-frequently-accessed data, Oracle provides methods to migrate that data to less expensive storage to use the storage budget most efficiently without taking data offline.

Oracle Database 11g Launch

Today, July 11th, was the launch of Oracle Database 11g. From a technologist’s point of view, it’s a somewhat anticlimactic day since you can’t actually get your hands on the bits yet. However, there was some technical information posted on OTN and a nice overview presentation that was webcast live online from New York City.

I’ve got a lengthy article prepared on some of my favorite features of 11g, but I’m not sure if I can post it yet. I need to sort out what information they’ve made public and which parts haven’t been disclosed yet. My company and I are participating in the 11g beta program, so I want to be sure I don’t let the cat out of the bag too soon with respect to some of the new features that may not have yet been disclosed. In fact, as we’re reminded often, some of the features we tested may not be in the final product if they aren’t ready or mature enough.

So, watch for an article from me either here or on OTN in the next week or two after I make sure it’s properly censored. In the meantime, I encourage everyone to read the whitepapers on OTN.

Licensing continues to “uninterest” me

I am spending more and more time lately reading the writings of others in the Oracle technology space. Many of those readings start by following a link posted on the Oracle-L list.

Today’s linkfest led me to a great Open Letter to Larry Ellison on AWR and ASH Licensing by Mark Brinsmead. I first had to understand the issue, as I’ve made it a high priority to learn as little about Oracle licensing as possible. Right or wrong, I continue to contend that it seems to change week-to-week, and that just keeping up with the changes is at least a full-time job.

Anywho, the issue is that in order to have any interaction with the Automatic Database Diagnostic Monitor (ADDM), Automatic Workload Repository (AWR) or Active Session History (ASH), you must license the Enterprise Manager Diagnostic Pack. (Don’t believe me?) This pack is licensed on top of your database license and currently lists for $60 per named user or $3,000 per processor.

As you’ll find linked in the open letter posting above, I found more interesting reading in this area in a few articles, one by Jared Still on DBAzine.com, and from last year, another by Jonathan Lewis.

While there was no interest in licensing that could be “sparked” by this new finding, I do like to help customers (and my own employer) stay in compliance with licensing restrictions, so this is good to know. I’d encourage you to add your name to the list of signatories on Mark’s open letter.