Just in case you were like me and did not tune in for Oracle’s quarterly earnings concall, there were some interesting highlights. As many of you (well, there aren’t that many of you that read this, but…) know, I’ve been very interested in Exadata since its announcement at Oracle OpenWorld 2008 in October. While some observed that Larry’s introduction keynote was rather brief, I didn’t take it as a sign of disinterest at all. According to the concall earlier this week, quite the opposite.
Here are some choice excerpts from the transcript that I find telling about the future of Exadata:
“So, that’s looking back. Now looking forward, I think the most exciting product we’ve had in many, many years is our Exadata Database Server.”
“Exadata is 100% innovation on top of our very large and very strong database business. And the early results have been remarkable. Charles Phillips will go into a lot of detail but I’ll just throw a couple of numbers out there.
One of our customers, and Charles will describe this customer, one of our customers saw a 28x performance improvement over an existing Oracle database. Another customer saw a monthly aggregation drop from 4.5 hours just to 3 minutes.
When compared to Teradata, a competitive database machine that’s been in the market for a very, very long time, another customer saw that we were 6x faster than their existing Teradata application, when using Exadata versus Teradata.
Another customer saw a batch process fall from 8 hours to 30 minutes. Charles will go into more detail on all this, he will repeat those numbers, because I think they’re worth mentioning twice.”
“So now just a few comments by area. On databases, Larry mentioned, we’re very excited about how the HP Oracle database machine is performing. The increases have just been stunning and so we are getting great feedback from our customers and the pipeline is the largest build I’ve ever seen in terms of a new product.
And as he mentioned, the numbers are just stunning. The major European retailer who reduced the batch processing time from 8 hours to 30 minutes did not believe the process had completed. We had to convince him that’s actually how it’s done.
And so, as Larry mentioned, this is the reminder that this is an internally developed technology in the midst of all the discussion of acquisitions. People forget that we’re actually spending $3.0 billion a year on research and development and this is why we do it.”
From these snippets, you can see that the top executives at Oracle are excited about Exadata. If you’re a techie (if you’re not, how’d you get to this blog?), you’ll probably already know about Kevin Closson’s popular blog on all things related to Oracle and storage. Kevin is giving a webcast next week on Exadata where we expect he’ll discuss some of the technical workings of the product–deeper than the overview information many of us have heard before. If you’re interested, I strongly encourage you to sign up for the event and attend. There is no better authority on Exadata than Kevin and this is a great opportunity!
I’ll be the first to offer hearty congratulations to Jeremy Schneider on his recent appointment to the Oracle ACE program. He certainly deserves it (I nominated him, so I suppose I would think so), and I expect great things to come.
Jeremy is the main creator of the IOUG RAC Attack! event that was held for the first time back in August 2008. He (with help from others) will also be putting it on as a half-day session at Collaborate 09. It’s a University Seminar on Thursday morning. All hands-on, all RAC, all the time. I’m looking forward to the event (I’m volunteering as a staffer). You should sign up now before it’s full! I can almost guarantee you’ll learn something.
Besides his work on this hands-on lab/class for RAC, Jeremy has many other community contributions. His blog is full of excellent technical bits that always seem to come from a significant amount of research. He contributes occasionally to the Oracle-L mailing list. He also contributed some code to OCFS (v1) several years ago, so you can guess he understands a thing or two about programming and Linux, too.
His ACE Profile isn’t posted yet, but look for it to arrive soon. In the meantime, read some of the good stuff he wrote on his blog and look for him (and me too) at Collaborate in early May!
Those of us who have dealt with RAC environments for a while are familiar with the behavior of Oracle Services in an Oracle Cluster. Services are an essential component for managing workload in a RAC environment. If you’re not defining any non-default services in your RAC database, you’re making a mistake. To learn more about services, I strongly recommend reading the definitive whitepaper by Jeremy Schneider on the topic.
In an Oracle RAC cluster, services can be started, stopped, and relocated from one instance to another. However, if you have multiple services for your database, it can be difficult to get them all started automatically after a cold start. Due to dependencies in Oracle Clusterware, Continue reading ‘Start Database Services automatically after instance startup’
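For illustration, one simple approach is a post-startup script that starts each service explicitly with srvctl. This is a minimal sketch, not the method from the post; the database name (ORCL) and service names are placeholders I made up:

```shell
#!/bin/sh
# Sketch: build the srvctl invocations a post-startup script would issue.
# DB and SERVICES are hypothetical placeholders, not values from the post.
DB=ORCL
SERVICES="svc_oltp svc_batch"

for SVC in $SERVICES; do
  # In a real script you would run this command directly;
  # here we just echo it so the sketch is self-contained.
  CMD="srvctl start service -d $DB -s $SVC"
  echo "$CMD"
done
```

A trigger-based alternative (an AFTER STARTUP ON DATABASE trigger calling DBMS_SERVICE) is also commonly used; which fits best depends on the Clusterware dependencies the post alludes to.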
This has been an interesting week, but not really that surprising.
I was called back to a client site where I had previously helped with some Oracle Application Server (10.1.2.2) post-install configuration. On that earlier visit, I got oriented to the environment they use and the packaged application they were deploying. The packaged application uses JSP, Oracle Forms, and Oracle Reports (possibly also Discoverer). The deployment environment is all Microsoft Windows servers, with two Oracle Application Server homes per application server since the vendor’s deployment requires that JSPs be deployed in a separate O_H from the Oracle Forms and Oracle Reports environment (that was my first eyebrow-raise, but whatever). Continue reading ‘Install to go-live, 3 days’
It seems to everyone that I travel a lot. I guess I do compared to most people, but I enjoy traveling, seeing new places, new people, and old friends about as much as I enjoy anything. It’s usually part of my job anyway. So, with a once-in-a-lifetime chance to visit a place I’ve never been and may not have much reason or opportunity to visit again plus do some scuba diving, I couldn’t pass it up.
That’s right, in June 2009, I will visit Iceland and willfully plunge into the clearest body of water in the world, at a bracing +2 C. It is so clear because the water is runoff from melting glaciers, filtered through volcanic rock, and very, very cold. It supports no wildlife (another reason it’s so clear/clean). Rumor has it that visibility is over 300 feet–that is something I really do have to see to believe.
The trip is being arranged by my friend Mogens Nørgaard who may very well be completely crazy. If you ever get a chance to meet and engage in conversation with him (a.k.a. “Moans Nogood”), do it. You won’t regret it, guaranteed.
The trip is highlighted on DIVE.is, Iceland’s (probably only) dive shop website. Oh, I forgot to mention that the lake bottom is where two tectonic plates (the North American and Eurasian plates, to be precise) meet up (!), so you’re essentially diving in the rift between two continents.
Of course, I’m very excited about this trip and hope that Iceland can continue to function, as their economic issues seem to be a little worse than everyone else’s. In the small-world department, I have made contact with an Iceland native I worked with back at Tandem (acquired by Compaq -> HP) in the late 90s. Hopefully, I can meet up with Leifur while I’m in the country. There are only about 300,000 people in the whole country, so he shouldn’t be *that* hard to find. On the other hand, it is possible that Leifur is as common there as “John” is in the US. We’ll see.
The second day of the RMOUG Training Days event was just as good as, if not better than, the first. I took some notes during some sessions, so before my head explodes from all the information overload, here’s my brain dump of the day’s events. Continue reading ‘RMOUG, Day 2, ++1’
RMOUG Day 2 has started, but there was so much great content yesterday, I don’t know if I’ll remember it all unless I write a few notes here on my learning.
My first session of the day was Graham Wood’s session on adaptive thresholds for monitoring in 11g. I didn’t know very much about these new methods for setting alerts, but they are certain to be useful. Some of the methods for adaptive thresholds were available in 10g, but many enhancements were made in 11g. Most importantly, the cyclic periods in a workload can be determined automatically in 11g, whereas in 10g they had to be specified manually. Graham talked briefly about using DB Time and Average Active Sessions as important metrics for tuning, but for alerting, adaptive thresholds make the most sense. Setting a hard limit means that you will likely miss many issues. If your system is normally 10% utilized overnight but spends all night at 60% utilization, you would like to know about it. However, if the system normally operates at 75% utilization during the daytime, a hard alert limit at 80% would miss that 6x utilization increase in the overnight hours. An adaptive threshold wouldn’t miss the aberration and would alert you of the 60% utilization in the overnight period, giving you time to attempt to resolve the problem before daytime hours start.
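The fixed-versus-adaptive trade-off can be sketched with simple arithmetic. The 3x baseline multiplier below is my own illustrative choice, not an 11g default:

```shell
#!/bin/sh
# Overnight baseline is ~10% utilization; one night it runs at 60%.
baseline_night=10
observed_night=60
fixed_limit=80           # daytime-oriented hard limit

# Fixed threshold: 60 < 80, so no alert fires overnight.
if [ "$observed_night" -gt "$fixed_limit" ]; then
  fixed_alert=yes
else
  fixed_alert=no
fi

# Adaptive threshold: alert when observed utilization exceeds a multiple
# of that period's own baseline (3x is an assumed multiplier).
adaptive_limit=$(( baseline_night * 3 ))   # 30%
if [ "$observed_night" -gt "$adaptive_limit" ]; then
  adaptive_alert=yes
else
  adaptive_alert=no
fi

echo "fixed: $fixed_alert, adaptive: $adaptive_alert"
```

With these numbers the fixed limit stays silent (60 < 80) while the adaptive one fires (60 > 30), which is exactly the overnight aberration described above.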
I arrived in Denver yesterday afternoon for the RMOUG Training Days event. As I’ve written before, this is the first conference I’ve attended (and paid for!) as an attendee in at least 6 years, maybe the only one ever. The coolest part was that the small amount I paid for an attendee registration ($285) has already been totally worth it, and the sessions haven’t even started yet. Many people would pay that amount just for an opportunity to visit with some of the people I got to talk with last evening.
After all, I know of no other conference where you can have meaningful, interesting conversations with all of these fine individuals in a single evening: Mogens Nørgaard, Debra Lilley, Graham Wood, Kevin Closson, Daniel Liu, Gaja Krishna Vaidyanatha, Jeff Needham, Christo Kutrovsky, Mike Ault, John King, Joze Senegacnik, Tim Gorman, Duncan Mills, Lynn Munsinger, and Peggy King. And those are just the people I got to talk to (there were many other well-respected technicians and sharers of knowledge around that I didn’t have time to speak with). I haven’t even gotten to see Cary Millsap, Robert Freeman, Craig Shallahamer, Riyaj Shamsudeen, Tanel Poder, Jeremiah Wilton, Tom Kyte, Iggy Fernandez, or Daniel Fink yet.
This is my first time at RMOUG, and you may be thinking that this is some sort of fluke to have all these great researchers and presenters at one event, especially a relatively small event where there are ample opportunities to network with them directly. I’ve reviewed the RMOUG agenda for the last 3 years, and this agenda is representative of the quality they’ve managed to schedule for the event every year. Plus, unlike other conferences, the RMOUG attendee tuition is priced to help them break even, not make a large bankroll. My biggest regret is that I couldn’t manage to get here before this year!
I love attending technical conferences for Oracle. I guess that’s obvious since many of you have probably seen or met me at a conference. The best parts for me are meeting so many of those that I’ve connected with on mailing lists, forums, or other online communities. Of course, conferences are a place to share what you know and I find that especially rewarding too. To that end, here are some of the sessions I’ll be sharing in the 2009 conference agendas.
Continue reading ‘I can haz conferences’
I’ve got an (always-growing) list of products, features, and configurations that I’d like to experiment with, but sometimes they aren’t practical to test on my local virtual machines. So, I planned to roll a new virtual machine on the development ESX server we had at my office. All was going along fine with the Linux installation (OEL5U2) screens until I got to the end, where the actual install starts. For whatever reason, our little server was sick (likely a storage problem) and it hung for hours.
Rather than debug the storage issue, I wanted to get on with my testing. I consulted my usual list of experts, and my friend Matt suggested that I spin up a machine in the Amazon EC2 cloud. I checked out the costs and it seemed fairly reasonable. VERY reasonable, actually. Since pricing was time-based (cost per hour the machine is running), I waited a day or two until I could dedicate enough time to it and complete the testing in one sitting. Continue reading ‘I bought my own server for $1.02 (USD!)’
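To give a sense of the per-hour pricing model, here is a back-of-envelope estimate. The $0.10/hour rate and 10-hour session are assumptions for illustration, not Amazon’s actual figures from the time:

```shell
#!/bin/sh
# Per-hour billing: total cost = hourly rate * hours the instance ran.
RATE_CENTS=10   # assumed cents per instance-hour (illustrative, not real pricing)
HOURS=10        # one long sitting of testing
COST_CENTS=$(( RATE_CENTS * HOURS ))
printf 'Estimated cost: $%d.%02d\n' $(( COST_CENTS / 100 )) $(( COST_CENTS % 100 ))
```

At a rate like that, a server that only exists for ten hours costs about a dollar, which is how you end up with a title like the one above.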