Thursday, April 17, 2014

SQL Saturday #295 Las Vegas wrapup

(written while disconnected in the southern desert of Utah on April 8th)


This last weekend found me travelling to Las Vegas to help celebrate its first SQL Saturday ever. The team of Stacia Misner, Jason Brimhall, and a slew of others stood up a great event that drew a decent opening turnout as well as great support from local, regional, and remote speakers.

As it is only a few hours (6) from my home, we decided to go as a family, spend some time in Vegas, then start a Spring Break week vacation afterwards. Various options were discussed in my family, and it was decided that we would go camping and dirt biking after an extra bit of time in Vegas. This required us to load up and haul our camper somewhere. We chose to camp and bike in St. George, which is partway to Las Vegas. We found a location where we could drop off our trailer and leave it for a few days while we spent time in Las Vegas.

The fateful day came when we left to make our way south. With the trailer loaded and ready we took off around 10:30 am, with 2 planned stops that would consume an hour or so, extending the trip from 6 hours to about 7 hours.

The first stop occurred as planned, and I even saved $20 I wasn't planning on. Yeah, it's always nice to save money. Around 11:30 am we left that stop and headed south. We passed a town called Nephi that is an hour plus from our home. Once you pass this town, there is a large gap between towns that usually means you are really on your way. The trip underway, we got excited to be going. It was about this time that I heard a small explosion from the left side of our vehicle and was alerted to flying debris in the mirror. Having never experienced a blowout on a tire, I was surprised at how I simply knew what had happened. Blowout. Yeah. Less than 200 miles from our home, at the beginning of the trip, boom. I pulled over semi-gracefully and got stopped to inspect the damage. As I exited the vehicle, I grabbed my Nexus 5 cell phone so I could take pictures, if needed. As I exited, the phone slipped from my hand and landed screen down on the pavement. It looked fine, but I could tell it had just added to the cost we had already had shoved down our throats with the blowout. Sure enough, the screen was cracked before I could even go see the damage behind us. It turns out that only 1 of the 4 trailer tires blew, which explains why it was easy to pull over after I saw the tire pieces in my mirror escaping their previously contained existence.

After several phone calls, we found a tire dealer that could send out help. When Ben arrived, he was able to pull the borked tire off our trailer. While he was there, I inquired as to his expert opinion concerning the other tires. The prognosis was rather negative, and while he returned to his shop to fix the 1 tire, we deliberated about what we should do. Several options lay in front of us: replace the single tire, replace the other partner tire on that side, replace all 4, take the trailer back home, continue on with some or all tires replaced, and so on. Ugghh. When Ben returned with a brand new tire, we were able to get it mounted easily. We had decided to return to Nephi and get all 4 tires replaced. The tires were from 2007, and though they didn't have a ton of wear and actually had plenty of tread left, there was visible evidence of age and cracking. The time had come, we just hadn't planned on it. The tire store was amazing and got us fixed up really quickly and on the road again. 2.5 hours delayed, but on our way. Let the SQL Saturday trip begin, again.

Back on the road again, with stress levels high, we got to experience them even higher as the winds blew us all over the road for the next 4 hours. At some points during the trip I had to remove my hands from the wheel and shake them out from holding on so hard to counteract the pull of the wind gusts. Stress was a constant companion. Eventually we made it to St. George and were successful in dropping off the trailer at storage. Phase one of the trip was complete. On to Vegas, just hours and hours delayed.

We arrived in Vegas about 20 minutes after the speaker dinner started. I went in to the dinner to grab some keys to our hotel (SQL Solutions Group had a suite of rooms rented for us and had checked in already). I intended to grab and go, but ended up hanging out a bit to talk and say hi to #SQLFAMILY members before getting my family to the hotel. After dropping the kiddos off at the hotel, my wife and I returned to the dinner and had a great time catching up with folks. It is always fun to introduce my wife to the members of my other family. The dinner that was thrown for the speakers was delicious. The gifts given to speakers were spot on for the locale: a deck of cards with the SQL Saturday Las Vegas logo, a handful of similarly logoed poker chips, and a USB jump drive shaped like a poker chip. Turns out my deck has two 5s of clubs and no 4 of clubs. But #firstworldproblems aside, I love the cards and have already enjoyed playing card games with them with my family.

At the dinner, I was able to meet and talk with several of the volunteers that would be helping out in the morning. Questions were answered, tasks discussed, and times set to meet in the morning to get it going.
The next morning I awoke late, but was at the facility a little after 7am. I grabbed the signs and went back out to the various roads to put up the directional signs to help attendees find the facility. The morning in the desert was glorious: warm, wind blowing slightly, and bright blue skies. I ran around with my truck and the signs and the radio blaring, pounding the sign holders into the bedrock around the area. Once complete, I returned to the facility and said more "Hi's" to folks as well as started doing various volunteer tasks.

The morning got going with a couple of hiccups, mainly that the printing of SpeedPasses had some issues. The rooms were up on the third floor, and we found attendees initially had an issue getting around and finding the rooms. Once the layout was understood, folks had less of an issue with it. It turned out that the printed map had a typo that also added to this confusion. It was a problem, but not a large one, and once communicated out, it was overcome. The location of the vendors was not ideal and led to light foot traffic to visit them. This should be remedied next time if this facility is reused.

The location for lunch was ideal, as it was the largest room and also served as one of the session rooms. So once lunch arrived, attendees and speakers knew where it was and easily found their way there. Lunch had been catered, and the catering folks did a great job keeping the supply available as the attendees flowed through to get food. The food was good and probably one of the best lunch selections I have had at an event like this. There was a sponsor for lunch, which may have helped it be such good food as well as catered. I likey.

The facility was a bit odd, and one of the rooms had what were almost lounge chairs to sit in instead of the normal desk and office type chairs. The speaker stations were really high tech and had an iPad controller to select all the options for display. Initially I thought that this might prove to be a problem as each new speaker took the station, but it turned out not to be an issue, as the previous speaker had already selected the options and left it ready to go. The speaker ready room was large enough for a few speakers to get ready in, and small enough that the speakers were unable to all hang out there the entire day. This is a good thing, as it forced speakers to get out and around the event, instead of holing up in the speaker room. It was also centrally located, which made for a good flow. It's nice when the speaker room is not off in a far corner of the facility.

When the last session was completed, we all adjourned to the lunch room and got to listen to a sponsor session from Microsoft. This was a bit odd, and luckily it was not the entire half hour as previously mentioned. Since it was a large room, folks gathered in the back of the room and started talking. These conversations became loud enough that the attendees actually trying to listen started turning around and giving 'that look'. Truth be told, I was one of those talking and received 'that look'. Suffice it to say, the location in the big room, and the gathering of us all there, was not ideal for an after-session. Maybe it would have been better to let folks know to convene in the big room at 5:30, after the sponsor session was over, instead of at 5:00 to stand around talking. But it's never a perfect science.

After the sponsor session, the attendees were able to receive the large prizes from the sponsors. Each sponsor was given a moment to talk and give out their prizes. A lot of folks were able to receive some great prizes, and they all looked happy. Once the giveaways were completed, Stacia and Jason were able to sell the event for next time, as well as PASS and the local user group. Hopefully this will have instilled in the local populace the excitement that we have all felt, and will propel them to continue building these communities, networking with locals as well as others in the community, and helping themselves and their careers.

There was an after party, and though not everyone attends these, enough folks did that we were able to hang out a bit and ultimately say our goodbyes to the #SQLFAMILY members we did get to see. Hugs, handshakes and networking completed, we went our separate ways. Some returned to hotels, others went to the airport, and still others headed out to be entertained by the Vegas nightlife. Thus bringing to an end the event, the learning, and the networking. Another one in the bag.

Thanks to all organizers, volunteers, speakers and attendees. Let's hope that this is the start of many more to come.

Wednesday, April 02, 2014

SQL Saturday #295 in Las Vegas – April 5, 2014




Tuesday, April 01, 2014

SQL Saturday #279 Phoenix wrapup

This last Saturday I had the opportunity to go from 29-degree mornings to the Arizona desert and 70 degrees during the day. Looking at it that way, it wasn't much of a choice. The weather was nice and very enjoyable. Thanks for having it. Whoever was in charge of that weather, please send some my way.

But back to the event. I took off from Utah around lunchtime and flew down to Phoenix with Randy Knight and Jason Brimhall of SQL Solutions Group fame. We had a mostly uneventful flight down, with rough weather on both takeoff and landing. Landing in Phoenix with clothing on the bottom halves of my legs seemed like a bad idea and was remedied quickly. The evening found us at a baseball game with the rest of the speakers. Amy Lewis and crew were able to acquire tickets to the D-Backs vs Cubs game. We also got vouchers for food from the local offerings. After standing in line for 15 minutes and waiting for my food for a half hour, I had a decent burger with some incredible garlic fries. They tasted amazing. And hours, even days later, they still tasted great. Just kidding, but they did hang around for quite a while. I enjoyed them for most of the time that they were with me. The game was disappointing if you were a D-Backs fan, but it was still enjoyable, with some great plays and tense moments, as well as some great time to mingle and talk with all the folks attending the event. It is always like the first day of summer camp when you get to see all your friends and catch up with them, telling stories, comparing, sharing and simply enjoying each other's company. Mixing and talking with everyone, watching the game, repeating: I think we all had a blast. With the game over, folks broke up into groups and disappeared, the evening almost over and the next day looming.

Luckily I was able to return to the hotel and not have to dig into any demos or slides, as my presentations were ready to go. I know that some were not so fortunate and went back to do more work on session prep or even more real work. Saturday was rapidly approaching, and I am sure we all made it there in different ways. But gather again we did, Saturday morning at the event. More mixing and mingling and reuniting with old friends not seen the night before occurred. The morning started with some great breakfast snacks and a keynote in a large windowed room with garage door openings and massive fans. One can imagine the doors opened and the fans pushing the air around. However, since the weather was just nice and not insanely hot like Phoenix can be, the room was pleasant, with the blue skies glowing softly outside.

After the keynote, folks traveled to 3 different buildings on campus to choose from 11 distinct sessions. I was up for the first session hour and had the opportunity to speak on Release Management. The session time came and I only had 2 attendees, plus the room monitor who was forced to attend. We waited a bit past the start time and a few more folks came in, bringing the total up to 8. So an intimate session it was, with discussion and sharing, and me blathering on about my experiences and ideas on how to keep change from impacting us negatively in our production environments. I believe the session was well received, and I enjoyed giving it and interacting with the attendees.

More sessions came until the lunch hour. We reunited back in the large room with the glass and glass garage doors and massive fans. Pizza and pasta and salad were on the menu, as well as networking and storytelling. Food was consumed and connections made. SQL was discussed and plans were made.

Sessions resumed in the 3 locations and more learning was presumably acquired by the attendees and speakers. I got to listen to parts of 2 sessions in the afternoon, learned a few things, and need to follow up to learn more. I love being able to have access to such great knowledge and experience. Keying into the right people and making those contacts to forge even stronger relationships through learning is what these events are about for me. Sharing and learning.

The last hour of the event found me in a room with nearly 20 other folks discussing the gloriously exciting topic of Documentation. One would not think I could trick even my closest friends into coming to my session on documentation, let alone 20 strangers, but alas, this is what happened. After a lively discussion that strangely started out with an explanation of my name and ended with folks wanting copies of my documentation, I finished the session.

With the session ended and the final group session beginning, the attendees that were left were energized at the chance to receive presents. We went to the keynote room and swag was given out to a ton of folks. Books, licenses and more were given away.

Cleanup started, goodbyes were hugged out, and eventually attendees, speakers and volunteers finished and parted ways. Some headed to the airport, some to hotels, some back to their homes, and some went to the after party at a local watering hole. More networking and stories occurred, which eventually led to more hugging and departing friends. Some of us were talked into going to a Packers bar to witness the lovely Tim Mitchell and Kathi Kellenberger sing karaoke. After some decent performances and some, well, other performances, we were treated to our favorites singing 'Babe' to the packed bar. After that, more hugs and goodbyes. And it was over.

I enjoyed the venue. The keynote room was large and spacious, with views to the outside making it feel larger than it was. The speaker room was spacious and allowed folks to spread out and practice their craft. There was also a speaker lounge just outside the speaker room, with seats enough for a few more speakers to kick back and chat. One could wander in there and always find someone to talk with and scheme.

The session rooms were not all together, which I heard a few 'complaints' about. In quotes because they were not hearty complaints, more a discussion. But having them grouped as they were was helpful; one could go from one to the next easily. Once inside the rooms, even with the doors open, there was enough space and quiet to deliver the session well without outside interruptions. Having the sponsors on the way to the sessions, outside one of the buildings, was a great idea, letting folks chat up the sponsors without impacting any of the rooms or hallways. And being outside was glorious. The parking was plentiful and close to the venue. The venue was close to the freeway and easily accessible. I really have no complaints about this event, nor suggestions that would even help it be better next year. It was well done, the volunteers kept the machine running along well, and you could tell that they had had some practice in previous years; this year it just coasted along smoothly, giving everyone a great experience.

My thanks to the sponsors and volunteers that put the event on. Thanks to the speakers that came from near and far to help with the event. And a very special thanks to the attendees for giving up their Saturday to come out and get some SQL learning on.

Cya next year!!



Wednesday, March 26, 2014

SQL Saturday #279 in Phoenix – March 29, 2014


Tuesday, March 25, 2014

upgrade to downgrade and pull some hair out while you troubleshoot

I have a monitoring system that houses RedGate SQL Monitor, as well as home grown monitoring solutions. It is called VOO1DBMGR1. When this system was stood up, a version of SQL Server 2012 was used that shouldn't have been: an Enterprise Evaluation edition. It should have been the licensed Developer edition we bought for this purpose, but without going back in time, it's tough to change.

But change it needed. I needed to upgrade it, or downgrade it, as it were. We wanted to go from Enterprise to Developer on SQL Server 2012. A testing VM was stood up to help with the process.

After several tests on the testing VM that was configured for me to play with, I have installed and uninstalled and upgraded and removed SQL Server 2012 a bunch of times. After reading several blogs and articles, I found a path to upgrade (or downgrade, in this case). Once the evaluation period expired, things stopped working. SQL Server Management Studio, for example, stopped working on the box; it would error upon launch. One could continue to connect to it remotely via SSMS, but not locally. Also, when one shut off the SQL services on the box, one was unable to start them again without setting the date back into the distant past and then resetting the date once the services started up again. This proved to be a problem.

I discovered that once SQL Server 2012 was installed as an evaluation edition, I could do an edition upgrade with a properly keyed installation. We downloaded an ISO from MSDN with our key embedded in it, and I was able to use this media to upgrade (downgrade, actually) the edition. Since it was already Enterprise, and we wanted Developer, it was a downgrade, but the same process could be used. So you select upgrade, and it steps you through several more screens of information before performing its operations. When done, the edition has been changed. I even ran a PowerShell script the last time on the test system to determine that the service actually went down a couple of times, for a few seconds each, while it was upgrading.
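
That PowerShell script was nothing fancy. Here is a minimal sketch of the kind of loop I mean; the service name assumes a default instance, and the one-second polling interval is arbitrary.

```powershell
# Minimal sketch: poll the SQL Server service once a second and report status changes.
# Assumes a default instance (service name MSSQLSERVER); a named instance would be 'MSSQL$InstanceName'.
$serviceName = 'MSSQLSERVER'
$lastStatus  = $null

while ($true) {
    $status = (Get-Service -Name $serviceName).Status
    if ($status -ne $lastStatus) {
        "{0}  {1} is {2}" -f (Get-Date -Format 'HH:mm:ss'), $serviceName, $status
        $lastStatus = $status
    }
    Start-Sleep -Seconds 1
}
```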

So, it was time to perform this on VOO1DBMGR1.

I copied over the same MSDN ISO we used in the testing environment and ran it. Since the key is embedded, when it comes to that screen, it’s there already, and not in ‘evaluation mode’ like the previous install was. I had the PowerShell script running elsewhere so I could watch the service status. As the upgrade proceeded, I saw via the PowerShell script that the service went down. But it wasn't for a few seconds. It simply kept reporting that it was down. When I looked at the upgrade process screen, it was simply working. Not locked. Not ‘Not Responding’. Just doing. Going. Working. But no real response from it.

I waited.

Then I looked at the Windows application logs and saw that an attempt to start the SQL service had been made, and had failed. The reason? ‘SQL Server evaluation period has expired.’ But that should have been taken care of with the upgrade, no? I believe so. That’s what happened in the test environment. But not here. 3 attempts so far. I looked into the ErrorLog of SQL Server itself, and it had a typical start-up sequence, but then the same error about the evaluation period expiring.
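
If you want to pull those failed start attempts without clicking through Event Viewer, something like this works. Just a sketch; the provider name assumes a default instance writing to the Application log as MSSQLSERVER.

```powershell
# Sketch: grab the most recent Application log entries written by the SQL Server service.
# Assumes a default instance, which logs under the source/provider name 'MSSQLSERVER'.
Get-WinEvent -FilterHashtable @{ LogName = 'Application'; ProviderName = 'MSSQLSERVER' } -MaxEvents 20 |
    Select-Object TimeCreated, LevelDisplayName, Message |
    Format-Table -AutoSize -Wrap
```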

Hum.

In the past, when the service didn't start, I simply set the date back to 3/1/2012. And voila, the services would start up, and then I was free to reset the date. It made me feel cheap, but I did it anyway. It’s for the greater good. So, while the upgrade process was obviously stuck in a loop, I gritted my teeth and changed the date.
I was prompted with some odd screen that said it couldn't complete an operation and asked if I would like to retry. I said yes. It asked again. I said retry. It asked yet again, and in a fit of madness, I answered the same way. The next time it asked, I canceled the screen. At which point I was presented with the upgrade utility showing all greens across the board, letting me know that it had successfully completed its activity. I beg to differ, but defer to its better judgment.

I looked for my SQL service, and it was up and running. I logged in with SSMS (an act I had been prohibited from doing for oh too long on this machine) and was successful. Once in SSMS, I was able to confirm that the upgrade had actually performed its operation and the edition was as expected.
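
The confirmation itself is a one-liner against SERVERPROPERTY. A sketch, assuming Invoke-Sqlcmd is available (it ships with the SQL Server PowerShell bits) and a default instance on VOO1DBMGR1:

```powershell
# Sketch: confirm the edition and version after the change.
# Assumes Invoke-Sqlcmd (SQLPS / SqlServer module) and a default instance on VOO1DBMGR1.
$query = @"
SELECT SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion;
"@

Invoke-Sqlcmd -ServerInstance 'VOO1DBMGR1' -Query $query
```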

I set the date back to today, to shake off those cobwebs of uncertainty. I then turned off the SQL service, without the date being set back to 2012, and found that the services were now able to be turned on and off at will, without getting dirty.

All seemed well.

Then I tried to test something from my machine, and I encountered a slew of errors. Were these related? At first, one would think so. But after some breathing exercises and several tests, along with a reboot of my machine, all was well again.

All is well.


Short version.

VOO1DBMGR1, our trusty monitoring server, is now fully licensed and running the proper edition. SSMS is once again able to run on the box. Remote connections are functioning as expected. Monitoring systems are up and running. And the SQL services behave well now, which will allow them to be shut down for random maintenance.


Wednesday, February 19, 2014

Release Management : planning meetings

After avoiding the obvious for a while now, we have decided to institute recurring planning meetings to cover only Release Management tasks. Heretofore we allowed these conversations to occur naturally, when needed, and with whomever needed to be present. This usually involved someone in the development arena talking to someone who would push code changes into production. Few others were involved. And, to their detriment, others were not always included.

The good of this method of communicating a 'need to change' is that it is organic, unplanned, skips meetings (everyone hates meetings), and has the randomness that is sometimes requisite to be agile and allow change to occur. A process was still followed, but it was executed at random times. This is a plus to those who want to stay light on their feet.

The bad is that this method forces change into a system that should, and could, take a bit more planning prior to execution. Regardless of how agile your development teams want to be, changing production is fraught with potential disaster. Small change, or even well choreographed large change, can impact a system minimally and leave no lasting scarring. However, rushed, ill-considered change can wreak havoc, and that is exactly the type of change an unplanned process tends to produce.

So a process was instituted, at least at the inception of the change request, so that several necessary tasks could flow upon knowledge of an impending change. If we stuck to the task list and performed the tasks satisfactorily, we were usually successful. Not always. But usually. Tweaks occur as time passes, and the process is tightened.

Fast forward to today. We now realize that this method of sharing knowledge with a select few was detrimental to others in the organization. Some poor business analyst on the north end of the building had no idea that one of his favorite tables just suffered a drastic change, and the fields he commonly referred to in his reporting were just irrevocably altered, with no forewarning of said changes. No one thought to let him know, and his reports now suffer. Sad. But no one knew. Well, someone found out, eventually, but to the chagrin of those presenting the information to others, probably some executive in a plush conference room. Oops.

So we now have instituted a recurring meeting (uugghh, peeling the skin from my face would be a better use of my time...). In this meeting, we invite many. Hopefully all. But for now, many folks. These folks were chosen for their potential interest in change to our production system. They now have the option to attend a brief meeting and hear discussion of potential changes to the production system. Here is a forum in which they can ask why, when, and how. Conversations can begin here and continue to the satisfaction of all parties. Plans will be made as to when the change will occur. Bartering as to how this change can be introduced with the least impact will ensue. Parties will be informed, knowledge shared, and life will move forward.

The changes will still involve a select few. The process to perform the change, and even prepare for the change, will remain similar to before: tasks being accomplished, questions asked and answered, plans created, testing, and so on. But with this little recurring planning meeting, folks are informed. Change is much less drastic and caustic. Acceptance can begin much earlier in the process, and anything needing to be tweaked to allow and accept this change can be implemented much earlier as well. No more waiting for someone else to point out the flaw later, in a meeting, and hopefully not in front of the C-level folks you are trying to impress.

Start with small tweaks to your Release Management process. See how you can improve it. Add some oil here, change a gear there, and before you know it, the machine that drives and introduces change into your production topology will be so smooth you won't even hear it purring along gracefully.


Thursday, February 13, 2014

Be tenacious

A Release Management tale


Last night we performed a Release that affected a production database and a website. It was a fairly simple release, and we had done the same steps previously. So with a little effort, we prepared the Release Plan which contains the steps to be performed, and executed on those after hours. Within 15 minutes, all backups and snapshots and the like were done. Within a few more minutes, all new code had been successfully pushed out. Testing ensued and the Release was labeled a success.

We all finished up our tasks, our compares, our post snapshots, documentations, and so on. Emails were sent, and we logged off. All was well.

Until the morning.

The users, darn them, started using the website and noticing some issues. They complained. Those complaints reached our ears early in the morning, before most of us made it in to the office. So from comfy home office chairs, we logged in and started looking around. Email was the form of communication initially, but waiting for responses became burdensome, so a chat room was opened up in our internal IM product so we could talk more freely.

Initially, there were members of the troubleshooting team that wanted action. Something was broken, and it's only natural to want to fix it as quickly as possible. Especially since users were using it and seeing the issues. It's different at night when no one is online. Less pressure then. But now, in the morning, people are anxious, and that transfers rather quickly to the rest of us.

I had to say no. We are not just going to roll back. Just be patient.

Once we all gathered and started troubleshooting, we could dig into the why. What was happening. What we thought was happening. Reading logs. Watching processes. Watching memory. And so on. At one point we even said that it was not the database. And it was suggested that I could go back to my normal tasks. But I stuck around. I didn't feel confident that we knew what was going on, and even though I could show that the database was not under any duress, I stuck it out. I kept working on it. I helped, we all helped. Others were brought into the mix and their ideas were considered.

Fast forward. We still do not know what is happening, except that the IIS server comes under a lot of memory pressure, the site ceases to function, and once it all blows up, things start over and the site seems to work. We see this over and over. Users are in there. We are in there. All of us contributing, but there is still no smoking gun.

So I open Profiler and limit the trace to the particular database and web server that are having the issue. We capture everything that is happening on the db, which is a lot, and just cross our fingers. After a few more iterations of the memory consumption and release, I notice a repeating query in Profiler, just as all hell breaks loose. It's the last statement, seemingly, that was trying to execute. I grab it as is and attempt to run it raw. It gives a divide-by-zero error.

Divide by Zero!

What is this query doing? Does anyone recognize it? Does it have anything to do with what we pushed last night? Is data goofed? And other relevant questions were asked. After digging a bit, sure enough, deep in a view that was altered last night, a field was being used as a divisor, and it could be zero on occasion.

I hear a muffled 'Oops' escape the developer standing behind me. 'How did that get past testing?', he asks no one in particular. We discuss for a bit, come up with a solution, and make an alteration on the fly in production that fixes this little issue. After that, the query run raw was able to complete. And as soon as we made the change, we notice the memory consumption and explosion slow down.
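
For illustration only, the shape of that on-the-fly fix was the classic NULLIF guard inside the view. The view, column, server and database names below are made up, and the statement is just wrapped with Invoke-Sqlcmd to show it being applied; the guard is the point.

```powershell
# Illustration only: the kind of guard we applied. View, column, server and database names are made up.
# NULLIF(UnitCount, 0) returns NULL when UnitCount is 0, so the division yields NULL
# instead of raising a divide-by-zero error.
$fix = @"
ALTER VIEW dbo.vOrderSummary
AS
SELECT OrderId,
       TotalAmount / NULLIF(UnitCount, 0) AS AmountPerUnit
FROM   dbo.OrderDetail;
"@

Invoke-Sqlcmd -ServerInstance 'PRODSQL01' -Database 'WebAppDb' -Query $fix
```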

It didn't cease, but it did slow.

This gave us more time. More time to look deeper. We continued to watch the Profiler results. We continued to perform tests, and we continued to see the web server work for a bit, then struggle, then use all its memory, then flush everything and continue on as if it had a goldfish-sized memory. All's well now, let's go, seemingly forgetting that mere seconds ago it had used and flushed all its memory.

Another query started showing up as the last query executed just prior to the spike in memory usage. As I captured and executed this one manually, it too gave us an error. Something about a field it couldn't find in the db. Some field that looked like a valid field, yet it didn't exist. After pointing it out to the developer, he incredulously stammered something like 'where did that come from?'. It turns out that the staging environment had an extra field. This field was built into the middleware code that had been generated, and that code was now trying to do its thing against production, where no such field exists.

And the web server simply crashed.

Instead of throwing an error that was helpful, or logging that it got a bad result or no result or some error, it simply kept attempting the query, letting its memory consumption expand to biblical proportions, and come crashing down. Only to try again in a few minutes, as if it had no memory of the preceding few minutes.

So now we fix it.

Now we know what is causing it. And the quickest route to fixing it is to roll back. Roll back all the changes and the site should work like it did yesterday. Not like an Alzheimer's patient continually performing the same unsuccessful task. Roll back the code.

The point here is that, more than half this story ago, you will recall that was the suggestion. Roll it back. But that suggestion was in the heat of the moment. Something was broken. We changed something. Roll it back. If we had done this, then the 2 pieces of code that were hiding well within would never have been found or fixed. Dev would have refactored the release, we would have performed it again on another day, probably tested a lot better, and found the same result: something not working right, and we would have had no idea what.

So it took us a few hours. So it was frustrating. So the users were unable to use the site for a bit. With sufficient communication, we let the users know. They were placated. With some time, we dug and dug and discussed, and tried, and ultimately found the culprits. Silly little things bugs are. Scampering here and there, just out of the corners of your eyes. But havoc is being caused, until they are eradicated.

I am happy that the end result was more knowledge, time spent in the forge of troubleshooting, and an actual cause for the problem, instead of a quick-acting rollback that would have reverted us to a known state while ultimately hiding the problem.

It's the unknown that kills you. Or if not kills you, at least puts a damper on your day.

Be patient. Be thorough. Be smart. Be tenacious.