Hope there are some SQL gurus out there who can help me out -
I am noticing a weird phenomenon when profiling a database (MS SQL 2005). For every transaction I start in an application, I am explicitly setting the isolation level to read uncommitted. 99% of the time, I will see "set transaction isolation level read uncommitted" in the profiler at the beginning of that transaction, which is what I expect.
Every now and then though, I'll see it being set to "read committed". It is not being set to that anywhere in my code, and the SAME query run more than once can show this behavior. Is it possible for the isolation level to be overridden? Why would it only happen now and then on different queries every time?
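For reference, the pattern is roughly this (a simplified sketch, not the actual application code; the sys.dm_exec_sessions check at the end is just one way to see what the connection is really using):

-- Explicitly set the isolation level for this connection
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

BEGIN TRANSACTION;
-- ... application queries run here ...
COMMIT TRANSACTION;

-- Sanity check: what isolation level is this session actually on?
-- (1 = read uncommitted, 2 = read committed)
SELECT transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;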
Only one of the isolation level options can be set at a time, and it remains set for that connection until it is explicitly changed. All read operations performed within the transaction operate under the rules for the specified isolation level unless a table hint in the FROM clause of a statement specifies different locking or versioning behavior for a table.
That's taken from this page, but the emphasis is mine. I've only worked with Oracle, not SQL Server, but as soon as I read that, the first thing I thought was "oh noes hints." I can't count the number of times I've been scratching my head, trying to figure out why the hell a part of an Oracle-backed application I'm supporting is performing poorly, only to find it's due to some misbegotten hint that's forcing the DB to do something in a suboptimal way.
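From what I can tell, the same thing applies in SQL Server: a hint in the FROM clause silently wins over the session's isolation level for that one table. A minimal sketch (the table name is made up):

-- Session is explicitly set to read uncommitted...
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- ...but this statement still reads Orders under read committed,
-- because the table hint overrides the session setting for that table
SELECT OrderID, Status
FROM Orders WITH (READCOMMITTED);

Whether that's exactly what your profiler trace is picking up I can't say, but it might be worth grepping the codebase (and any stored procedures) for hints.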