Browsing Posts tagged MicrosoftSQLServer

Bob here.  Database performance is a fun topic.  “I’m running reports overnight.  They take 3 hours!”  Not necessarily the end of the world (still waiting for me in the morning).  “There’s a customer on our web site.  It takes 15 seconds to see their order status.”  Ok, now we’ve got a problem.

“Sooooo…let’s buy new hardware.”  That’s the quickest, and often the easiest, sell to management (in terms of justifying the expense).  Because more…more CPU, more memory, more disk…that’s going to fix it, right?

Not necessarily.  If you’re not measuring what was actually happening during those 15 seconds, then you don’t know why it took so long.  That’s why we have Performance Counters.  Metrics we can get (in real time, if needed) to tell us what’s actually going on.  Why it took 15 seconds (or 3 hours, for that matter).  And the conclusions we can come to from these metrics?

Two words:

It depends

Read on!



This article excerpt, by Robert L. Davis, originally appeared here: http://bit.ly/1ryLksn
When you’re working through some vague performance issues (e.g., “SQL Server seems slow today”), one of the common things to do is to collect some performance counters. If you are collecting performance counters for the first time on the server, you don’t have anything to compare them against. This usually leads to searching the web for resources that will tell you what the counter values should be. And sadly, it quite often ends with either misleading advice or with a disappointing message of “it depends.”
There are several reasons why we say that the target values for performance counters depend, and why we say that you need to baseline your systems.
• Workloads and server configurations vary wildly.
• Workloads change.
• SQL Server environments are constantly evolving.
Baselining
So we tell you to baseline so that you know what your system generally looks like when it’s healthy. If you are baselining, you can compare your performance counters to last week’s numbers, or last month’s, or even last year’s. But if you’re not already baselining and you have an emergent issue that you need to investigate right now, this advice is not going to help. Most of the performance counters are not going to be very useful, and you will need to dig into the current activity on the server. Look for obvious things like blocking and extremely high degrees of parallelism. Look at the wait statistics for the currently active requests and try to determine whether there is an issue with memory, CPU utilization, or other bottlenecks.
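As a rough sketch of that “dig into current activity” step, a query against SQL Server’s live DMVs might look like this (the DMVs are standard; the session-id filter is just a common heuristic for skipping system sessions):

```sql
-- Snapshot of currently executing requests: what each one is waiting on,
-- and whether something is blocking it.
SELECT
    r.session_id,
    r.status,
    r.wait_type,            -- current wait (NULL if not waiting)
    r.wait_time,            -- ms spent in the current wait
    r.blocking_session_id,  -- non-zero means this request is blocked
    r.cpu_time,
    t.text AS query_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50     -- heuristic: skip system sessions
ORDER BY r.wait_time DESC;
```

Run it a few times while the problem is happening; a wait type or a blocking session that shows up consistently is a far better lead than any single counter value.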

Doug here.  The article below offers some highly relevant advice, with a few caveats.

Performance tuning is a necessity these days but is difficult to achieve for most SQL Server developers.  The reasons vary, but typically involve:

  • Lack of permissions to utilize tuning tools (often reserved for DBAs only)
  • Lack of knowledge of performance tuning methodologies (as opposed to techniques)
  • Lack of availability of a DBA, or someone with the permissions and knowledge required for success
  • Inefficient code being generated by their development environment
  • Lack of understanding of the computing environment, application internals, and normal vs. non-normal workloads
  • Lack of experience overcoming the above-mentioned roadblocks

Performance tuning is often a classic catch-22.  The developers (hopefully) understand the application, and the DBA understands performance tuning; it is therefore by necessity a collaborative effort.  But each party has their own priorities, and often the DBA’s manager is unwilling to devote a staff member to solving ‘someone else’s problem’.  Relevant parties are also often less than motivated to share what they know, for fear of exposing ‘soft spots’ within their environment or operations.

That’s where an experienced SQL Server professional can assist.  They usually have both the developer and DBA experience required to understand ‘both sides’ of the issue.  They also should have skills in crossing functional boundaries in an organization and can bring the relevant issues together for analysis and resolution.

At Simplex-IT we are often asked by clients to ‘take a look at’ a performance issue that is hampering their production environment.  In nearly every case it involves overcoming the typical roadblocks shown above as well as utilizing related skills in networking, server configuration, people management, etc.

However, developers can perform ‘Level 1’ performance tuning by focusing on the basics of data retrieval:

  • Retrieve only what is needed at that moment
  • Ensure indexes exist and are used by the application

Both of those items are typically well within the knowledge domain of the developer.
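Both checks can be sketched in T-SQL.  The table, column, and parameter names below are hypothetical; the index-usage DMV is standard:

```sql
-- 1) Retrieve only what is needed at that moment:
--    name the columns and filter the rows, never SELECT *.
--    (dbo.Orders and @CustomerID are hypothetical.)
SELECT OrderID, OrderStatus
FROM dbo.Orders
WHERE CustomerID = @CustomerID;

-- 2) Check whether existing indexes are actually being used.
--    Counters in this DMV reset when the instance restarts.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID();
```

An index with high user_updates but near-zero seeks and scans is costing writes without helping reads, which is exactly the kind of finding a developer can act on without DBA-level tooling.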

After those items are addressed, engage a competent consultant to supply the skills that may be lacking in the situation.  And learn from them.  Many performance issues are caused by the same predictable recipe, and can be overcome with some solid documentation, sound design patterns and common sense.



This article excerpt, by Andrew Pruski, originally appeared here: http://bit.ly/1vusVxr
Performance tuning often gets called an art as people feel that a certain knack or innate talent comes into play. And while I don’t disagree that a certain level of knowledge is involved, I completely disagree that only certain people can performance tune.
Given the correct approach, anyone should be able to learn to effectively performance tune.
But how should performance tuning be approached? I’d like to take a step back from delving into specifics and define the process of performance tuning, a set of guidelines that can be used in any circumstance.
Performance tuning should follow the scientific method, which is defined as: “A set of principles and procedures for the systematic pursuit of knowledge involving the recognition and formulation of a problem, the collection of data through observation and experimentation and the formulation and testing of hypotheses.”
In practice, this can be broken down into the following steps:
• Observation
• Declaration of the end goal or issue
• Forming a hypothesis
• Testing the hypothesis
• Analysis of results
• Conclusion
• Further research
This way of approaching performance tuning comes into its own, particularly when investigation is required to respond to issues in a live environment. The trick is to follow the method and not to go with “gut” feelings or guesses. Stick to what you know.
For example, your production server has started responding slowly. Queries are taking longer than expected to complete. This could be due to memory pressure, but you are not sure. It would be better to start off by declaring “Performance of the server is poor,” which you know to be 100 percent correct, rather than declaring “Performance of the server is poor due to memory pressure.”
So the steps to follow would be:
• Observation: Queries on production running for longer than expected
• Declaration: Performance of the server is poor
• Hypothesis: The poor performance of the server is due to memory pressure
• Testing: Running an extended events session on the server to catch incoming queries
• Analysis: Several queries performing scans of large tables
• Conclusion: Creation of a covering non-clustered index to prevent the table scans
• Further Research: Are there any other queries being executed that are performing table scans?
These steps may seem obvious, but using the scientific method will prevent you from being led down blind alleys and potentially missing the actual problem. It provides a structure for the investigation.
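The Testing and Conclusion steps from that example might look roughly like this in T-SQL (the session name, table, and columns are all hypothetical; the syntax is standard Extended Events and index DDL):

```sql
-- Testing: a minimal Extended Events session to capture completed batches.
CREATE EVENT SESSION [CaptureQueries] ON SERVER
ADD EVENT sqlserver.sql_batch_completed
ADD TARGET package0.event_file (SET filename = N'CaptureQueries.xel');

ALTER EVENT SESSION [CaptureQueries] ON SERVER STATE = START;

-- Conclusion: a covering non-clustered index so the offending query can
-- seek instead of scanning the whole table.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate, OrderStatus);
```

Note that the index only becomes the conclusion after the captured queries confirm the scans; creating it first would be exactly the kind of “gut feeling” fix the method warns against.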

Doug here.  Health Checks on your SQL Servers are an easy, proactive way to determine whether your data is safe and secure.  Data is the lifeblood of business, and keeping yours protected should be a priority.

Recently we performed a routine SQL Server Health Check for a mid-sized client and uncovered a number of surprises.  The system was installed by a 3rd party application vendor and served as the back end to their productivity application.  It was running slower than expected, which justified a closer look.

During the Health Check we discovered a number of configuration, security and availability shortcomings.  Some of the configuration settings were misaligned.  Access to restricted data had been granted to the entire user base rather than the select few authorized to see it.  The backup jobs were still running against the test system rather than the production system.  Also, some critical maintenance jobs were failing silently, creating a false sense of security.

These issues were adversely impacting the overall solution and putting the client’s data at risk.  Once identified, they were easily corrected.  At that point the client’s data was actually in the state they desired – secure, available and protected.
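Two of those findings (stale backups and silently failing jobs) can be spotted with simple queries against the standard msdb history tables.  A rough sketch:

```sql
-- When was each database last fully backed up?
-- (backupset type 'D' = full database backup.)
SELECT d.name,
       MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
  ON b.database_name = d.name AND b.type = 'D'
GROUP BY d.name
ORDER BY last_full_backup;   -- NULLs (never backed up) sort first

-- Agent jobs whose most recent outcome was a failure.
-- (step_id 0 is the job outcome row; run_status 0 = failed.)
SELECT j.name, h.run_date, h.run_time
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobhistory AS h ON h.job_id = j.job_id
WHERE h.step_id = 0 AND h.run_status = 0;
```

Neither query proves the backups would restore (only a test restore does that), but either one would have flagged the problems above in minutes.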


 
This article excerpt, by Andy McDermid, originally appeared here: http://bit.ly/1lSCAFj
 
Halfway through the management of a crisis is not the time to discover that your disaster recovery plan involves crossed fingers. To avoid that kind of drama, a formal SQL Server health check should be considered routine, required and responsible.
 
Basic Database Health
A comprehensive SQL Server health check provides you with a better understanding of your complex database environment, and delivers the answers to the questions you need to ask:
 
• Is our data accessible, available and recoverable?

• Is our SQL Server stable? Is the hardware able to manage current demand for the data it provides?

• Are there any immediate security concerns?

• Is our SQL Server properly configured? Are memory settings appropriate?

• Is our current hardware and software infrastructure sufficient to support our current database growth patterns?
 
The answers to those questions provide the kind of baseline data that is critical if your team is going to be able to support your ongoing database needs. It’s equally important to their understanding and support of the day-to-day operation and maintenance of these systems.
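A few of those baseline answers can be pulled straight from the server.  For example, the memory and parallelism settings live in a standard catalog view (the list of setting names below is just a common starting point, not an exhaustive checklist):

```sql
-- Server-level settings relevant to "is our SQL Server properly configured?"
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN ('max server memory (MB)',
               'min server memory (MB)',
               'max degree of parallelism',
               'cost threshold for parallelism');
```

A ‘max server memory (MB)’ still sitting at its unlimited default on a shared server is one of the most common misalignments a health check turns up.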
 
Advanced Database Care
Building on that baseline, consider whether your organization can confidently respond to the following questions:
 
• What SQL Server maintenance should be completed within the next 30 days?

• Is there a regular maintenance schedule that allows for security patches to be applied and server upgrades to be performed?

• Are our databases performing efficiently? Have indexes been implemented? Is there index fragmentation? Are there duplicate or unused indexes?

• Is there a predetermined set of best practices being followed for database design and management?

• Has a disaster recovery plan been implemented? Does it adequately support the needs of your organization? (Consider the maximum acceptable time you could be without access to your data, as well as the legal liability and potential consequences if it is unavailable for recovery.)

• Does our team have the necessary expertise?
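The index-health questions in particular are answerable with the standard fragmentation DMV.  A sketch (the page-count cutoff is just an illustrative threshold for ignoring indexes too small to matter):

```sql
-- Fragmentation per index; LIMITED mode keeps the scan cheap.
SELECT OBJECT_NAME(ps.object_id) AS table_name,
       i.name                    AS index_name,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE ps.page_count > 1000       -- ignore tiny indexes
ORDER BY ps.avg_fragmentation_in_percent DESC;
```

The commonly cited rules of thumb are to reorganize at moderate fragmentation and rebuild at heavy fragmentation, but as with everything here, the right thresholds depend on your workload.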
 
Database Diagnostics
Even beyond the obvious benefits of being educated and prepared, there is an additional dimension to consider regarding the management of SQL Servers. As an organization you expect excellence from your team, and that means equipping them with the tools and information they need to perform competently and confidently.
 
Don’t worry if this feels overwhelming. An ounce of prevention is a lot less expensive than a pound of cure.

Bob here.  We do a lot of work with SQL.  In fact that’s our primary role with several of our customers.  Maintaining, troubleshooting, all that good stuff.  And one thing is absolutely clear:

A lot of organizations don’t know how to measure/maintain/troubleshoot SQL Server performance.  And it’s understandable.  It can be a complicated configuration, one with many pieces and parts.  And if any one of them isn’t optimally configured…stuff slows down to a crawl.

And the first thought is “let’s add hardware!”  And although that usually will improve performance, it often masks what the real problem is.  Poorly formed queries, outdated statistics, and server-level configurations are just some of the examples.  We’ve seen many examples where tweaking just one aspect of a database results in a tenfold performance improvement.

But you need to have a solid understanding of what’s going on under the hood.  You have to have a baseline measurement.  And here are some tools that are built in that can help.
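One of those built-in tools is the wait statistics DMV.  A baseline can be as simple as capturing this snapshot on a schedule and comparing consecutive captures (the counters are cumulative since the last restart, so it’s the deltas that matter):

```sql
-- Cumulative wait statistics since the instance last started.
-- Capture periodically; the difference between snapshots shows what
-- the server actually spent its time waiting on during that window.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms   -- portion spent waiting for a CPU after the resource freed up
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0
ORDER BY wait_time_ms DESC;
```

With a few weeks of snapshots on file, “is this wait profile normal for us?” becomes a question you can actually answer.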

If you’re not seeing the performance from your SQL environment, contact us here at Simplex-IT.  We can help.



This article excerpt, by Dan Popescu, originally appeared here: http://bit.ly/1ja70CH
Using the SQL Profiler trace log – this powerful utility is one of the best for troubleshooting MS SQL Server performance. SQL Profiler is used to monitor all activity running on the server. With this tool you can analyze data before bottlenecks form, making it easier to determine the root cause. It is also useful for troubleshooting slow queries and responses, and for identifying bad application configurations. If your server handles many calculations, it is suggested that the Index Tuning Wizard be used. The Index Tuning Wizard optimizes SQL queries, determines whether indexes are used properly, and can also be used to increase the overall performance of the SQL Server.
Blocker script output – a mechanism that, once configured properly, can be used to easily detect blocking or troubleshoot performance problems. Data gathered from blocker scripts can be fed into monitoring software for better analysis and troubleshooting. We won’t cover how to implement blocker scripts here, because that can be quite complicated and could require an entirely separate article.

SQL Server Performance Monitor log – the performance tool available with MS SQL.
Using it you can monitor the performance counters behind common bottlenecks: memory and CPU usage, blocking, and so on. To troubleshoot the server’s performance, enable the relevant performance counters. Data can be gathered on the local SQL Server or sent to a monitoring tool on a remote machine. Make sure the following objects are included in the analysis (list taken from Microsoft’s website): Paging file, Process, Processor, All SQL Server counters, Memory, Threads, Logical disk, Physical disk and System.
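As an aside, many of the same counters Performance Monitor collects are also exposed inside SQL Server through a standard DMV, which is handy when you can’t attach a remote collector (the three counter names below are just common starting points):

```sql
-- SQL Server's own performance counters, queryable from T-SQL.
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page life expectancy',
                       'Batch Requests/sec',
                       'Buffer cache hit ratio');
```

Note that some of these counters are ratios or per-second deltas that require combining two rows or two samples to interpret correctly, which is one more reason a single glance at a counter rarely beats a proper baseline.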