Technical Blog Articles


Recently, after assessing a client’s backup statistics, we discovered that over 20% of their database instances had not been backed up. Before configuring backups for so many instances, we wanted to determine just how many databases were actually in active use.

After investigating options, we developed the following query to detect database inactivity. The query uses the dynamic management view sys.dm_db_index_usage_stats, which returns the last access time for every index (including heaps). From this, we can determine the last access time for each database. Any database whose last access date falls beyond an agreed-upon threshold (e.g., older than one week) can be flagged for follow-up.

One Drawback: Last access statistics are reset whenever the SQL Server service restarts, so this query is not useful for detecting database activity on servers that restart frequently. However, the query also returns the server restart date, which you can use to flag those restarts for further analysis.
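The original query is not shown, but the approach described above can be sketched in T-SQL as follows. This is a minimal sketch, not the author’s actual query: the one-week threshold and the filter that skips the system databases are assumptions.

```sql
-- Sketch: last access time per database, from index usage stats.
-- Note: sys.dm_db_index_usage_stats is reset on service restart,
-- so the server start time is returned alongside for context.
SELECT
    DB_NAME(u.database_id) AS database_name,
    MAX(x.last_access)     AS last_access,
    (SELECT sqlserver_start_time FROM sys.dm_os_sys_info) AS server_restart_time
FROM sys.dm_db_index_usage_stats AS u
CROSS APPLY (VALUES
    (u.last_user_seek),
    (u.last_user_scan),
    (u.last_user_lookup),
    (u.last_user_update)) AS x(last_access)
WHERE u.database_id > 4  -- assumption: skip system databases
GROUP BY u.database_id
HAVING MAX(x.last_access) < DATEADD(WEEK, -1, GETDATE())  -- assumed threshold
ORDER BY last_access;
```

Databases with no rows in the DMV at all (never touched since the last restart) would also need to be checked, e.g., by left-joining against sys.databases.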


Are you wondering how long before you run out of disk space?  As a database services provider, we are frequently asked to monitor disk space usage across large multi-platform environments.  Seeking to provide a proactive method to detect and predict disk space usage before overages occur, we developed a PowerShell script to automatically gather those disk space usage statistics across an entire environment. The data is then available for alerting, reporting, and historical trending, so that you can forecast, plan, and add disk space before issues occur.

The PowerShell script is called by a Windows batch script, scheduled to run on a nightly basis.  The PowerShell script reads through a prepared table of server names and issues a series of commands to:

– Detect (valid) disks attached to each server

– Pull back total space and free space from each disk, and compute used space.

The script then calls a stored procedure to store the data for alerting and reporting.
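The core of the collection loop can be sketched in PowerShell as follows. This is a simplified sketch, not the production script: the server list is read from a text file rather than the prepared table, the file path is hypothetical, and the call to the stored procedure that persists the data is omitted.

```powershell
# Sketch: gather disk space statistics across a list of servers.
# Assumption: server names in a plain text file, one per line.
$servers = Get-Content -Path 'C:\scripts\servers.txt'

foreach ($server in $servers) {
    # DriveType = 3 restricts results to local fixed disks,
    # filtering out CD-ROM drives and mapped network shares.
    Get-CimInstance -ClassName Win32_LogicalDisk `
                    -ComputerName $server `
                    -Filter 'DriveType = 3' |
        ForEach-Object {
            [pscustomobject]@{
                Server  = $server
                Drive   = $_.DeviceID
                TotalGB = [math]::Round($_.Size / 1GB, 2)
                FreeGB  = [math]::Round($_.FreeSpace / 1GB, 2)
                UsedGB  = [math]::Round(($_.Size - $_.FreeSpace) / 1GB, 2)
            }
        }
}
```

In the production version, each result row would be passed to the stored procedure for storage, making the history available for alerting, reporting, and trending.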



On Saturday, August 17, 2013, John Abrams hosted a speaker session at SQL Saturday #235 in New York City. In case you missed it, or just want to revisit our presentation, we’ve posted it here.

As every DBA knows, the one question you want to be able to answer affirmatively is “Can you recover that data?” Monitoring is critical, but monitoring methods can be imperfect. Traditional methods are difficult to set up and maintain across your entire environment, resulting in incomplete monitoring and missed alerts, so it’s difficult to be sure of your answer to that all-important question. This presentation will:

• Show you how to implement a better way to monitor your database environment that is more efficient, easier to maintain, and guarantees that you never miss an alert.

• Share the methodology, framework, and key syntax, so that you are certain the databases you are responsible for are always up, always backed up, and never run out of disk or file space, and your answer to that all-important question is always YES!


Continually evolving businesses (e.g., entering new markets, mergers, and acquisitions), coupled with a high DBA turnover rate, can quickly result in an inefficient database environment: you could be backing up defunct databases or failing to back up active ones, under- or over-utilizing server space, and, as a result, spending too much in licensing and operating costs.

As IT organizations search for new ways to reduce costs, more and more organizations are looking to database consolidation efforts to improve efficiency and reduce infrastructure operating costs. 

Database Consolidation: A Smart Way to Reduce Costs: Database consolidation is a smart way to reduce costs and optimize resources. Reducing the number of databases and spreading active databases over fewer servers allows you to save money by reducing software licensing costs (e.g., Oracle, SQL Server, operating systems), and to realize further savings by reducing energy needs and the resources needed for support.

But as more and more organizations run with leaner staffs, they often lack the time to assess and plan a large-scale consolidation effort.


As a cost center, IT Departments frequently are charged with reducing bottom line costs and improving efficiency, while facing the challenge of continuing to meet business objectives and competitive SLAs.  When thoughtfully planned out and executed, server consolidation can be the answer to help improve efficiency and reduce infrastructure operating costs.

What Is Server Consolidation? Server consolidation is the process of condensing applications, processes, management, and servers onto fewer physical and/or virtual devices and locations.

Why Consolidate?  Realize a Significant Return on Investment:  Consolidating servers creates a more efficient use of hardware and can reduce the number of operating system software licenses and resources needed to manage the environment, thus reducing operating costs and potentially increasing system utilization.  These actions create power savings, increase efficiency and can reduce your carbon footprint, all of which can be very easy to justify financially.

If executed thoughtfully, consolidation almost always results in a positive return on investment.  You can realize both immediate and long-term software license and hardware costs reductions that far outweigh the cost of the time and effort spent to plan and implement your consolidation plan.

However, without a proper plan in place, you can invest in inefficient or incompatible technology, make mistakes that lead to performance degradation or catastrophic system failures, and end up spending costly hours on repairs.

Whether this is your organization’s first attempt or 21st, we’ve laid out a few tips to help guide you through the process.


Universities and higher education institutions around the country are constantly faced with new and challenging issues in the ever-changing world of technology and data management. Developing an organizational model that accommodates the changing IT environment and facilitates openness is necessary to meet the demands of increasingly complex federal, state, and local oversight of these institutions, along with continually evolving IT needs and the staff that supports them.

Retiring Old and Complex Data Processes

As new technologies emerge that promise cost savings and improved overall efficiency, IT departments at large universities are challenged with consolidating and upgrading their infrastructure while minimizing the operational impact the changes will have on the institution and its current platform.

Government and State Legislation Creates Need for Efficient Source of Data

Aside from emerging technologies, changes in state legislation and civil law throughout the nation have increased the burden on state universities to produce records in a timely manner. There is a need for a cost-efficient common source of shared data; existing systems may lack a coordinated approach, resulting in inconsistent compliance and widely varying records retention practices.

Government agencies monitor activities, and penalties for violating laws and regulations are often severe.  Universities must emphasize compliance with these laws and regulations. Compliance relies on comprehensive student information systems that manage the student lifecycle from recruiting and admissions, through student services and alumni relations.