What is the WSS_Logging Database?

 

What is the WSS_Logging database?

I was recently at SharePoint Saturday in Bend, and one of the people who attended my session asked a question afterwards that I answered, but I don’t think I did the response enough justice. The question I was asked was “What is the WSS_Logging database?” After talking with him, the underlying problem presented itself: his WSS_Logging database was getting rather large. So I explained to him what the database is used for. In general terms it is used for gathering ULS data for the farm, which, if you think about it, is pretty cool in itself. If you have a ten-server farm, you have one location to check for any errors. But this database does so much more. Not only does it aggregate the ULS data, it also contains the server event log information, and the Health Analyzer in Central Admin uses this database to warn you of problems. In SharePoint 2010 the default name for this database is WSS_Logging; in SharePoint 2013 the default is SharePoint_Logging. Other than that, the database and the information in it are pretty much identical, although in SharePoint 2013 a lot of the usage reports are now part of the search databases.
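
If you want to confirm which database your farm is actually using for this (and how big it has gotten), something like the sketch below should do it from the SharePoint Management Shell. It assumes a single usage application, and the UsageDatabase and DiskSizeRequired property names are from memory, so adjust them if your farm exposes things differently.

```powershell
# Load the SharePoint snap-in if you are not already in the Management Shell.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Find the logging/usage database behind the usage application
# (assumes one usage application; property names are from memory).
$usageDb = (Get-SPUsageApplication).UsageDatabase
$usageDb | Select-Object Name, DiskSizeRequired
```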

 

How does this database get its data?

This database gets its data from SharePoint timer jobs that run on all the servers in the farm. The data for each type of item is “partitioned” into tables for the day in question, and these partition tables are generated based on the log providers that have been set up. You can control what data gets into this database, and on what schedule it gets added, from Central Admin -> Monitoring -> Configure Usage and Health Data Collection. From here collection can be turned on and off and you can control which items are pulled into the tables.
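
The same settings can be read and changed from PowerShell. The sketch below assumes the SharePoint Management Shell; the Set-SPUsageService parameter names are how I remember them and the log location is just an example path, so verify both before running it.

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# The individual checkboxes on the page map to usage definitions.
Get-SPUsageDefinition | Sort-Object Name | Format-Table Name, Enabled -AutoSize

# The farm-wide collection switch and the usage log file location
# ("D:\SPUsageLogs" is a placeholder path).
Set-SPUsageService -LoggingEnabled 1 -UsageLogLocation "D:\SPUsageLogs"
```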

[Screenshot: the event types available on the Configure Usage and Health Data Collection page]

As you can see, there are plenty of options that you can select, and they are all selected by default. Further down on the screen there is an option that will allow you to configure the schedule of the data collection. The link is called “Log Collection Schedule”. If you click on this link you will be taken to the two jobs that are used for logging (a quick PowerShell sketch for finding and running them on demand follows the list). They are:

  • Microsoft SharePoint Foundation Usage Data Import – Imports the data into the logging database, defaults to running every 30 minutes.
  • Microsoft SharePoint Foundation Usage Data Processing – Handles the processing to move the data into the correct buckets, runs daily.
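
If you ever want to kick these jobs off on demand instead of waiting for their schedules, something like the sketch below works from the SharePoint Management Shell. Filtering on the display names is just one convenient way to find them; adjust the wildcards if your farm names the jobs differently.

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Find the usage import and processing jobs by display name.
$usageJobs = Get-SPTimerJob | Where-Object {
    $_.DisplayName -like "*Usage Data Import*" -or
    $_.DisplayName -like "*Usage Data Processing*"
}

# See when they last ran, then queue them to run now.
$usageJobs | Select-Object DisplayName, LastRunTime
$usageJobs | ForEach-Object { Start-SPTimerJob -Identity $_ }
```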

For each of the events that you select to log, you will get corresponding tables. Included below is a snapshot of the tables created for ULS logging.

[Screenshot: the daily partition tables created for ULS logging in the logging database]

These tables continue through 31, which means you could have data for the last 31 days. The processing job runs daily and makes sure that the tracked events end up in the correct table for the day.
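
Because everything lands in one database, you can check the ULS data for the whole farm with a single query instead of trawling the log files on every server. A minimal sketch using Invoke-Sqlcmd (from the SQL Server PowerShell tools) is below; the dbo.ULSTraceLog view that unions the daily partitions, the LogTime column, and the “SQL01” server name are from memory or placeholders, so confirm the object names in your own logging database first.

```powershell
# Pull the 50 most recent ULS entries for the whole farm from one place.
# View/column names and the server name are assumptions -- adjust to match
# what you see in your own logging database.
$query = "SELECT TOP 50 * FROM dbo.ULSTraceLog ORDER BY LogTime DESC;"

Invoke-Sqlcmd -ServerInstance "SQL01" -Database "WSS_Logging" -Query $query
```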

 

What can I do with this Database?

Microsoft’s TechNet article on this database has an interesting note: “the logging database is the only SharePoint 2013 database for which you can customize reports by directly changing the database” (http://technet.microsoft.com/en-us/library/jj715694(v=office.15).aspx). Most of the time we are told never to touch the SharePoint databases, but this is the one exception to the rule. We are still limited in what we can do with it, but at least we won’t get our hands slapped for looking at it. To change the reports, we can create SQL views if we want a different view of the report data that is already logged, or we can create a custom log provider that will allow us to add new data.
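
As a sketch of the first option, here is roughly what a custom view over already-logged data could look like, pushed in with Invoke-Sqlcmd. The dbo.RequestUsage view and its Duration column are how I remember the request data being exposed, the server name is a placeholder, and vw_SlowRequests is a made-up name for the example, so treat all three as assumptions.

```powershell
# Create a custom reporting view over data the farm is already logging.
# dbo.RequestUsage / Duration and "SQL01" are assumptions -- verify against
# your own logging database before running.
$createView = @"
CREATE VIEW dbo.vw_SlowRequests AS
SELECT *
FROM dbo.RequestUsage
WHERE Duration > 5;
"@

Invoke-Sqlcmd -ServerInstance "SQL01" -Database "WSS_Logging" -Query $createView
```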

 

Now What?

Now for the million-dollar question: how do we control how big this database gets? The simplest option would be to limit the event types that are tracked. This can be done in two ways: by removing an event type altogether, or by going into the Configure Diagnostic Logging screen and changing the logging level. ULS logging can take up a lot of space on the file system, and if you multiply that space by the number of servers in the farm you can see how this can get out of control really quickly. Limiting what is logged will not immediately shrink your database, since the old data is only removed automatically once it passes its retention period.
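
Both of those knobs can also be turned from PowerShell. The sketch below uses the same “Timer Jobs” event type that appears later in this post purely as an example, and the -Enable switch on Set-SPUsageDefinition is how I remember it working, so check Get-Help on your farm before running it.

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# 1) Raise the bar for what gets written to the trace and event logs.
Set-SPLogLevel -TraceSeverity Monitorable -EventSeverity Warning

# 2) Stop collecting an event type into the logging database entirely
#    ("Timer Jobs" is just an example event type).
Set-SPUsageDefinition -Identity "Timer Jobs" -Enable:$false
```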

Another option is to look at which tables in the database are taking up the space and then adjust how long the data is retained. The tables allow us to retain up to 31 days’ worth of data, but the default is 14. To check what your retention levels are, run the following PowerShell cmdlet: Get-SPUsageDefinition.

Running this cmdlet on a default install that has logging enabled will give you the following output.

[Screenshot: Get-SPUsageDefinition output showing the retention period for each event type]
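
As for finding which tables are actually eating the space, a standard SQL DMV query does the trick; here it is wrapped in Invoke-Sqlcmd so it can sit in the same script. The “SQL01” server name and the WSS_Logging database name are placeholders for your own farm.

```powershell
# List the 20 largest tables in the logging database by reserved space.
$query = @"
SELECT t.name AS TableName,
       SUM(p.reserved_page_count) * 8 / 1024 AS ReservedMB
FROM sys.dm_db_partition_stats AS p
JOIN sys.tables AS t ON t.[object_id] = p.[object_id]
GROUP BY t.name
ORDER BY ReservedMB DESC;
"@

Invoke-Sqlcmd -ServerInstance "SQL01" -Database "WSS_Logging" -Query $query |
    Select-Object -First 20 | Format-Table -AutoSize
```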

From PowerShell you can change the retention period for each individual event type by using the Set-SPUsageDefinition cmdlet. This cmdlet lets us set the retention days and whether an event type is enabled or not. So if we wanted to set the retention period for Timer Jobs to one week instead of two, we can run Set-SPUsageDefinition -Identity “Timer Jobs” -DaysRetained 7. After this is done, go to both of the timer jobs mentioned above and run them. This will flush the data that is older than the retention period out of the database, so in this scenario any timer job data older than 7 days will be removed. The whole thing can be scripted end to end, as sketched below.
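
Putting it all together, a rough end-to-end version of that flow looks like the sketch below. It assumes the SharePoint Management Shell and reuses the display-name wildcard trick from earlier to find the two usage jobs.

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Keep only a week of Timer Job data instead of the default 14 days.
Set-SPUsageDefinition -Identity "Timer Jobs" -DaysRetained 7

# Confirm the change took.
Get-SPUsageDefinition -Identity "Timer Jobs"

# Run the import and processing jobs now so data older than 7 days is purged
# right away instead of at the next scheduled run.
Get-SPTimerJob |
    Where-Object { $_.DisplayName -like "*Usage Data*" } |
    ForEach-Object { Start-SPTimerJob -Identity $_ }
```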