Triggers are both a useful tool with a specific niche… and the devil. Thank you Steve Jones for hosting this week’s topic on triggers!
When used appropriately, they are a great tool to have in your repertoire. When used inappropriately, you will probably ruin someone's day.
The most common good uses I have seen for triggers are preventing things that shouldn't happen and auditing data.
You can use triggers to enforce rules in your database, such as not allowing people to log onto production without a special account during off hours. Or you can use them to stop deletes that would remove more than X% of a table or more than X rows, which can keep people from accidentally wiping out a table's data. These are all great uses that can prevent problems.
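Here is a rough sketch of what such a safety trigger might look like (the table name and the 1,000-row threshold are made up for illustration):

CREATE TRIGGER trg_TableTest_LimitDeletes
ON dbo.TableTest
AFTER DELETE
AS
BEGIN
    -- Roll back any single statement that deletes more than 1,000 rows.
    IF (SELECT COUNT(*) FROM DELETED) > 1000
    BEGIN
        ROLLBACK;
        THROW 50000, 'Delete exceeds the allowed row limit.', 1;
    END
END;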
This mixes well with CONTEXT_INFO() and temp tables. You can hard-code exceptions into the trigger that are unknown to other users: when the trigger detects your CONTEXT_INFO() value or a specifically named temp table, it allows you to bypass the rule. You can find a link to a detailed example at the bottom of this article. So if you build the creation of a temp table or the use of CONTEXT_INFO() into your deliberate scripts, this can keep accidents to a minimum while only making tasks take a little more time.
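To sketch the bypass idea (the 0x1234 marker and the temp table name are conventions I made up for illustration), the trigger checks for the session's secret before enforcing its rule, and your deliberate script sets that secret first:

-- Inside the trigger, before enforcing the rule: skip enforcement
-- if this session set the agreed-upon CONTEXT_INFO marker
-- or created the agreed-upon temp table.
IF SUBSTRING(CONTEXT_INFO(), 1, 2) = 0x1234
   OR OBJECT_ID('tempdb..#BypassDeleteLimit') IS NOT NULL
    RETURN;

-- In your deliberate script, before the large delete:
SET CONTEXT_INFO 0x1234;
DELETE FROM dbo.TableTest WHERE ColumnValue IS NULL;  -- now allowed
SET CONTEXT_INFO 0x0;  -- clear the marker when done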
Using triggers to help manage accounts scales easily; a single logon trigger can prevent users from logging in after 9pm on weekends, for example. Enforcing a ruleset on each individual table, such as preventing more than X rows from being deleted, is a less scalable model and carries a development burden, however.
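For the login example, a server-level logon trigger enforces the rule in one place. This is only a sketch with a placeholder exempt account; be careful, because a buggy logon trigger can lock everyone out, so test it and keep a DAC connection handy:

CREATE TRIGGER trg_BlockWeekendNightLogins
ON ALL SERVER
FOR LOGON
AS
BEGIN
    IF DATENAME(WEEKDAY, GETDATE()) IN ('Saturday', 'Sunday')
       AND DATEPART(HOUR, GETDATE()) >= 21               -- 9pm to midnight
       AND ORIGINAL_LOGIN() NOT IN ('DOMAIN\DBAOnCall')  -- placeholder exempt account
    BEGIN
        ROLLBACK;  -- denies the connection
    END
END;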
Triggers are a great method for auditing data (though today I would first investigate whether the combination of Temporal Tables and SQL Audit is a better solution; see which method is best for you!). Change Data Capture (CDC), Change Tracking (CT), and Temporal Tables cannot tell you who changed what, because they run under a service account and will always show the service account as having made the change.
A trigger fires as part of the user's transaction and is able to capture that piece of data. Extended Events can also help capture events in your database, but triggers serve a specific niche in this instance. This again comes at a performance, developmental, and administrative cost. That said, I have seen plenty of well-designed instances with trigger-based auditing that performed extremely efficiently; it all comes down to design and proper, cautious use cases.
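As a sketch of that auditing pattern (all names here are hypothetical), the trigger writes who made the change and when into an audit table as part of the user's own transaction:

CREATE TABLE dbo.TableTest_Audit
(
    AuditID     INT IDENTITY(1,1) PRIMARY KEY,
    ID          INT,
    ColumnValue VARCHAR(50),
    ChangedBy   SYSNAME   DEFAULT SUSER_SNAME(),     -- the actual login, not a service account
    ChangedAt   DATETIME2 DEFAULT SYSUTCDATETIME()
);
GO
CREATE TRIGGER trg_TableTest_Audit
ON dbo.TableTest
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.TableTest_Audit (ID, ColumnValue)
    SELECT i.ID, i.ColumnValue
    FROM INSERTED AS i;
END;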
However, I recently saw a post about configuring SQL Audit and performed a little testing of my own. I created a database and a table, then ran a few changes with auditing enabled. It captured what statement I ran, when I ran it, and the account I ran it under. Combined with Temporal Tables, this would mimic the capabilities of triggers, albeit being a little more difficult to piece together when you need to analyze the data. (You'll need to do detective work with the datestamps, matching changes between time frames.)
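If you want to try it yourself, here is roughly what my test setup looked like (the names, database, and file path are mine; adjust to taste):

USE master;
CREATE SERVER AUDIT Audit_TableTest
    TO FILE (FILEPATH = 'C:\SQLAudit\');             -- folder must already exist
ALTER SERVER AUDIT Audit_TableTest WITH (STATE = ON);
GO
USE TestDB;                                          -- my throwaway test database
CREATE DATABASE AUDIT SPECIFICATION AuditSpec_TableTest
    FOR SERVER AUDIT Audit_TableTest
    ADD (INSERT, UPDATE, DELETE ON OBJECT::dbo.TableTest BY public)
    WITH (STATE = ON);
GO
-- Read back what was captured: the statement, when it ran, and who ran it.
SELECT event_time, server_principal_name, statement
FROM sys.fn_get_audit_file('C:\SQLAudit\*.sqlaudit', DEFAULT, DEFAULT);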
This is an incredibly easy administrative task to configure, and it will likely have a performance gain over triggers. I still need to go back and research the impact of SQL Audit on my instances, but I anticipate it will be less than that of triggers.
Here is the SQL Audit post by Nic on Stack Exchange if you would like to test it out too.
I’ve already mentioned the administrative, developmental, and performance costs of adding triggers to your database. There can be hidden costs too if you don’t use the proper tools for the task. I’ve seen triggers designed to carry business logic, performing actions in the database when the application code should have handled the logic and executed the appropriate procedures instead. Perhaps the trigger is there to enforce constraints on a table, like keeping a column in sync with another table. These are problems; this is not how triggers should be used, because other features do these jobs better. (Exceptions do apply, as you’ll see at the end!)
Table constraints, keys, and data types aren’t only for performance and data integrity; they serve as crucial hints and clues to understanding a database. Using triggers for those tasks hides important information, which makes it very difficult to create Entity Relationship Diagrams (ERDs). Likewise, splitting the logic between the database and the application is a bad idea. You want to keep things consistent between both environments; otherwise, no one will know whether they need to modify the application or the database.
There are also risks to using triggers. Two incidents I’ve had the pleasure of troubleshooting really stand out in my mind.
The first tribulation was a legacy process I encountered in my first week on site at a new job; it would accrue ~5,000 deadlocks a day, and the process would run for hours. Data integrity was out the window and luck was king. (The hope being that if you run something enough times, the odds of the data still being inaccurate shrink.) We had to run the same records twice a day for seven days in a row, making the load times unnecessarily long but mandatory due to the deadlocking. These issues compounded whenever large amounts of records unexpectedly needed to flow through the system.
This was primarily because the application was built to run a stored procedure row by row: if the record already existed, it would update it; if it didn’t, it would insert it. The application would execute that procedure upwards of 20 times in parallel for 5,000-10,000 records. (There were some days we needed to push 50,000 or more at a single time, which forced us to manually alter and chop up the process into small batches of data since it would otherwise time out.) A trigger on the table would stamp the inserted or updated rows with when/who changed them and insert the record into an audit table of the changes made.
The solution was simple: adjust the procedure the application was calling in parallel to insert into a heap (an index-less staging table) instead of the production table. We then call a stored procedure a single time to update/insert from the staging table into the primary table. This meant we went from executing the trigger ~5,000-10,000 times a run to once. The procedure also no longer needed to fight over locks during the load, as it was now a set-based transaction instead of the row-by-agonizing-row (RBAR) method we had been using when we loaded the production table directly with the trigger on it.
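Here is a simplified sketch of that fix (the names are made up; the real procedures had more columns and error handling):

-- The app's parallel calls now insert into an index-less staging heap,
-- which has no trigger and minimal lock contention.
CREATE TABLE dbo.Staging_TableTest
(
    ID          INT,
    ColumnValue VARCHAR(50)
);
GO
-- One set-based procedure then moves the whole batch to production,
-- so the trigger on the production table fires once per statement, not per row.
CREATE PROCEDURE dbo.Load_TableTest
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    UPDATE t
    SET    t.ColumnValue = s.ColumnValue
    FROM   dbo.TableTest AS t
    INNER JOIN dbo.Staging_TableTest AS s ON s.ID = t.ID;

    INSERT INTO dbo.TableTest (ID, ColumnValue)
    SELECT s.ID, s.ColumnValue
    FROM   dbo.Staging_TableTest AS s
    WHERE  NOT EXISTS (SELECT 1 FROM dbo.TableTest AS t WHERE t.ID = s.ID);

    TRUNCATE TABLE dbo.Staging_TableTest;
    COMMIT;
END;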
The second trigger incident took a mindset adjustment. Essentially, we had a table and needed to make sure that when records flowed into it, all records with that value were updated based on the values in other tables in other databases.
The cross-database piece was the problem, and it was the reason I was unable to use constraints. (You cannot have a table in Database A with a constraint that references a table in Database B; the tables need to reside in the same database.) So rather than trying to accomplish “when things flow into a table, all records with that value are updated based on the values in the other tables in other databases,” I changed the logic to “when things flow into a table, the records that flow in are updated based on the values in the other tables in other databases.” The difference is that we are looking at a small subset of data rather than the entire table. This is important because we cannot guarantee that all past records in the table were updated correctly under this new trigger mindset; we can only ensure that all future transactions are correct.
Here is a fake code sample to help illustrate this mindset adjustment:
(Note: it does not illustrate the foreign key constraint reinforced with triggers; it is meant to show how the table is updated by the trigger.)
-- Deadlocking trigger
UPDATE dbo.TableTest
SET ColumnValue = NULL
FROM dbo.TableTest
WHERE ColumnValue IS NOT NULL

-- Re-worked trigger
UPDATE dbo.TableTest
SET ColumnValue = NULL
FROM INSERTED i
INNER JOIN dbo.TableTest
    ON TableTest.ID = i.ID
WHERE TableTest.ColumnValue IS NOT NULL
The inner join to the INSERTED pseudo-table narrows the records we need to update down to just the rows touched by the triggering statement. Before, SQL Server was updating the entire table every time the trigger fired, rather than only the rows that had actually changed.
So to put our adjusted-mindset trigger in place, we can set the database to single-user mode so no one else can make changes while we are working on it. Then we update the table to correct the existing data, and then we update the trigger so all records going forward are handled without escalating to a table lock. Of course, once finished, we set the database back to multi-user mode.
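In command form, the deployment looked roughly like this (the database name is a placeholder):

ALTER DATABASE TestDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;  -- no one else can make changes

-- One-time correction of the existing data:
UPDATE dbo.TableTest SET ColumnValue = NULL WHERE ColumnValue IS NOT NULL;

-- Deploy the re-worked trigger from the sample above (ALTER TRIGGER ...),
-- then let everyone back in:
ALTER DATABASE TestDB SET MULTI_USER;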
To wrap things up, here are a few questions I’ve seen and answered on triggers!
Is SELECT * ok in a Trigger? Or am I asking for trouble?
SQL Server : How to disable trigger for an update only for your current session?
Creating a trigger after update exclude a column?
How to find who changed a table value in SQL Server 2014?