I’ve been a bit distracted from my scripting work, but I thought it might be fun to post some thoughts on new features I’d like to see in the tool.
The new feature I’d like to see (today) is an improvement to the check model functionality. I once managed a group of modelers (apparently not my strong point), and I still perform model reviews and work on internal process improvements for our modelers. I love the check model functionality, and I’ve tried to write as many custom checks as possible to enforce our standards and best practices. I have two problems with it, though:
- You can’t mark a warning as reviewed.
- The results of a check model aren’t stored anywhere.
I want people to run a check model FREQUENTLY and review the results thoroughly. Unfortunately, for a large model the number of warnings can grow quite large. In our environment, we support a large number of models that originated long before our current standards and best practices. We update portions of them as we do ongoing maintenance, but we rarely have the opportunity to go through an entire model and apply standards just for the sake of applying standards. If we don’t have a reason to touch the code operating on a section of a model, it remains “as-is” until we do. On a large model, that can mean hundreds of warnings that are “expected” and won’t be addressed, along with a few important ones that need to be looked at (and usually a few errors too). The warnings that occur every time, and that we’ve decided for whatever reason are acceptable, make it easier to miss the warnings and errors we actually care about. Usually the modelers end up ignoring those warnings, or turning that specific check off. Neither is a very good option.
Now suppose that when we inherit a model, we run a check, review the results, and either fix the existing warnings or mark them as “reviewed”. The check dialog shows them with a new symbol and perhaps lets you enter a comment such as “table pre-dates current standards”. A reviewed warning stays “reviewed” until the underlying object is modified; then it goes back to being a warning, which can be addressed or marked as reviewed again. Now the modelers can leave the check on and focus on the warnings that apply to the portions of the model they’ve modified.
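Just to make that concrete, here’s a rough sketch of the rule I have in mind. None of this exists in the tool today, and the names are all made up; Python is just a stand-in for whatever the tool would do internally against its own metadata. The idea is simply that a review only “sticks” as long as the flagged object hasn’t been modified since the review was done.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class CheckResult:
    """One warning produced by a model check, plus optional review metadata."""
    object_id: str                        # model object the check flagged (hypothetical ID)
    check_name: str                       # which check produced the warning
    reviewed_at: Optional[datetime] = None
    review_comment: str = ""

    def status(self, object_modified_at: datetime) -> str:
        """'reviewed' only while the object is untouched since the review;
        any later modification turns it back into a plain warning."""
        if self.reviewed_at is not None and self.reviewed_at >= object_modified_at:
            return "reviewed"
        return "warning"

# A warning reviewed in January on a table last changed the previous June...
w = CheckResult("TBL_CUSTOMER", "Table name follows naming standard",
                reviewed_at=datetime(2010, 1, 15),
                review_comment="table pre-dates current standards")
print(w.status(object_modified_at=datetime(2009, 6, 1)))   # -> reviewed
# ...and the same warning after the table is modified in March
print(w.status(object_modified_at=datetime(2010, 3, 1)))   # -> warning again
```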
For this to be useful, the results of checks must be stored in the model. If they’re in the model, they’re in the repository. Now if I put my manager hat back on, I can find out which models have been properly reviewed and which haven’t. I can see whose models are being reviewed, and who is performing the reviews. Useful information at review time. I can also find out which checks are triggered most often, and which are being reviewed and ignored. A check whose warnings are being marked as reviewed 90% of the time may need to be refined to reduce false positives (or we may need more training for the team to ensure they understand what the check is supposed to be doing). Knowledge is power, and with this information we have the opportunity to improve our model checks, and then our models.
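Again, purely as a sketch with invented check names: once the results are sitting in the repository, rolling them up to spot the checks that are almost always marked as reviewed is just a simple aggregation.

```python
from collections import Counter

def review_rates(results):
    """Given (check_name, status) pairs pulled from stored check results,
    return the fraction of each check's hits that were marked as reviewed."""
    triggered = Counter()
    reviewed = Counter()
    for check_name, status in results:
        triggered[check_name] += 1
        if status == "reviewed":
            reviewed[check_name] += 1
    return {name: reviewed[name] / triggered[name] for name in triggered}

# Hypothetical extract from the repository
sample = [
    ("Table name follows naming standard", "reviewed"),
    ("Table name follows naming standard", "reviewed"),
    ("Table name follows naming standard", "reviewed"),
    ("Column has a comment", "warning"),
    ("Column has a comment", "reviewed"),
]
for check, rate in sorted(review_rates(sample).items()):
    flag = "  <-- refine the check, or train the team" if rate >= 0.9 else ""
    print(f"{check}: {rate:.0%} marked as reviewed{flag}")
```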
If you’re reading this (bored?), what new features would you like to see?