Have you ever wished you could see into the past? What is the most natural thing in the world with photo albums and television is almost impossible with Navision or Business Central.
And yet it is so necessary, and so helpful! Because: programming is debugging.
In general, you only have these resources:
– Change log: Depending on how frequently your tables change, logging via the change log will quickly fill up your database. Five-year-old information that is no longer relevant competes with minute-by-minute changes, e.g. in sales lines and general journals. On top of that come server and database load, combined with table locking.
– Last modified: The master data of Navision or Business Central usually contains the field "Modified on" (supplemented by most companies with "Modified by" and "Created on/by"). This saves a lot of space, but it only tells you when the record was last changed, not what was changed.
– Guessing: "If it says this now, it could have said that before."
– Data backups: In a well-run Navision or Business Central development environment, for example, you have an automatic development version for every day of the last 30 days, which you can consult at any time. But even this only reveals the state of the data at the time of the backup.
– Debugger: Unfortunately, this only lets you examine the current state of Business Central & Navision, and only with an immense amount of time. Even if you have a very good idea of where the error could or should occur, you have to watch the same program code at work over and over again, in endlessly long debugger sessions around the critical code. This is about as productive and exciting as watching paint dry.
The solution: file logging. One generic, always identical, easy-to-call function writes whatever information you specify to a file, per day or session.
Older versions are automatically deleted, so that even after years only fresh information from the latest program versions is available.
Since processes never compete for the same file, this solution is very fast and lock-free. And the database is not cluttered with information that is 99% unnecessary.
You decide for yourself when to write out which information, e.g. applied filters, found records, the selected sorting:
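Since the original screenshot is not reproduced here, a hypothetical C/AL sketch of such a call site follows. The codeunit name "Log", its Write procedure and its parameters (source, context, message, payload) are assumptions, modeled on the column layout of the log lines:

```al
// Hypothetical: Codeunit "Log" with a Write(Source, Context, Message, Payload) procedure.
// ProdOrderRtngLine is a "Prod. Order Routing Line" record variable.
ProdOrderRtngLine.SETRANGE(Status, ProdOrderRtngLine.Status::Released);
ProdOrderRtngLine.SETRANGE("Prod. Order No.", '10572');
// Write the applied filters before searching ...
Log.Write('PAGE 50195', '20-FA-10572', 'Find correct column', ProdOrderRtngLine.GETFILTERS);
IF ProdOrderRtngLine.FINDFIRST THEN
  // ... and the found record afterwards.
  Log.Write('PAGE 50195', '20-FA-10572', 'Column found', FORMAT(ProdOrderRtngLine."Operation No."));
```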
You can then comfortably view the compressed results in the log, both during development and when troubleshooting much later, without having to tap through the Business Central debugger with bleeding fingers:
10:11:45.67,PAGE 50195,20-FA-10572,Find correct column,Status=CONST(Released),FA-Nr.=CONST(10572),Arbeitsplanref.-Nr.=CONST(10000),Arbeitsplannr.=CONST(000643),Arbeitsgangnr.=CONST(30)
10:11:45.67,PAGE 50195,20-FA-10572,Column found,201
10:11:45.67,PAGE 50195,20-FA-10572,Old content,0
10:11:45.83,PAGE 50195,20-FA-10572,Convert duration,Orig: 50,57 active FAs 1 Worker 2
10:11:45.86,PAGE 50195,20-FA-10572,Old New Total,0 101,14 101,14
10:11:45.89,PAGE 50195,20-FA-10572,Time updated,BDE:50 minutes 34 seconds/FA:101,14
As the "Log" codeunit itself takes care of
– creating a new log file,
– deleting outdated log files (retention freely adjustable),
– extending & formatting the log file,
the call (see the first screenshot) is correspondingly simple.
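A minimal C/AL sketch of what such a codeunit might look like. This is not the author's actual implementation; the procedure names, the log path, the column format and the retention handling are all assumptions:

```al
PROCEDURE Write(Source : Text;Context : Text;Msg : Text;Payload : Text);
VAR
  LogFile : File;
  FileName : Text;
BEGIN
  // One file per day; assumed local path - adjust to your environment.
  FileName := 'C:\Temp\Log\' + FORMAT(TODAY,0,'<Year4><Month,2><Day,2>') + '.log';
  LogFile.TEXTMODE(TRUE);
  LogFile.WRITEMODE(TRUE);
  IF NOT LogFile.OPEN(FileName) THEN
    LogFile.CREATE(FileName);
  LogFile.SEEK(LogFile.LEN);  // append to the end of today's file
  LogFile.WRITE(FORMAT(TIME) + ',' + Source + ',' + Context + ',' + Msg + ',' + Payload);
  LogFile.CLOSE;
END;

PROCEDURE DeleteOutdated(KeepDays : Integer);
VAR
  FileRec : Record File;  // virtual table listing files in a directory
BEGIN
  // Remove log files older than the freely adjustable retention period.
  FileRec.SETRANGE(Path,'C:\Temp\Log');
  FileRec.SETRANGE("Is a file",TRUE);
  FileRec.SETFILTER(Date,'<%1',TODAY - KeepDays);
  IF FileRec.FINDSET THEN
    REPEAT
      ERASE(FileRec.Path + '\' + FileRec.Name);
    UNTIL FileRec.NEXT = 0;
END;
```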
Since the log file is always stored locally (at least that is my recommendation), preferably buffered on an SSD or at least in a RAM cache, logging does not delay the regular program flow. Typically you can expect at least 15 log lines per millisecond. Finding a single incorrectly set filter or key this way usually justifies several thousand log lines (from a runtime perspective).
Because the log function is so fast and causes no other side effects in the system or on the database server, you can monitor inputs, file interfaces (what comes in? what goes out?), unusual calculations (does the expected result always come out?), etc. long-term without any problems.
And you notice much more quickly in the log if an expected piece of program code is not run at all, or runs too often, or in a suboptimal order. Or if sequences jump unexpectedly, e.g. because key fields have been modified. Unusually long runtimes (any FIND taking more than 7 ms is too long!) also expose incorrectly set filters, keys or query strategies at a glance. Endlessly long repetitions reveal pointless iterations, purely visually!
In this way, even complex issues can be quickly broken down into logical sub-blocks with comprehensible entry and exit states.
Files that are not needed (experience shows that more than 99.9%) disappear automatically and without residue after one week.
Did you know that you can force any file with text content into the text preview in Windows File Explorer? Log files, for example, are not displayed in the preview by default.
This requires changes in the registry. The first change enables Windows to treat any file with text content as a text file; it does not yet change any display behavior!
EditFlags is already there and will not be changed. The other values:
[HKEY_CLASSES_ROOT\*]
@="Textfile"
"Content Type"="text/plain"
"PerceivedType"="text"
"PersistentHandler"="{5e941d80-bf96-11cd-b579-08002b30bfeb}"
And, using *.log files as an example, the registry entries required for each file extension that should be displayed as text:
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\.log]
@="txtfile"
"PerceivedType"="text"
"Content Type"="text/plain"

[HKEY_CLASSES_ROOT\.log\PersistentHandler]
@="{5e941d80-bf96-11cd-b579-08002b30bfeb}"