I recently configured AC_Datalog in my home heating management application, which will run on a Raspberry Pi 3.
My idea is to store analog values every 5 min and use event logging for Booleans...
This datalog generates a lot of red messages in the target's message log, saying that the buffer is full at each data log.
What can I do to avoid this?
The CSV file parameter has a standard buffer size of 2000 bytes, and even triggering the write at 50% rather than 75% does not help.
Thanks for any ideas to solve this.
I'm sorry, I can't seem to recreate this, or haven't yet. I am using an RPi 3 B+ with 1 GB RAM and a 32 GB Class 10 SDHC card that has 16 GB free. I'm using the standard buffer size of 2000 bytes, with the write triggered at 75%. I do have I2C and SPI enabled, plus the latest CODESYS package for the RPi, but only version 10 of Raspbian Buster.
How many analog variables and of what type are you recording? You might need to increase the buffer size.
Right now I'm logging 10 real variables and 10 boolean variables.
Last edit: Morberis 2020-10-06
Hello,
Thanks for your feedback.
My first version of the datalog config logged 4 REALs and 2 BOOLs, but I discovered that the HVAC library I use for pump and valve management has its own datalog configuration that I cannot deactivate, so the actual number is quite a bit higher.
In the second version, I configured two datalog channels, one event-based and one cyclic.
I created the whole I/O list of my future app from an Excel sheet and put all the REAL values in a cyclic 5 min channel (triggered every 15 s for now, for testing purposes). I have 43 REAL values logged.
I put all the Booleans and HVAC stuff in the event channel (so they are recorded at startup and then on change).
The recording itself seems to work properly for the cyclic data channel, but the event channel was recorded only at startup.
I mean that the file date and size on the RPi did not change (this could be because there were not enough changes to reach the buffer limit that triggers a write to disk).
Unfortunately, the runtime was stopped after half an hour due to an exception, so the data (if recorded in the buffer) were lost...
I think the error messages were already present with the first version, but there were fewer of them. It seems this message comes multiple times with roughly the same timestamp.
Do you know how to export the error journal to a text file?
I am running on an RPi 3+ with an SD card only. I will check the exact Raspbian distribution, but it was installed at the beginning of this year.
More details by mail this evening...
Hello,
here is the archive of my testing project combining investigations on datalog / trends / traces.
I attached 1 file to this message:
Gestion chauffage H3 V1.0.projectarchive (128 MB): https://we.tl/t-JRoG39HdQC
I also copied the log file.
Thanks for your help.
Your buffer needs to be bigger. I just made it 20000 instead of 2000 for both and it worked fine. So you'll need to play around with that.
Good news, sometimes the simplest ideas work best...
So the preset size of the buffer really is too small...
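A back-of-envelope estimate shows why the default is so tight: with 43 REAL values written at once per cycle, a single cyclic snapshot can nearly fill a 2000-byte buffer on its own. The CSV line layout below is an assumption for illustration, not AC_Datalog's actual format:

```python
# Rough sizing estimate (the line format is an assumption, not the
# actual AC_Datalog CSV layout).
n_reals = 43
sample_line = "2020-10-11 08:00:05.123;1000123;21.500000\n"  # timestamp;tagID;value
bytes_per_entry = len(sample_line)   # 42 bytes per logged value
burst = n_reals * bytes_per_entry    # one cyclic snapshot: 1806 bytes

buffer_size = 2000
fill_ratio = burst / buffer_size     # one snapshot is ~90% of the buffer
print(f"{burst} bytes per snapshot, {fill_ratio:.0%} of a {buffer_size}-byte buffer")
```

Under these assumptions, the 75% trigger would fire on every cycle and any delay in flushing overflows the buffer, while a 20000-byte buffer leaves an order of magnitude more headroom.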
I will try this tomorrow morning!
Thanks a lot...
Now I have the RPi and HVAC licenses working on a dongle, so I can test without my runtime stopping after 30 min...
I will check the options and macros, thanks for the pointers.
Hello Morberis,
After getting the datalog running without errors, the next step was to work out how to create a flat CSV (one tag per column with a common timestamp sampling) so that I can use a nice and easy free tool (DatPlot) to create plots.
After some searching around Python, it was finally easier than I thought.
Basically, the pandas library provides very powerful functions for resampling time series.
If you are interested, I can share such a Python script.
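For reference, a minimal sketch of such a script, assuming the datalog export has one row per sample with timestamp, tag and value columns (the column names, separator and inline sample data here are made up for illustration):

```python
import io
import pandas as pd

# Hypothetical datalog-style export: one row per sample (the column
# names and separator are assumptions, not AC_Datalog's real layout).
raw = io.StringIO(
    "timestamp;tag;value\n"
    "2020-10-11 08:00:05;T_outside;12.3\n"
    "2020-10-11 08:00:05;T_boiler;55.1\n"
    "2020-10-11 08:05:02;T_outside;12.1\n"
    "2020-10-11 08:05:02;T_boiler;54.8\n"
)
df = pd.read_csv(raw, sep=";", parse_dates=["timestamp"])

# Pivot to a flat layout: one column per tag, indexed by timestamp.
flat = df.pivot_table(index="timestamp", columns="tag", values="value")

# Resample onto a common 5-minute grid, averaging samples in each bin
# and forward-filling tags that produced no new sample.
flat = flat.resample("5min").mean().ffill()
flat.to_csv("flat.csv", sep=";")  # ready for plotting in DatPlot
```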
The more difficult part was on the CODESYS side again. For a reason I have not yet found, the hashfile containing the tag list with the tag IDs and formats stopped being updated by the datalog configuration.
I tried a lot of different changes in the datalog definition to force this file to be updated.
The crazy thing is that the log file indicates the file was created successfully, but looking in the RPi datalog directory, the old hashfile stayed unchanged.
Even when I finally deactivated the hashfile option, the CSV file never included the tag properties again.
The datalog config txt file in the config subdirectory follows the configuration perfectly, and the logging itself works, but it is difficult to find the IDs when the hashfile is not updated...
After so many unsuccessful trials, I finally tried starting from an old backup and could get a hashfile again, but when I updated my CODESYS program and datalog code with the code from the last version, I again got no creation of the datalog.
Any idea?
Last edit: pruwetbe 2020-10-11
Hello Morberis,
The only solution I found to this strange problem was to remove all the datalog definitions from my CODESYS project completely and then recreate them from scratch; now it is working normally.
From now on, each time I change something in my software, I will check that the hashfiles are properly updated before going any further.
Thanks for your help on these subjects. Now I can go back to my main subject: continuing to develop the app with simulation, and verifying that everything works properly using traces, trends and the datalog.
Hello all,
After multiple trials with AC_Datalog, I came to the conclusion that this solution is not stable enough to be used in my case.
It causes too many runtime errors, without sufficient explanation or tools for debugging.
So I decided to do the data logging with external tools.
My choice was an InfluxDB time-series database, and I created a small Node-RED flow that subscribes to my CODESYS global data over OPC UA and records the changes in InfluxDB.
I use Grafana to plot the curves.
The more complicated part was working out how to create automatic subscriptions from a browse of one global variable block.
But working this way, when I add a new global variable to log, I just need to trigger a new OPC UA browse.
InfluxDB automatically creates the field when new data is added.
That is good work, @pruwetbe; I'm sorry that AC_Datalog didn't work out. If it's OK, I'll refer back to this thread for anyone else trying to do logging on the Pi. Thanks for your write-up; I might emulate you if I end up having similar problems with my personal devices.
Did you use the Grafana visualization in CODESYS with the web browser element, or was it for browsing remotely?
One thing I learned about recently that may have been at play here, and can be an issue for other things, is the monitoring interval of the device. It's in the properties tab of the controller that you see when you right-click on it in the device tree. The CODESYS Help page doesn't say much about it, and I can't give a good explanation. Is it possible for someone from CODESYS to explain what it does, or perhaps why pruwetbe ran into the issues he outlined in his post?
Personally, for most of my deployments I don't have the option to use anything but the tools CODESYS provides, so as I start deploying AC_Datalog solutions to more clients, it would be helpful to know what to look out for.
Up to now I have used Grafana from a browser on my computer. This is mainly for long-term recording, but it also works with the very latest data, and it is very flexible about the graph definition, with a lot of nice tools for filtering, sampling and even filling in missing data.
I am still working on the OPC UA subscription with deadband, which is not working for now. I am not sure it is really supported by the RPi CODESYS OPC UA server.
I have to test it with an OPC UA client other than Node-RED.
But I have at least my basic requirements: Booleans and setpoints recorded on change, measurements recorded cyclically...
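As a stopgap while the server-side deadband is unresolved, the same effect can be approximated in the logging client; a minimal sketch (the class and threshold are illustrative, not part of any library):

```python
class DeadbandFilter:
    """Forward a sample only when it moves at least `deadband` away
    from the last forwarded value (an absolute deadband, as a
    client-side stand-in for an OPC UA DataChangeFilter)."""

    def __init__(self, deadband: float):
        self.deadband = deadband
        self.last = None  # last value actually forwarded

    def accept(self, value: float) -> bool:
        if self.last is None or abs(value - self.last) >= self.deadband:
            self.last = value
            return True
        return False

# Example: with a 0.5 deadband, small jitter is dropped.
f = DeadbandFilter(0.5)
samples = [20.0, 20.2, 20.6, 20.7, 21.2]
forwarded = [v for v in samples if f.accept(v)]  # [20.0, 20.6, 21.2]
```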
A next step could also be to run Node-RED, InfluxDB and Grafana on a separate Pi (this is the advantage of OPC UA).
What is also interesting for me is the retention policy of InfluxDB, which can automatically compress old data.
This way I can record every 10 s over OPC UA but resample to one point every 5 min for data older than one month...
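In InfluxDB 1.x, that tiered scheme can be expressed as a retention policy plus a continuous query; a sketch, where the database name, policy names and durations are made-up examples:

```sql
-- Keep raw 10 s samples for 30 days only (names and durations are illustrative).
CREATE RETENTION POLICY "raw_30d" ON "heating" DURATION 30d REPLICATION 1 DEFAULT

-- Keep downsampled data for a year.
CREATE RETENTION POLICY "long_term" ON "heating" DURATION 52w REPLICATION 1

-- Continuously average raw data into 5-minute points in the long-term policy.
CREATE CONTINUOUS QUERY "downsample_5m" ON "heating"
BEGIN
  SELECT mean(*) INTO "heating"."long_term".:MEASUREMENT
  FROM /.*/ GROUP BY time(5m), *
END
```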
I hope that you will have more luck than me with AC_Datalog...
A good approach is probably to configure it at the end of your project, when everything else is finalized and you no longer need to download for testing or debugging...