This question has been bothering me for a long time, and I'd be happy to get some explanation from other users or from a CDS Visualization guru.
Suppose we have a "sub-visualization" (to be used only inside a frame element of another visualization or sub-visualization), for example a small P&ID symbol of a valve. I wonder about the performance aspect of the following two alternatives:
A)
Using VAR_IN_OUT or reference as input variable for the visualization
Binding member variables to the properties of visualization elements
```
VAR_IN_OUT
    VALVE : FB_VALVE;
END_VAR

// Element 1 "Toggle Color" is set to VALVE.xOUT_Opened
// Element 2 "Input configuration / Toggle variable" is set to VALVE.xVISU_ButtonOpen
// Element 3 "Text variable" is set to VALVE.strID
```
B)
Using interfaces as input variable for the visualization
Binding interface properties and methods to the properties of visualization elements
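To make B concrete, here is a minimal sketch of what I mean. The interface name, its properties and the visu input are invented for the example, and it is written in pseudo-declaration form, since interfaces and properties are created as separate objects in the IDE:

```
// Hypothetical interface, implemented by FB_VALVE (names are illustrative only)
INTERFACE ITF_VALVE
    PROPERTY Opened     : BOOL    // GET: current state, drives "Toggle Color"
    PROPERTY ButtonOpen : BOOL    // GET/SET: toggle variable written by the visu
    PROPERTY ID         : STRING  // GET: label text for the symbol
END_INTERFACE

// Interface input of the sub-visualization:
VAR_INPUT
    itfValve : ITF_VALVE;
END_VAR
// Element 1 "Toggle Color" is set to itfValve.Opened
// Element 2 "Input configuration / Toggle variable" is set to itfValve.ButtonOpen
// Element 3 "Text variable" is set to itfValve.ID
```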
Question briefly:
What is the effect on the performance of the visualization platform of the property/method based approach "B" versus the variable reference approach "A"?
Suppose a visualization containing a large number (100-200) of such elements, each of them with 10-20 bound properties, used by 2-3 web clients on the LAN, and occasionally an additional web client through a 4G VPN.
Illustration:
This is from an application I built 5 years ago on SlowMachine; at that time it was something like CDS 3.5.4(?). It was quite tricky to achieve this, but eventually it worked fine in many installations. Since then there have been many improvements and changes in CDS, so it's probably time to reconsider some of the solutions I had to use back then.
Any level of answer / recommendation / suggestion is welcome; I'd be happy to get a better understanding of the internals of the entire visualization platform. The cardinal answers I can imagine:
1) There is no major difference from the visualization platform's aspect; the compiler will generate the same, optimal code for both alternatives. The overall performance (memory vs. processor load) will depend only on the implementation of the properties, methods and / or function blocks.
-or-
2) Avoid using properties and method calls in visualizations wherever possible. They create a big overhead for the visualization platform compared to alternative A, beyond the obvious additional processor demand of executing the get/set accessors.
-or-
3) Provide interfaces and use interface methods / properties as much as possible. The memory and/or processor demand of the compiled visualization application might improve. (Or if not, it's definitely not a concern on a modern platform running Linux or Windows with gigabytes of memory and a reasonable CPU.)
-or-
4) There is no general answer to that. Maybe it depends on the datatype - so it's different with long strings and with numeric data. Maybe it depends on the structure and amount of the data. Maybe, if there is a big read-only table to display, it's worth generating an array of structures containing all the information needed to display it...
So just use a clean and comfortable object-oriented approach by default, and if in some case it causes a performance issue, you can still do a workaround for that case. There are dirty tricks, like properties returning references, or global variables for common read-only access shared among all clients...
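As an illustration of the "property returning a reference" trick: the FB, struct and property names below are invented, and the GET accessor is shown inline although it is a separate object in the IDE.

```
// Property of a hypothetical FB_VALVE_GROUP that hands out a reference
// instead of copying the whole structure on every read:
PROPERTY ValveData : REFERENCE TO ST_VALVE_DATA
// GET accessor body:
ValveData REF= _astValveData[_iSelected];

// ...or a global variable for common read-only access shared among all clients:
VAR_GLOBAL
    g_stSelectedValve : ST_VALVE_DATA;   // written once per change, only read by the visu
END_VAR
```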
-or-
5) Stupid question
Even after decades of CODESYS programming I'm missing some basic information on the current implementation of the Visualization platform - and I might not be alone with this.
So, what is your answer? Or just an opinion?
Any contribution is well received. Later I'll share my own clues, experiences and guesses as well.
Quoting the documentation at https://content.helpme-codesys.com/en/CODESYS%20Visualization/_visu_dlg_project_settings.html :
You can also configure a visualization element with a property in those properties where you select an IEC variable. Then CODESYS creates **additional code** for the property handling when a visualization is compiled.
So I guess that not using properties, and thus not generating additional code that's executed every time the element is rendered, should be faster than using them.
After all, if you are concerned with performance, an access via VAR_IN_OUT translates under the hood to a pointer dereference (with a constant offset from the base FB memory area determined at compile time), so it is roughly the fastest way you can access data. But maybe microseconds vs. tens of microseconds don't matter much for your application anyway: try both!
A plus for the interface approach is that you can handle FBs with different logic with the same visualization element. You can do that also if you have an inheritance hierarchy and use only the common VAR_INPUT/VAR_OUTPUT of some ancestor, but that means that you can only have a single "type" since multiple inheritance is not supported.
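For example (with made-up names), two FBs with completely different internals can feed the same sub-visualization, as long as both implement the interface:

```
// Two FBs with different logic, both implementing the same hypothetical interface:
FUNCTION_BLOCK FB_MOTOR_VALVE IMPLEMENTS ITF_VALVE
    // motor-driven valve logic, provides the ITF_VALVE properties
END_FUNCTION_BLOCK

FUNCTION_BLOCK FB_SOLENOID_VALVE IMPLEMENTS ITF_VALVE
    // solenoid valve logic, provides the same properties differently
END_FUNCTION_BLOCK

// Either instance can be passed to the same sub-visualization input:
VAR
    fbMotorValve    : FB_MOTOR_VALVE;
    fbSolenoidValve : FB_SOLENOID_VALVE;
    itfForVisu      : ITF_VALVE;
END_VAR

itfForVisu := fbMotorValve;   // or fbSolenoidValve - the visu element doesn't care
```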
"You can also configure a visualization element with a property in those properties where you select an IEC variable. Then CODESYS creates additional code for the property handling when a visualization is compiled."
Funny, but the "activate property handling for all visualizations" option was recently removed (SP17 or 18?)... This might suggest that it doesn't really matter anymore?
"A plus for the interface approach is that you can handle FBs with different logic with the same visualization element. You can do that also if you have an inheritance hierarchy and use only the common VAR_INPUT/VAR_OUTPUT of some ancestor, but that means that you can only have a single "type" since multiple inheritance is not supported."
Yes, this is exactly the reason I'm also tending more and more towards using interfaces... But it would still be nice to know what's going on in the background...
This thread that I was involved in a couple of years ago doesn't directly answer your questions, but perhaps it has some useful information: https://forge.codesys.com/forge/talk/Visualization/thread/bcbc83fe88/
I often use VAR_INPUT with an interface and it works quite well.
Wow, this is a great resource, thanks!
Recommended reading for everyone... To summarize the (subjectively) most relevant parts:
I was not alone in having trouble with SloMachine.
"... referenced visualization was not stateless before SP15. Starting with SP15 the referenced data will be updated in each cycle. Before SP15 the referenced data was only assigned once in the start of the application."
This I remember clearly - that's what made it complicated.
So, in order to realize a page displaying the data of a given function block, I had to add each and every instance of the object to a frame and use "Switch Frame Variable" to select the required instance...
When the system ran out of memory, I had to resort to tricks like regularly copying the data of the selected FB instance into a global variable. Change detection had to be implemented in the FB - and when it set a flag, I just copied the data over...
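From memory, that workaround looked roughly like this (all names are invented; the change flag and the data struct are assumed to be public members of the FB):

```
// GVL_Visu: one global copy that all pages / clients display
VAR_GLOBAL
    stSelectedValve : ST_VALVE_DATA;
END_VAR

// In a visu-rate task: copy only the currently selected instance, and only on change.
// pSelectedValve is set by the page / instance selection logic.
VAR
    pSelectedValve : POINTER TO FB_VALVE;
END_VAR

IF pSelectedValve <> 0 AND_THEN pSelectedValve^.xDataChanged THEN
    GVL_Visu.stSelectedValve := pSelectedValve^.stData;  // FB sets xDataChanged when stData changes
    pSelectedValve^.xDataChanged := FALSE;
END_IF
```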
That thread details very nicely how and when data was copied by the visualization - suggesting that the compiler collected all possible outcomes / values for a reference to a given FB class... and ate a lot of memory.
But for some reason this was not needed with interfaces (and was not possible with interfaces) - and it still worked even with SP12... It suggests that interface access had a very different implementation from the beginning...
I wonder what happens in the background, how Interface access is implemented compared to VAR_IN_OUT and VAR_INPUT references?
Do I feel / think correctly that in recent versions referenced data is treated the same way as interfaces (i.e. maybe not copied)?
When I finished posting the opening post / question of this topic, I had the impression that it wouldn't be easy to get an obvious, clean answer for such a complex value as "performance". It is interesting to see the changes in CODESYS from 2.3 to nowadays. I remember I could barely wait for the first release of 5.3 or 5.4 on the Moeller XV panel - the object-oriented features were very promising.
Back then, and still now, I've been hoping for a development platform that can spare me a lot of time - building reusable, extensible components while maintaining code quality, structure and readability... Maybe now we're getting close to that :)
So, in order to have some clue about the performance, it would be good to know what has happened and changed in the background of the Visualization platform... What were the aspects (maybe some compatibility or performance issues) behind keeping the original stateful (i.e. non-stateless) approach before?
What has changed since, and what was the "cost" of doing so? Or is it still there in the background?
Anyway, I definitely need a better understanding, so here are my questions:
1) What happens in a visu_task cycle? What are the main steps? Is there still "copying" of data?
2) Regarding a visu cycle: is all the data accessed and checked, or just the tags actually used / needed by the active clients?
3) Is there some kind of change detection / tracking on the application data referenced in visualizations?
If there is, how is it implemented in the case of variables, and in the case of properties? Is there a difference?
4) What kind of data is transmitted between the application and the client? Is it structured, tagged data, or is it more like drawing commands?
Is it a continuous, cyclic data exchange, or is it triggered by change detection?
5) Does the client know anything about the structure of the application data it is displaying? Is there any difference, from the client's aspect, between modifying a variable and modifying a property value?
6) Is the WebVisu client different in this respect from the target / remote visualization?
7) What happens when some data is modified on the client side - for example changed text in a textbox, or a "write variable" action? Is it a scan from the visualization application, or an action from the client?
Or does it work like a continuous message exchange back and forth, containing lock requests, responses and changed data on demand?
8) How is locking implemented (if it is)? Is some kind of lock requested by the client before changing its local copy of the data? Very probably the client has to lock on its side to prevent overwriting the data, but does the application know about this? If so, does it have to acknowledge the lock request?
9) How is modified data written back to the application memory? Is it just synchronized to the beginning of the main IO task, with other tasks trying to reach the modified data getting a semaphore until the next cycle?
Or do tasks use a cache-like construct for data access, taking care of per-task introduction of changed data?
... I have plenty of questions, but I guess that's enough for now...
I have some ideas about these topics, but I'm sure I'm not the only one looking forward to getting more proper and precise information... Please feel free to share any documentation / article / topic you know about...
"Or maybe someone from 3S could give some quick directions?" If that's the condition, I'm more than happy to spend a support ticket on this. I intend to summarize the results on a Forge Wiki page once I have achieved a good understanding of the topic, especially if someone is willing to contribute and co-edit.