Real_To_String Strangeness

2011-01-13 – 2011-01-28
  • jason.the.adams - 2011-01-13

    I have a benign hunk of code that converts the value of a structure: MyStructure.Value — where Value is a Real. Here's the culprit:

    SomeString := REAL_TO_STRING(myStructure.Value);
    

    Within the debugger I see that my value is what I expect it to be—159.155—but the string I receive is '159.154'. Huh. Has anyone else run into this? Seems like something that really shouldn't have any problems. It seems like my last bit is being ignored in some form.

    Any insight is appreciated. Thanks!

  • johndoe - 2011-01-14

    jason.the.adams wrote:
    I have a benign hunk of code that converts the value of a structure: MyStructure.Value — where Value is a Real. Here's the culprit:

    SomeString := REAL_TO_STRING(myStructure.Value);
    

    Within the debugger I see that my value is what I expect it to be—159.155—but the string I receive is '159.154'. Huh. Has anyone else run into this? Seems like something that really shouldn't have any problems. It seems like my last bit is being ignored in some form.

    There may be a small difference between the floating-point conversion routine used by the debugger (and thus executed on the PC) and the one used by the PLC.

  • jason.the.adams - 2011-01-14

    johndoe wrote:
    There may be a small difference between the floating-point conversion routine used by the debugger (and thus executed on the PC) and the one used by the PLC.

    I don't think so, and here's why: if it were merely the debugger, then the internal value would remain correct when I send it via TCP to the client. The client doesn't receive a float type but a string, and the string it receives is the incorrect value.

    I'm tempted to check out the DWord and see about perhaps writing my own conversion function, but that would certainly be a real bugger. I've also considered sending it to the client as a UDINT_TO_STRING, then having the client convert the UDINT to a float.
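
    For what it's worth, here is a minimal sketch of what such a function might look like (a hypothetical REAL_TO_STRING3, fixed at three decimals; it assumes Standard.lib's CONCAT/RIGHT and that REAL_TO_DINT rounds to the nearest whole number):

    FUNCTION REAL_TO_STRING3 : STRING
    VAR_INPUT
       rIn: REAL;
    END_VAR
    VAR
       diScaled: DINT;
       sInt: STRING(12);
       sFrac: STRING(12);
    END_VAR
    (* scale so the third decimal becomes the last integer digit;
       REAL_TO_DINT rounds to the nearest whole number *)
    diScaled := REAL_TO_DINT(rIn * 1000.0);
    sInt := DINT_TO_STRING(diScaled / 1000);
    (* 1000 + remainder gives '1abc'; RIGHT keeps the zero-padded 'abc' *)
    sFrac := RIGHT(DINT_TO_STRING(1000 + ABS(diScaled MOD 1000)), 3);
    (* sketch only: values between -1 and 0 lose their sign *)
    REAL_TO_STRING3 := CONCAT(sInt, CONCAT('.', sFrac));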

    Also, to clarify, I am using v2.3 on a WAGO PLC.

    Thanks!

  • jason.the.adams - 2011-01-14

    The plot thickens!

    It is a rounding error in the thousandths decimal place, and it only occurs when the thousandths digit is a 5 AND the last digit. For example:

    159.155 // Returns 159.154
    159.255 // Returns 159.254
    159.1551 // Returns Same
    159.154 // Returns Same
    159.156 // Returns Same
    0.005 // Returns 0.004
    

    Absolutely marvelous. I can't think of anything to add to that.
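
    A plausible reading of that pattern (not verified here): none of these values has an exact REAL representation, and for each of them the nearest REAL sits just below the exact x.xx5, so a converter that truncates the third decimal instead of rounding it lands on x.xx4. Illustrative values from an IEEE-754 calculator:

    159.155  -> nearest REAL ~159.1549988  -> truncated to 3 decimals: 159.154
    159.1551 -> nearest REAL ~159.1551056  -> truncated to 3 decimals: 159.155
    0.005    -> nearest REAL ~0.0049999999 -> truncated to 3 decimals: 0.004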

  • shooter - 2011-01-17

    And to make it worse, there is a difference in rounding between the simulator and the PLC.
    There is even a difference between compiled versions.
    Digital is NOT an exact science; however, it is always the same miscalculation.

  • johndoe - 2011-01-17

    jason.the.adams wrote:
    I don't think so, and here's why: if it were merely the debugger, then the internal value would remain correct when I send it via TCP to the client. The client doesn't receive a float type but a string, and the string it receives is the incorrect value.
    I'm tempted to check out the DWord and see about perhaps writing my own conversion function, but that would certainly be a real bugger. I've also considered sending it to the client as a UDINT_TO_STRING, then having the client convert the UDINT to a float.

    Sorry, I don't understand: who is the "client"?

    jason.the.adams wrote:
    159.155 // Returns 159.154
    159.255 // Returns 159.254
    159.1551 // Returns Same
    159.154 // Returns Same
    159.156 // Returns Same
    0.005 // Returns 0.004

    Referring to the 1st line: who writes 159.155 into the PLC? If you write 159.155 (as a string) in the debugger, I think the debugger converts it into a 32-bit sequence (conversion no. 1, on the PC) and transmits it to the PLC. If you watch this value in the debugger, I think the debugger converts the 32 bits received from the PLC back to a string (conversion no. 2, on the PC). If you use the REAL_TO_STRING function, the PLC converts its 32 bits to a string (conversion no. 3, on the PLC).

    Now, which is the buggy conversion: no. 1, no. 2, or no. 3?

    If you can access the 32-bit internal representation, try this site to obtain the correct (I think) conversion:
    http://babbage.cs.qc.edu/IEEE-754/32bit.html
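
    One way to get at that 32-bit representation from ST (a sketch, assuming CoDeSys 2.3 pointer syntax) is to read the REAL through a POINTER TO DWORD:

    VAR
       ff: REAL := 159.155;
       pdw: POINTER TO DWORD;
       dwBits: DWORD;
    END_VAR
    (* reinterpret the REAL's four bytes as a DWORD; no value conversion happens *)
    pdw := ADR(ff);
    dwBits := pdw^;   (* expected: 16#431F27AE *)

    dwBits can then be typed into the calculator, or sent to the client as a plain integer and decoded there.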

  • jason.the.adams - 2011-01-21

    shooter wrote:
    And to make it worse, there is a difference in rounding between the simulator and the PLC.
    There is even a difference between compiled versions.
    Digital is NOT an exact science; however, it is always the same miscalculation.
    Actually, digital conversion is a science, and a rigidly mathematical one at that. A miscalculation of this sort is not an "oh, that happens" matter. No disrespect, but it should be exact, always.

    johndoe wrote:
    Sorry, I don't understand, who is the "client"?
    The "client" is software I wrote that exchanges data with the PLC via UDP/TCP. For this issue, however, the client shouldn't really matter; it only deals with a string value, and it tells me only what it actually receives—and, for that matter, what the PLC sent.

    johndoe wrote:
    Referring to the 1st line: who writes 159.155 into the PLC? If you write 159.155 (as a string) in the debugger, I think the debugger converts it into a 32-bit sequence (conversion no. 1, on the PC) and transmits it to the PLC. If you watch this value in the debugger, I think the debugger converts the 32 bits received from the PLC back to a string (conversion no. 2, on the PC). If you use the REAL_TO_STRING function, the PLC converts its 32 bits to a string (conversion no. 3, on the PLC).
    Now, which is the buggy conversion: no. 1, no. 2, or no. 3?
    Whether I assign the value pre-compile as a constant, at run time via the debugger as a variable, or client-to-PLC via TCP, the REAL_TO_STRING conversion fails. I'm quite sure the float being entered is correct, and it's not merely a visual difference between the PLC and the debugger. Even if I were to ignore the debugger entirely (removing no. 1 and no. 2), it still outputs the wrong value.

    Thanks for the thoughts! It's a real head scratcher.

  • johndoe - 2011-01-28

    jason.the.adams wrote:
    Whether I assign the value pre-compile as a constant, at run time via the debugger as a variable, or client-to-PLC via TCP, the REAL_TO_STRING conversion fails. I'm quite sure the float being entered is correct, and it's not merely a visual difference between the PLC and the debugger. Even if I were to ignore the debugger entirely (removing no. 1 and no. 2), it still outputs the wrong value.

    I agree with you: the buggy conversion is the one performed by REAL_TO_STRING.
    I tried this code on a Wago 750-849:

    VAR
       ff: REAL := 159.155;
       ss: STRING(20);
       bb: ARRAY[0..3] OF BYTE;
       p: POINTER TO BYTE;
       i: INT;
    END_VAR
    (* copy the four raw bytes of ff into bb *)
    p := ADR(ff);
    FOR i := 0 TO 3 DO
       bb[i] := p^;
       p := p + 1;
    END_FOR
    (* convert the same value with the library function *)
    ss := REAL_TO_STRING(ff);
    

    After execution, bb contains the hex representation of 159.155 (431F27AE, reading the bytes in big-endian order); according to my IEEE-754 Bible (http://babbage.cs.qc.edu/IEEE-754/) it is correct.
    ss contains 159.154, which is wrong: the hex representation of that value is 431F276D.
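
    Decoding the two bit patterns by hand (illustrative arithmetic, using value = (1 + mantissa/2^23) * 2^(exponent - 127)) makes the difference visible:

    16#431F27AE: sign 0, exponent 16#86 (134 - 127 = 7), mantissa 16#1F27AE = 2041774
                 value = (1 + 2041774/8388608) * 2^7 = ~159.1549988
    16#431F276D: same sign and exponent, mantissa 16#1F276D = 2041709
                 value = (1 + 2041709/8388608) * 2^7 = ~159.1540070

    159.1549988 rounds to 159.155 at three decimals but truncates to 159.154, so REAL_TO_STRING appears to truncate the final digit rather than round it.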

