
Good improvement in Linux real time.

mondinmr · created 2021-12-11 · last updated 2022-11-07

  • mondinmr - 2021-12-11

    We have been using Linux SL since 3.5.12.
    On an i5 6th gen., isolating 2 cores and using the RT Preempt kernel precompiled by Debian, it has been very interesting to see the improvements update after update.

    3.5.12 Max Jitter in field application ~180µs.
    3.5.14 Max Jitter in field application ~102µs.
    4.2.0 Max Jitter in field application ~62µs.

    In 4.2.0 the jitter is close to that of Windows RTE on the same hardware, but with a big advantage on the security side.
    Linux SL runs in user space, and it's possible to isolate it in a chroot jail!!!
    Windows RTE runs in the kernel ring, which is very dangerous from every security standpoint.
    Today's continuous upgrading of the Windows OS makes it a good desktop OS, but creates tons of problems in industrial field applications.
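
    For illustration, here is a minimal sketch of such a chroot jail. The install path /opt/codesys, the binary name codesyscontrol.bin and the jail location are assumptions; adjust them to your system, and note that a real deployment also needs /dev, /proc, etc. inside the jail:

    # Sketch only: build a minimal jail and copy the runtime plus its libraries into it.
    JAIL=/srv/codesys-jail
    mkdir -p $JAIL/opt $JAIL/etc $JAIL/lib $JAIL/lib64 $JAIL/tmp
    cp -a /opt/codesys $JAIL/opt/                   # assumed install location
    cp /etc/CODESYSControl.cfg $JAIL/etc/
    # copy every shared library the binary links against
    for lib in $(ldd /opt/codesys/bin/codesyscontrol.bin | awk '{print $3}' | grep '^/'); do
        cp --parents "$lib" $JAIL
    done
    cp --parents /lib64/ld-linux-x86-64.so.2 $JAIL  # the dynamic loader itself (x86-64)
    chroot $JAIL /opt/codesys/bin/codesyscontrol.bin /etc/CODESYSControl.cfg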

    This message is just to congratulate the team behind the development of CODESYS!

    👍 4
    • Ingo - 2021-12-12

      Thanks a lot for your feedback. I will forward it to the team.

      Some additional information:
      we are working on a student project to analyze the real-time performance on Kubernetes. On the test system, we got a maximum jitter below 10µs. This was achieved with several tweaks to the system, and by making heavy use of multicore. It's very impressive what's possible with a current real-time kernel!

  • mondinmr - 2021-12-17

    Wow, just tested on a J1900!!! 39µs max jitter!!!
    We'll put it in production next month.

    Last edit: mondinmr 2021-12-17
  • mondinmr - 2021-12-17

    @Ingo, you got me going! I played around with /etc/CODESYSControl.cfg, interrupts, kernel parameters, etc. ...
    22µs on the J1900 ...

    Not under 10µs like your tests, but a good result for me.
    On a system running KDE as the window manager!


    Last edit: mondinmr 2021-12-17
  • mondinmr - 2021-12-17

    Another test!

    - Intel(R) Core(TM) i7-8565U CPU @ 1.80GHz
    - Debian GNU/Linux Buster;
    - RT kernel;
    - Headless system;
    - KVM hypervisor;
    - Hyperthreading disabled;
    - Cores 2 and 3 detached from the OS;
    - Cores 0 and 1 used by the OS and pinned to a Windows 10 VM;
    - Windows 10 running with the main GPU, USB and WiFi in passthrough;

    - Runtime Linux SL 4.0.2;
    - codesyscontrol main process forced to core 2;
    - IEC tasks fixed and pinned to core 3;
    - Many tunings of kernel parameters;

    Max jitter 18µs after 20 minutes!!!

    • Ingo - 2021-12-17

      Hehe, sounds great!

      One of the "tricks" in our setup is that we use "isolcpus" and start the runtime without multicore support, bound to one of the isolated cores.

      So this core is free of any interrupts. Only some rare kernel locks produce slight jitter.
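
      A minimal sketch of that binding, assuming core 3 was isolated via isolcpus=3 and the runtime binary lives at /opt/codesys/bin/codesyscontrol.bin (the path is an assumption):

      # start the runtime pinned to isolated core 3; nothing else is scheduled there
      taskset -c 3 /opt/codesys/bin/codesyscontrol.bin /etc/CODESYSControl.cfg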

  • mondinmr - 2021-12-17

    Isolcpus is the basis. Then there are a couple of extra parameters on the kernel line.
    An additional init script that shifts the interrupts the kernel previously assigned away from the isolated cores is very useful.
    Then there are the two interval parameters in codesyscontrol.cfg, but I haven't found any documentation and I still don't quite understand what they change. I know for a fact that they affect jitter a lot. And finally, some tuning in /sys and /proc.

  • mondinmr - 2021-12-20

    Intel(R) Core(TM) i5-6440EQ CPU @ 2.70GHz

    Same kernel tuning.
    10µs!!!

    As in the attached screenshot.

    👍 2
  • mikuroung - 2021-12-31

    What is the Debian version?

    I wonder what kind of tuning you did.

    • mondinmr - 2022-01-03

      Buster. I'm still on Buster; Bullseye is too young.

      Here is my kernel line on the i7 using KVM as hypervisor.

      GRUB_CMDLINE_LINUX_DEFAULT="isolcpus=2,3 processor.max_cstate=1 intel_idle.max_cstate=0 acpi_irq_nobalance noirqbalance console=ttyS0 earlyprintk=ttyS0 quiet nofb loglevel=0 vfio-pci.ids=1002:6987,1002:aae0,8086:9dc8 nomodeset intel_iommu=on iommu=1 video=vesafb:off,efifb:off vfio_iommu_type1.allow_unsafe_interrupts=1 kvm.ignore_msrs=1"
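
      After editing /etc/default/grub the change still has to be applied; a minimal sketch using standard Debian tooling:

      sudo update-grub                          # regenerates /boot/grub/grub.cfg
      sudo reboot
      # after the reboot, confirm the parameters took effect:
      cat /proc/cmdline                         # should show the isolcpus/cstate options
      cat /sys/devices/system/cpu/isolated      # should print 2-3 (on kernels that expose it)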
      
      • In the BIOS I disabled hyperthreading.

      • isolcpus=2,3 detaches cores 2 and 3 from OS usage.
        I obtained better results on cores 2 and 3, leaving 0 and 1 to the OS.

      • processor.max_cstate=1 intel_idle.max_cstate=0
        Force power-management limits on Intel CPUs (locking the frequency and disabling power management in the BIOS is not enough).

      • acpi_irq_nobalance noirqbalance
        Force IRQ balancing off, keeping kernel IRQs away from cores 2 and 3.

      • A script is useful to move preassigned IRQs off cores 2 and 3.

      I found it in some forum.

      #!/bin/sh
      # Write a hex CPU affinity mask to every movable IRQ under /proc/irq.
      if [ $# != 1 ]; then
          echo "Syntax: $0 cpu_hex_mask"
          echo "Mask is 1-F where 1 is CPU0"
          echo
          exit 1
      fi
      for i in /proc/irq/*; do
          if [ -d $i ]; then
              # per-IRQ directory: set its smp_affinity mask
              echo $1 > $i/smp_affinity || true
          else
              # top-level file (e.g. default_smp_affinity)
              echo $1 > $i || true
          fi
      done
      

      ForceIrqToCore.sh 1
      This forces all movable IRQs to core 0.
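
      To verify it worked, watch the per-CPU interrupt counters; the CPU2/CPU3 columns should stop increasing (a generic Linux check, not CODESYS-specific):

      watch -n1 cat /proc/interrupts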

      • I modified /etc/init.d/codesyscontrol start function
      start_runtime () {
          #exit script if package is not installed
          [ -x "$EXEC" ] || exit 5
      
          if [ ! -z "$DEBUGOUTPUT" ]; then
              ARGS="-d $ARGS"
              if [ -z "$DEBUGLOGFILE" ]; then
                  DEBUGLOGFILE=/tmp/codesyscontrol_debug.log
              fi
          else
              DEBUGLOGFILE=/dev/null
          fi
      
          mkdir -p $WORKDIR
          cd $WORKDIR && ( $DAEMON $DAEMON_ARGS $EXEC $CONFIGFILE $ARGS >$DEBUGLOGFILE 2>&1 & echo $! >$PIDFILE )
          sleep 1
          if [ ! -z $DAEMON ] && which pidof >/dev/null 2>&1; then
              # wait up to 10 seconds for process to become ready
              local TIMEOUT=10
              while ! pidof -s $EXEC >$PIDFILE 2>/dev/null; do
                  TIMEOUT=$(expr $TIMEOUT - 1)
                  if [ "$TIMEOUT" = "0" ]; then
                      break
                  fi
                  sleep 1
              done
          fi
      
          do_status
          if [ $? -eq 0 ]; then
              rm $PIDFILE
              echo "Error: Failed to start codesyscontrol"
              exit 1
          else
              PID=$(cat $PIDFILE)
              # pin every core's frequency governor to maximum (crucial for low jitter)
              echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
              echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
              echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
              echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
              # mask 4 = CPU2: move the runtime process to core 2
              taskset -p 4 $PID >>/root/codelog.txt
              renice -n -20 -p $PID >>/root/codelog.txt
              # give all runtime threads SCHED_RR priority 99 (apt-get install tuna)
              tuna --threads $PID --priority=RR:99
              # disable latency sources: NUMA balancing, transparent hugepages, KSM
              echo 0 > /proc/sys/kernel/numa_balancing
              echo never > /sys/kernel/mm/transparent_hugepage/enabled
              echo 0 > /sys/kernel/mm/ksm/run
              echo "codesyscontrol started"
          fi
      }
      

      Forcing the runtime to core 2 (mask 4).
      I forced the nice level and priority with tuna and renice
      (apt-get install tuna).
      • I pinned the IEC tasks in CODESYS to core 3 with fixed pinning.
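
      A quick way to confirm that the affinity and priority actually stuck (codesyscontrol.bin as the process name is an assumption; adjust to your install):

      PID=$(pidof codesyscontrol.bin)
      taskset -p $PID     # expect affinity mask 4 (core 2)
      chrt -p $PID        # expect SCHED_RR priority 99 on the main thread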

      • Finally, I tuned /etc/CODESYSControl.cfg
      [CmpSchedule]
      SchedulerInterval=100 
      ProcessorLoad.Enable=1
      ProcessorLoad.Maximum=200
      ProcessorLoad.Interval=200
      DisableOmittedCycleWatchdog=1
      

      Lower SchedulerInterval and ProcessorLoad.Interval until the runtime takes 70-80% of core 3's usage. (I don't know exactly what each of them does.)
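
      To watch the per-core load while tuning (mpstat comes from the sysstat package; a generic check):

      mpstat -P 2,3 1    # prints the usage of cores 2 and 3 once per second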

      I also removed irqbalance:
      apt-get purge irqbalance

      👍 2

      Last edit: mondinmr 2022-01-07
      • Ingo - 2022-01-03

        Maybe I can explain the mystery about the scheduler interval:
        it is the interval in which the scheduler checks all IEC task watchdogs, as well as the processor-load watchdog.

        The IEC task watchdog is what you define in the task configuration in CODESYS. When you increase this value in the config file, your system will react later to a watchdog event.

        The interval of the processor-load watchdog can't be lower than the scheduler interval (at least, a lower value would not make much sense), because it is the scheduler which checks the processor load.

        What can you do with this interval?
        You can reduce the CPU overhead by increasing the interval, but it will not really have an effect on the maximum jitter.

  • MadsKaizer - 2022-01-03

    Thank you for sharing these improvement figures; it's always nice to see some real numbers behind a patch note like "Improved code execution" :)

  • mikuroung - 2022-01-06

    Thanks a lot for your answer.
    This is very helpful for me.

  • mondinmr - 2022-03-04

    Raspberry Pi 4, Debian 64-bit, stock RT kernel from the Debian repo.
    Runtime SL ARM64.
    A little tuning of kernel parameters and CPU pinning.
    Jitter <= 52µs with EtherCAT on the native adapter.
    I think that on ARM a big margin of improvement remains; 6-7µs should be possible.


    Last edit: mondinmr 2022-03-04
  • paho - 2022-08-03

    Good to hear that, because I'm using Linux SL on my i5 IPC too, running Debian with a 4.19 RT Preempt kernel. The max jitter is 272µs.
    Are there additional adjustments to make in the OS? My application is currently fixed to CPU1, and I want to know whether using multicore really brings an improvement...

    For my application I'm not that unhappy with the jitter, but I'm using Modbus TCP/IP bus couplers from WAGO, and if I send data in short intervals (like every 50ms) I can see some delays (not lost packets or errors) - I can see it on an LED that I switch on and off every 50ms. From time to time there seems to be a delay of one 0/1 cycle, like a pause of 50ms.
    The coupler is connected via a separate Ethernet interface with a static configuration, a point-to-point connection with a patch cable.

    I think this could also be a network driver issue... maybe someone has a suggestion for my problem.

    Thank you,
    paho

    • mondinmr - 2022-08-12

      Hi! In the latest versions of CODESYS Control for Linux everything has improved very nicely!

      Only a few steps are required to obtain very low jitter on many CPUs.

      0 - Use an RT Preempt kernel (Debian ships a precompiled RT kernel); see the sketch below.
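
      On Debian this is a one-liner (package name for amd64; other architectures differ):

      sudo apt-get install linux-image-rt-amd64
      # after a reboot, verify:
      uname -a                    # should mention PREEMPT_RT
      cat /sys/kernel/realtime    # prints 1 on a PREEMPT_RT kernel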

      1 - DISABLE HYPERTHREADING FROM UEFI Setup/BIOS!!!

      2 - On Intel CPUs, change this line in /etc/default/grub:

      GRUB_CMDLINE_LINUX_DEFAULT="isolcpus=2,3 processor.max_cstate=1 intel_idle.max_cstate=0 acpi_irq_nobalance noirqbalance quiet"
      

      isolcpus detaches cores 2 and 3 from the Linux kernel scheduler.
      The higher cores work better for me.
      processor.max_cstate and intel_idle.max_cstate tune power management.
      acpi_irq_nobalance noirqbalance avoid IRQ balancing. This has a big impact on jitter.
      Remember to run update-grub and reboot afterwards.

      3 - Use the following script to push all movable IRQs onto the low cores; use mask 1 or 3 to use cores 0 and/or 1 and leave 2 and 3 alone (a boot-time usage sketch follows the script):

      #!/bin/sh
      # Write a hex CPU affinity mask to every movable IRQ under /proc/irq.
      if [ $# != 1 ]; then
          echo "Syntax: $0 cpu_hex_mask"
          echo "Mask is 1-F where 1 is CPU0"
          echo
          exit 1
      fi
      for i in /proc/irq/*; do
          if [ -d $i ]; then
              # per-IRQ directory: set its smp_affinity mask
              echo $1 > $i/smp_affinity || true
          else
              # top-level file (e.g. default_smp_affinity)
              echo $1 > $i || true
          fi
      done
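
      The script name and location here are hypothetical; to apply it at every boot, call it before the runtime starts, e.g.:

      # e.g. from /etc/rc.local or a small systemd unit, before codesyscontrol starts
      /usr/local/sbin/ForceIrqToCore.sh 3    # mask 3 = cores 0 and 1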
      

      4 - Modify the do_status section of the start function in /etc/init.d/codesyscontrol:

          do_status
          if [ $? -eq 0 ]; then
              rm $PIDFILE
              echo "Error: Failed to start codesyscontrol"
              exit 1
          else
              PID=$(cat $PIDFILE)
              # pin every core's frequency governor to maximum (crucial for low jitter)
              echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
              echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
              echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
              echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
              # mask 4 = CPU2: move the runtime process to core 2
              taskset -p 4 $PID >>/root/codelog.txt
              # disable latency sources: NUMA balancing, transparent hugepages, KSM
              echo 0 > /proc/sys/kernel/numa_balancing
              echo never > /sys/kernel/mm/transparent_hugepage/enabled
              echo 0 > /sys/kernel/mm/ksm/run
              echo "codesyscontrol started"
          fi
      

      This moves the codesyscontrol process to core 2, pins the CPU frequency at maximum (this is very important for jitter) and applies some memory tuning.
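
      A quick sanity check after startup (generic Linux, not CODESYS-specific):

      cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor   # expect "performance" on all four cores
      grep MHz /proc/cpuinfo                                      # frequencies should sit near the maximum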

      5 - In CODESYS, go to the task group and select core 3 for all tasks. Now you have the GNU/Linux OS on cores 0 and 1, only the CODESYS runtime on core 2, and only IEC tasks on core 3.

      6 - Tune /etc/CODESYSControl.cfg as follows:

      [CmpSchedule]
      SchedulerInterval=200 
      ProcessorLoad.Enable=1
      ProcessorLoad.Maximum=200
      ProcessorLoad.Interval=200
      DisableOmittedCycleWatchdog=1
      

      With the 4.2 and 4.4.1 runtimes, max jitter should be between 13µs and 40µs, depending on the luck of your hardware configuration.

      We have many production machines out in the field in different hardware configurations (J1900, i5 5th gen, i5 6th gen) and the max jitter never goes over 40µs. On lucky hardware it stays under 13µs.
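
      Independent of CODESYS, you can sanity-check the raw scheduling latency of an isolated core with cyclictest from the rt-tests package; a minimal sketch:

      sudo apt-get install rt-tests
      # one prio-98 measurement thread pinned to core 3, 250µs period, one million loops
      sudo cyclictest -m -a 3 -t 1 -p 98 -i 250 -l 1000000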

      There are many wonderful things about the Linux runtime:

      1 - It works in user space. This means any stack exception brings the debugger to a yellow row indicating the fault.
      Windows RTE works in kernel space, where severe exceptions many times crash the whole device, bringing Windows to a reboot, a blue screen of death or a black screen (I have run into all of these situations on some machines). In such cases it's very hard to find the cause of the error (I only found it thanks to an RPi4 in my bag, connected in place of the Windows controller and swapped back after solving the problems).
      Kernel space is painful for the safety of IoT.

      2 - All Intel and Realtek chips sitting on a PCIe lane work very well with EtherCAT, even at 250µs cycle times, using the standard Linux kernel modules (drivers)!

      3 - You can develop external components in C.

      4 - GNU/Linux is a hundred times safer and more stable on production machines; you don't fall into the painful Windows 10/11 update system, which has become a nightmare in industrial automation!

      As a side note, we are also testing the ARM64 runtime on a Raspberry Pi 4, with our own distro built with Yocto.
      The Raspberry Pi 4 has its Ethernet connector on a PCIe lane, in contrast to the USB-attached Ethernet of the older Raspberries up to the 3.
      It's impressive! We tested an application for many hours, using an EtherCAT slave board on SPI, a C++ application for the slave, and CODESYS as master on the native Ethernet connector (master connected to the slave on the same RPi), and obtained a max jitter of 19µs at a 300µs cycle time.
      In a master-only configuration I controlled a Lenze servo inverter down to a 250µs cycle time.

      The only difference on the RPi 4 is the kernel command line of step 2; all other steps remain unaltered.

      👍 2

      Last edit: mondinmr 2022-08-12
      • paho - 2022-08-18

        Great, with that tutorial I've made it. Working like a charm.

        Another question: there is a current project where the client asked me whether we can put the CODESYS runtime on a virtual system (they have a big server farm with thousands of virtual systems) with soft real time (cycle times of 0.5-1ms). Has anyone encountered something like this before?

        Thanks a lot!

  • Ingo - 2022-08-16

    Hi @mondinmr,
    thanks again for this nice summary of real-time optimizations!
    And also thanks for the outlook regarding the Raspberry Pi 4.

    Looking forward to hearing more about that.

    Hint: if you post the Raspberry Pi results on CODESYS Talk, maybe use a separate topic for the RPi. Then it will be easier to find.

    Don't know if you found this already, but there is an FAQ from CODESYS giving some optimization hints: https://faq.codesys.com/pages/viewpage.action?pageId=122748972

    And soon, there will be a new chapter in the online help.

    Keep up the great work!

    Cheers,
    Ingo

  • paho - 2022-10-14

    Hi again,

    I did everything that mondinmr wrote in his tutorial above. When I start codesyscontrol and let my program run, the max jitter is ~15µs.
    If I let the program run for, let's say, a few hours overnight, the max jitter is 300µs. Is this because you cannot remove all IRQs from the two isolated cores and something is blocking the cycle?

    I switched yesterday from Debian 10 to Debian 11 with the RT patch (with all the tweaks from above), and apart from that I think the IEC tasks and everything else work perfectly.
    Also, the Intel I225-LM works great with my Modbus TCP/IP controller on Debian 11 out of the box.

    Greetings,
    paho

    • mondinmr - 2022-10-20

      Also on Debian 10? On many machines using a 4.19.x RT kernel it works well. Under medium load we see some spikes after many hours, but it stays below 50-100µs depending on the machine.
      Results obtained with LXQt, a heavy Qt/C++ application doing HMI and IoT work, a MySQL server and an OpenVPN service running on the other cores.

      In the attached image you can see 65µs max after 2.5 hours.

      It's a J1900 under medium load.

      👍 1

      Last edit: mondinmr 2022-10-20
      • paho - 2022-11-07

        Currently on Debian 11, but somehow I've managed to get it working with the steps you suggested.
        On an i5 11th-gen iEi DRPC industrial PC it has been working for a few weeks without any problems.
        Cores 2+3 are isolated from the rest, and there is no conflict between the CODESYS runtime and the NodeJS/MongoDB server, even under heavy load.

        This thread should be officially included in the CODESYS FAQ, given how rare info about the Linux runtime is.

        Thanks to everyone here, especially @mondinmr

        👍 1
    • mondinmr - 2022-10-25

      Here, an i5 5th gen.
      On the same PC there are:

      • Runtime with EtherCAT master
      • Qt/C++ settings manager, gateway, logger and recorder/player to provide and store program recordings
      • Qt/C++ HMI on the LXQt desktop
      • X2Go remote desktop connected via VPN
      • OpenVPN server
      • MySQL server for program management and message management

      As you can see, after about 2 hours the max jitter is 45µs, but most of the time it stays under 10µs; there are spikes of 15-20µs roughly every 10-15 minutes and of 30-45µs roughly every 30 minutes.

      We have had many machines working in this configuration in the field over the last 3 years.

      👍 1
