Preface
This chapter is part of a series of posts:
- Remote control the great Stockcorner JC3-s tuner
- Automate tuning of the JC3-s tuner
- Complete, headless and web controlled remote operation of the FT-991a
- Remote transmission and reception of all common operation modes including voice
To learn more about the series, go here.
Chapter 2 - Remotely operate the Yaesu FT-991a (and automate tuning)
In the previous post I explained how I use an ESP8266 development board, a transistor, a buck converter and some wires to tap into the control box of the JC-3S tuner, a process which can be applied to other tuners as well.
In this post I'm going over how to use this new power to automate the tuning process, since right now all we can do is press the START button virtually and watch the light show of the KEY LED.
To replicate elements presented in this post you need:
- An instance of Home Assistant
- A single-board computer like a Raspberry Pi, or any other 24/7 machine capable of running Linux, attached to your transceiver
- Node-RED installed somewhere* on your network
- A way to control your tuner remotely, see the previous post.
Use cases for the Raspberry Pi
Many HAMs have Raspberry Pis or comparable single board computers at home. Boards like the Raspberry Pi 4B are powerful enough to run light desktop loads and can completely replace Intel NUCs or even older notebooks. Thanks to Linux, there are limitless options for using the power and I/Os of a Pi. My Pi 4 is used in the following way:
- Run Ubuntu 20.04 LTS 64-bit for Raspberry Pi 4
- Run headless - aka no screen, keyboard or mouse attached
- Run an xRDP server instead of VNC or TeamViewer - if a visual remote session is required (which in my setup it almost never is)
- Be permanently connected to the FT-991a
- Transport and receive audio with the lowest possible latency
- Run flrig but render it on a remote X server
- If needed: run wsjt-x, fldigi, js8call locally
In my use case I spend little time in front of the transceiver. Most of the time I'm on the sofa or in my office, but I am still QRV using digi or even voice modes.
Node-RED for automation

In the previous post I briefly touched upon the fact that Home Assistant offers powerful automations. But sometimes they are not enough. Enter Node-RED, a Node.js based visual flow designer with a great community and lots of custom nodes we can pull in.
Node-RED is, like Home Assistant, friendly to non-coders. All you need is a talent for logic and an idea of what you want to achieve.
With the dashboard nodes installed, creating a responsive dashboard is very easy. Of course, there's also a tight Home Assistant integration. I use a combination of the native ESPHome web APIs and the Home Assistant integration.
Have ESPHome, Home Assistant and Node-RED exchange notes
ESPHome offers two methods to call exposed functionality of an ESP, like polling sensors or setting GPIOs: a real-time event source and REST. One could also use MQTT, but I still haven't understood MQTT.
I tried a Node-RED module for server-sent events, but wasn't happy with it. Instead I use a plain GET against the native REST API to poll the Gosund SP111 plug for the current power draw:
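For reference, the same poll works from anywhere on the network. A minimal Python sketch against ESPHome's REST API, assuming the plug's web server component is enabled, it resolves as gosund-sp111.local and the power sensor's object id is power (all three are assumptions about my ESPHome YAML):

```python
import json
import urllib.request

def read_power(host: str = "gosund-sp111.local", sensor: str = "power") -> float:
    """Return the current power draw reported by the plug's ESPHome web server."""
    with urllib.request.urlopen(f"http://{host}/sensor/{sensor}", timeout=5) as resp:
        data = json.load(resp)
    return data["value"]  # numeric value; data["state"] holds the formatted string

print(f"Current draw: {read_power()} W")
```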


Using the bi-directional Home Assistant integration, interacting with all devices under the control of Home Assistant is very easy. This extends to platforms Home Assistant merely integrates, like Philips Hue. Why not turn all bulbs in the house red when you go QRV? Why not turn off the TV to spare the YL from your voice coming out of the speakers?
For example, if I want to turn on the smart power plug, I can simply use Home Assistant nodes:
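Outside Node-RED, the same thing can be done against Home Assistant's REST API. A minimal sketch, assuming the entity id switch.gosund_sp111 and a long-lived access token (both placeholders for your own setup):

```python
import json
import urllib.request

HA_URL = "http://homeassistant.local:8123"   # assumption: default Home Assistant address
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"        # create one in your Home Assistant profile

def toggle_plug(entity_id: str = "switch.gosund_sp111") -> int:
    """Call Home Assistant's switch.toggle service for the given entity."""
    req = urllib.request.Request(
        f"{HA_URL}/api/services/switch/toggle",
        data=json.dumps({"entity_id": entity_id}).encode(),
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status  # 200/201 on success

toggle_plug()
```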

*You can run Node-RED on any of your local devices and achieve the same effect, because ideally you communicate with all devices via their local host names. This way you can "develop" your flows on your machine and later export/import them into the 24/7 Node-RED host on your network.
Tip: If you don't have a powerful enough system attached to your transceiver, you could install Node-RED on the same machine running Home Assistant. This makes integration even easier and gives you a single master for all things automation.
If you're just getting started: why not use the board you are going to use to control your rig?
With Home Assistant, ESPHome and Node-RED exchanging messages, it's time to add the FT-991a.
flrig
Part of the fl* series of great software packages by W1HKJ & Associates, flrig connects to your transceiver and allows controlling most of its aspects via its user interface. Many functions are also exposed via an XML-RPC interface, allowing limited control of flrig via remote commands.

There are many guides out there on how to set up flrig, so I will only touch upon the important tips required for this particular use case.
flrig does not expose the "DT gain" meter of the FT-991a. When running digi modes, you must watch the ALC meter (by clicking on the S-Meter during transmit) and adjust the input volume reaching the FT-991a. The ALC meter is not exposed via XML-RPC, leading to a decision discussed below.
Adding a fixed virtual com port for flrig
When you attach your transceiver to a Linux machine via USB, it will be assigned a virtual com port. This com port can change every time the device appears on the USB port. For the FT-991a, this is especially problematic: one USB cable carries the USB soundcard and the CAT interfaces, and the serial interfaces appear as /dev/ttyUSB* devices. It's a lucky draw whether the previously configured port still works the next time. This makes remote and headless operation cumbersome and automation impossible.
To have flrig (and all other applications sending CAT commands) find the FT-991a at the same spot all the time, we need a udev rule:
$ cat /etc/udev/rules.d/10-local.rules
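# FT-991a CAT interface ("Enhanced Com Port", interface 00): create a stable /dev/ft991a symlink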
SUBSYSTEMS=="usb", ATTRS{interface}=="Enhanced Com Port", ATTRS{bInterfaceNumber}=="00", GROUP="dialout", MODE="0666", SYMLINK+="ft991a"
This udev rule ensures that whenever the FT-991a appears to the system, a symlink is created under which it can always be reached, in this case /dev/ft991a. We avoided the roulette. In addition, no elevated privileges are required to access the device: the user starting an application that talks to the CAT port just needs to be a member of the dialout group.
After adding the rule, we can reload the udev system without a reboot via:
udevadm control --reload-rules && udevadm trigger
Run flrig without an active local X session
I see a lot of 90's styled webpages, best viewed in IE 7.0, telling you to run VNC to remotely access a system attached to your rig. That is old school. VNC does not allow audio transport (xRDP does) and its performance is horrible.
Short detour: If you really want a virtual desktop, which you can log in to and run your ham applications on with the rig attached via USB, please go for xRDP.
Install guides: Ubuntu / Arch (go for xrdp-git) / Debian. To enable audio ingress and egress, install the xrdp pulseaudio module.
A good Linux client is called "Remmina", a good macOS client is called "Royal TSX" and Windows users have a client built in, because xRDP is a free implementation of Microsoft's Remote Desktop Protocol.
Tip: Apache Guacamole is a web-based remote gateway which enables access from a browser with bi-directional sound redirection. I make use of that on my root server for when I don't trust the machine I am using.
Instead of starting a full remote desktop session, I have two methods of making use of remote X apps like flrig:
Method a) Remote headless X server
My first attempt was to never ever look at flrig and just communicate with it via XML-RPC. To an extent I still make use of XML-RPC, but it doesn't expose all the functions the user interface does, so on its own it is not enough.
I am currently not using this approach, but if you want to try, here's the command:
xvfb-run -a flrig
xvfb-run is a script which launches the virtual display server "X virtual framebuffer" (Xvfb). It's a neat way to run apps which refuse to start if they can't find a display to render to. In the case of flrig, we can use this if we are happy with the functions exposed via XML-RPC.
Table of XML-RPC exposed flrig functions
Function | key | Description |
---|---|---|
main.set_frequency | d:d | set current VFO in Hz |
main.get_version | s:n | returns version string |
rig.get_AB | s:n | returns vfo in use A or B |
rig.get_bw | s:n | return BW of current VFO |
rig.get_bws | s:n | return table of BW values |
rig.get_bwA | s:n | return BW of vfo A |
rig.get_bwB | s:n | return BW of vfo B |
rig.get_info | s:n | return an info string |
rig.get_mode | s:n | return MODE of current VFO |
rig.get_modeA | s:n | return MODE of current VFO A |
rig.get_modeB | s:n | return MODE of current VFO B |
rig.get_modes | s:n | return table of MODE values |
rig.get_sideband | s:n | return sideband (U/L) |
rig.get_notch | s:n | return notch value |
rig.get_ptt | s:n | return PTT state |
rig.get_power | s:n | return power level control value |
rig.get_pwrmeter | s:n | return PWR out |
rig.get_smeter | s:n | return Smeter |
rig.get_split | s:n | return split state |
rig.get_update | s:n | return update to info |
rig.get_vfo | s:n | return current VFO in Hz |
rig.get_vfoA | s:n | return vfo A in Hz |
rig.get_vfoB | s:n | return vfo B in Hz |
rig.get_xcvr | s:n | returns name of transceiver |
rig.get_volume | s:n | returns volume control value |
rig.get_rfgain | s:n | returns rf gain control value |
rig.get_micgain | s:n | returns mic gain control value |
rig.set_AB | s:s | set VFO A/B |
rig.set_bw | i:i | set BW iaw BW table |
rig.set_bandwidth | i:i | set bandwidth to nearest requested value |
rig.set_BW | i:i | set L/U pair |
rig.set_frequency | d:d | set current VFO in Hz |
rig.set_mode | i:i | set MODE iaw MODE table |
rig.set_modeA | i:i | set MODE A iaw MODE table |
rig.set_modeB | i:i | set MODE B iaw MODE table |
rig.set_notch | d:d | set NOTCH value in Hz |
rig.set_power | i:i | set power control level, watts |
rig.set_ptt | i:i | set PTT 1/0 (on/off) |
rig.set_vfo | d:d | set current VFO in Hz |
rig.set_vfoA | d:d | set vfo A in Hz |
rig.set_vfoB | d:d | set vfo B in Hz |
rig.set_split | i:i | set split 1/0 (on/off) |
rig.set_volume | i:i | sets volume control |
rig.set_rfgain | i:i | sets rf gain control |
rig.set_micgain | i:i | sets mic gain control |
rig.swap | i:i | execute vfo swap |
rig.cat_string | s:s | execute CAT string |
I am still using XML-RPC for the automatic tuning process in Node-RED, which I will explain in a minute. Or an hour.
Method b) Rendering remote flrig locally
In my setup, I don't need a virtual desktop at all. This saves resources. I can do this because macOS can run an X server via XQuartz, which in turn can render applications running on a remote system.
Windows 10 can also do that. Linux users won't even read this part.
Assuming you have access to an X server and can SSH into the machine, starting flrig can be as simple as this:
ssh -Y user@remote-host flrig

The app is running on the remote machine; it's just that the UI is rendered on your machine. The magic of *nix.
Having apps rendered locally is certainly better than accessing them via a virtual desktop, but even better is locally running apps tapping into remote resources, like sound and CAT. But we'll get to that later.
Now we have most components together. You can stop here if you don't need the audio from your transceiver on your local machine (or are happy with xRDP for that) and if you don't require automation for tuning.
flrig's XML-RPC and Node-RED
No matter if flrig runs in a virtual framebuffer or is rendered locally or remotely: once it's running on the machine connected to the rig, we can use XML-RPC to poll and set different states:
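As a minimal sketch of what that looks like from plain Python, using a handful of functions from the table above (the hostname raspberrypi.local is an assumption about my setup; Node-RED's XML-RPC nodes do the same thing):

```python
import xmlrpc.client

# flrig listens on port 12345 by default (see further below)
flrig = xmlrpc.client.ServerProxy("http://raspberrypi.local:12345")

print(flrig.main.get_version())   # flrig version string
print(flrig.rig.get_xcvr())       # name of the connected transceiver
print(flrig.rig.get_vfo())        # current VFO frequency in Hz
print(flrig.rig.get_mode())       # mode of the current VFO
print(flrig.rig.get_smeter())     # S-meter reading, 0-100
flrig.rig.set_power(15)           # set RF power to 15 W
```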

With the knowledge of the XML-RPC functions rig.get_mode/rig.set_mode, rig.get_power/rig.set_power and rig.set_ptt, we can create a Node-RED flow which automates the FT-991a side of tuning by controlling the mode and RF power.

svc: switch.toggle kicks our little ESP8266 into action. It calls Home Assistant to trigger the ESP to close the gate of the transistor and thus put the tuner into tune mode. Once this action is triggered, Node-RED's trigger nodes take care of setting the FT-991a's mode to 'AM' and RF power to '15' - settings required for tuning. After 12 seconds they set mode and power back to the states saved 15 seconds earlier by change nodes named "Save X before tune".
The total time allocated to tuning is 12 seconds. Power and mode are saved every 15 seconds. There are surely smarter ways, like saving mode and power once tuning has been triggered and before the action starts, but this timer-based approach works for me so far.
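For readers who prefer code to flow diagrams, here is roughly the same sequence as a Python sketch. It reuses the assumed hostnames and token from the earlier snippets, assumes a flrig build whose rig.set_mode accepts the mode name directly (per the table above, older builds expect an index into the MODE table), and uses a hypothetical entity id switch.jc3s_tune for the ESP8266 switch:

```python
import json
import time
import urllib.request
import xmlrpc.client

HA_URL = "http://homeassistant.local:8123"
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"
flrig = xmlrpc.client.ServerProxy("http://raspberrypi.local:12345")

def ha_toggle(entity_id: str) -> None:
    """Toggle a switch entity via Home Assistant's REST API."""
    req = urllib.request.Request(
        f"{HA_URL}/api/services/switch/toggle",
        data=json.dumps({"entity_id": entity_id}).encode(),
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req, timeout=5).close()

def tune() -> None:
    saved_mode = flrig.rig.get_mode()          # save the current state right now ...
    saved_power = int(flrig.rig.get_power())   # ... instead of on a 15 s timer
    flrig.rig.set_mode("AM")                   # settings required for tuning
    flrig.rig.set_power(15)
    ha_toggle("switch.jc3s_tune")              # ESP8266 "presses" the tuner's START button
    time.sleep(12)                             # the 12 s tuning window from the flow
    flrig.rig.set_mode(saved_mode)             # restore mode and power
    flrig.rig.set_power(saved_power)

tune()
```

Saving mode and power only when tuning is actually triggered is the "smarter way" mentioned above; treat this as a sketch, not a drop-in replacement for the flow.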
Notice how I don't get the LED (KEY) status from the ESP8266. Since I switched from headless flrig to a locally rendered flrig, I have a (not super fast) S-meter to check whether tuning was successful. This is better than checking whether an LED turns off before the tuner's internal time limit for its tuning mode runs out.
For auto tune to work, flrig must be started after the rig has been powered on and after the ESP8266 has "booted" up. Please keep this in mind.
Bonus: S-Meter
flrig has a rig.get_smeter call which delivers values from 0 - 100. This is not especially useful when you want S-meter values in a gauge in Node-RED. If you opt to render flrig's UI one way or the other you may not need this, but I still find it neat to have an S-meter gauge in Node-RED for when I just want to monitor the rig.
In the upcoming GitHub repo you will find a node which translates the values sent by flrig to S-values 1-15. Yes, I am aware that the scale ends at S9; Node-RED just doesn't allow an off-the-shelf gauge to have customized labeling.
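As a rough idea of what such a translation could look like (the exact scaling in the repo may differ), a sketch that maps flrig's 0-100 reading linearly onto 15 gauge steps, where steps 1-9 stand for S1-S9 and 10-15 for S9 plus 10-60 dB:

```python
def smeter_to_step(raw: float) -> int:
    """Map flrig's rig.get_smeter value (0-100) onto a 1-15 gauge step."""
    raw = max(0.0, min(100.0, float(raw)))
    return max(1, round(raw * 15 / 100))

print(smeter_to_step(60))   # -> 9, i.e. S9 under this assumed linear scaling
```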
Using a remote flrig instance with digi mode apps
A neat feature of flrig comes from its well-established position in the HAM radio ecosystem. Popular digital mode tools like fldigi and wsjt-x, as well as its siblings jtdx and js8call, offer the ability to use a remote flrig instance for radio control:

For some reason the XML-RPC port of flrig is not documented. The default is '12345' - yes, really. You can change it in ~/.flrig/flrig.prefs
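Before pointing the digi mode apps at it, a quick sanity check that the port is reachable from the machine they run on (hostname again an assumption from the earlier snippets):

```python
import xmlrpc.client
print(xmlrpc.client.ServerProxy("http://raspberrypi.local:12345").main.get_version())
```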
Now all the components are in place to remotely power, adjust and tune our shack.
What is missing is a proper, low-latency bi-directional audio transport. Without it, we cannot run digi modes or voice applications from our local machine. Luckily, there are also great open source methods to achieve this. More on that in the next post.