Monday, December 17, 2018

Annoying Adobe Updater

The most annoying thing on my Mac is (was!) the Adobe Flash Updater.  This program pops up, steals the focus and doesn't actually work.  It never succeeds in updating the Adobe Flash plugins.

The only way to do an update is to go to the Adobe web site with a web browser and download their apps again and install them manually.

Man will only be free, 
once the last computer has been strangled 
with the power cable of the last router.
— With apologies to Diderot.

There are many pages on the wild wild web that suggest how to suppress this atrocious thing, but I have not seen a method that actually works.  So I hunted all the Adobe updaters down with the top, kill and find commands and then rooted them out with brute force:
$ sudo su -
password
# find / -name "Adobe*app"
... 

# cd /Library/Application\ Support/Adobe/ARMDC/Application/
# mv "Adobe Acrobat Updater.app" "Adobe Acrobat Updater.bad"
# cd /Applications/Utilities/
# mv "Adobe Flash Player Install Manager.app" "Adobe Flash Player Install Manager.bad"

So there!


Postmortem
What continues to amaze me is that there are people working at software houses like this, who write the most atrocious, bug-ridden software and then have the nerve to inflict it on the world.  Have they no shame?


Cheers,

Herman

Friday, December 14, 2018

GQRX SDR on Ubuntu Linux Server 18.04

GNU Radio on Linux


Software Defined Radio requires a reasonably fast computer and won't work properly on a virtual machine.  The heart of Free SDR is of course GNU Radio, from here https://www.gnuradio.org/ and here https://wiki.gnuradio.org/index.php/Main_Page.

I like the GQRX program which I use with the RTL-SDR and Great Scott Gadgets HackRF One and these are all very well supported on Linux and Mac as described here http://gqrx.dk/tag/hackrf.

 Gqrx SDR 2.6 with RFSpace Cloud-IQ
While I can make this work on my Mac, whenever Apple releases a large OS update, I have to re-install the whole house of cards all over again.  This gets very tiring after a while.

So to get this lot working and keep it working, I bought a nice new Intel NUC https://www.intel.com/content/www/us/en/products/boards-kits/nuc.html and installed Ubuntu Linux Server 18.04 LTS on it.

This is a long term support (10 year) Linux version, which means that it will get security updates while the essentials remain more or less the same, so my GNU Radio software should keep working until 2028 or beyond and not get broken every few months by Apple.  Since I only use it every few months, it was broken every time I wanted to use it - sigh...

Ubuntu Linux Server Download

The advantage of a server version is that it contains only the essential software packages to get a computer running efficiently - no bloatware.  You are therefore assured of getting some raw speed.

However, since I am not a complete masochist, I install the lightweight XFCE desktop environment on it, so that I can do things without having to resort to ASCII art.

Download the 18.04  LTS server ISO file from here https://www.ubuntu.com/download/server/thank-you?version=18.04.1.0&architecture=amd64

On the Mac, open a terminal, set user to root and copy the ISO file to a USB stick as described here https://www.aeronetworks.ca/2013/05/using-dd-on-mac-to-copy-iso-file.html.  Instead of dd, you can also use cat - it works just as well.  (Even head or tail will do, if you can make head or tail of the syntax).

A word of caution: Never, never, never write to /dev/sda or /dev/disk1, since that will wipe your internal disk.  Watch the ins and outs.

Basic Installation

The Linux server software installs in seconds - in the blink of a lazy eye.  Stick the USB widget in the NUC and boot up.  Create a user account with a looong password, follow the defaults to use the whole disk, then reboot.  As easy as borscht.

You will now have a lightning fast machine that boots up to a beeyoootiful black screen and prompt, waiting patiently on your beck and call.

Install the Actually Useful Stuff

$ sudo su -
password
# apt install xfce4 mplayer firefox geany mousepad vlc x264 ffmpeg gstreamer1.0-plugins-* libreoffice gimp pdfshuffler xournal evince links lynx xnec2c xnecview

Something in the above will automatically pull in the build-essential package, so the compiler and headers will be there too.  With the above tools, you can control the world.

Go get some coffee, then:
# reboot

Login again and launch XFCE:
$ startx

Click the Default Config button and Mark's your Uncle.  Now you need neither Timmy nor Saty anymore.

Static IP Address

In order to use the NUC remotely over ethernet, it helps if it has a static address, so you know how to reach it.  You can configure this in the rc.local file, which is the last process to run at computer startup.  This is the best place to put user additions to the system, since at that point everything is up and running and stable.

First see what the name of the ethernet port is:
# ip link show
# ip addr show

It could be enp0s25 or some equally silly device name.  Also look at the address given by the DHCP server and pick a new one that is similar but not in the DHCP allocation range.

Create the /etc/rc.local file:
# cd /etc
# nano rc.local
Add this:
#!/bin/bash
ip addr add 192.168.1.200/24 dev enp0s25

Then make it executable and enable the rc-local process:
# chmod +x rc.local
# systemctl enable rc-local
# reboot

The machine will now have two IP addresses on the same port.  One given by the DHCP server and the other statically assigned.  Both should work.

Install GQRX

Install the GQRX repositories:
# add-apt-repository -y ppa:bladerf/bladerf
# add-apt-repository -y ppa:myriadrf/drivers
# add-apt-repository -y ppa:myriadrf/gnuradio
# add-apt-repository -y ppa:gqrx/gqrx-sdr
# apt update


Finally, install gqrx:
# apt install gqrx-sdr

You can now run the Volk optimizer to get even more speed:
# apt install libvolk1-bin
# volk_profile

Remote Access with the Secure Shell

If you install the XQuartz X server on your Mac, then you can open an xterm and launch a program on the NUC.  It will then transparently pop up on the Mac desktop:
$ ssh -X user@192.168.1.200 mousepad

Note that on this server version, the SSH daemon sshd runs at startup and supports X11 forwarding, so you can run X programs remotely with ssh without actually running X on the server.  You do, however, need an X server (Xorg or XQuartz) on your desktop/laptop computer.

If the above mousepad example works, plug your SDR widget into the NUC and launch GQRX:
$ ssh -X user@192.168.1.200 gqrx

Now you can run the NUC in your radio shack with a screen, keyboard and rodent attached, or you can stick the NUC and SDR gadget inside a NEMA weatherproof box, put it on a mast with a satcom antenna and access it over ethernet with SSH, from the comfort of your radio shack or your living room couch, with your laptop machine.

WiFi Interface

The NUC WiFi interface required some subtle attention to make it work. This young lady's guide was helpful:
https://www.linuxbabe.com/command-line/ubuntu-server-16-04-wifi-wpa-supplicant

First check if the device is detected and available:
# ifconfig -a
wlp58s0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether d4:6d:6d:d8:c7:7d txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0


Check if the firmware is installed:
# dmesg | grep firmware
[ 15.242446] iwlwifi 0000:3a:00.0: loaded firmware version 34.0.1 op_mode iwlmvm


Bring the interface up:
# ip link set wlp58s0 up

Look for networks:
# iw dev wlp58s0 scan
BSS 0e:b6:d2:a0:ef:12(on wlp58s0)
last seen: 5690.766s [boottime]
TSF: 9933444402819 usec (114d, 23:17:24)
freq: 2437
beacon interval: 100 TUs
capability: ESS Privacy ShortSlotTime (0x0411)
signal: -78.00 dBm
last seen: 1292 ms ago
Information elements from Probe Response frame:
SSID: yourssid


Install wpa_supplicant, since the default iw tool suffers from a segmentation fault:
# apt install wpasupplicant
(Note that the Ubuntu wpasupplicant install package doesn't have an underscore)

Create a configuration file:
# wpa_passphrase yourssid yourasciipassphrase
network={
ssid="yourssid"
#psk="yourasciipassphrase"
psk=ab5dcc5eccfbf3ff0867e94bb9a73d6ccd32789ec9aa1ade204b8d4d876a5225
}


Write it to the configuration file:
# wpa_passphrase naila 037649906 > /etc/wpa_supplicant.conf

Verify that it actually works by running wpa_supplicant in the foreground:
# wpa_supplicant -c /etc/wpa_supplicant.conf -i wlp58s0
Successfully initialized wpa_supplicant
wlp58s0: SME: Trying to authenticate with 54:b8:0a:1f:67:90 (SSID='naila' freq=2457 MHz)
wlp58s0: Trying to associate with 54:b8:0a:1f:67:90 (SSID='naila' freq=2457 MHz)
wlp58s0: Associated with 54:b8:0a:1f:67:90
wlp58s0: CTRL-EVENT-SUBNET-STATUS-UPDATE status=0
wlp58s0: WPA: Key negotiation completed with 54:b8:0a:1f:67:90 [PTK=CCMP GTK=TKIP]
wlp58s0: CTRL-EVENT-CONNECTED - Connection to 54:b8:0a:1f:67:90 completed [id=0 id_str=]

Ctrl-C

Run it again with the -B option in the background:
# wpa_supplicant -B -c /etc/wpa_supplicant.conf -i wlp58s0
Successfully initialized wpa_supplicant


Get an IP address with DHCP:

# dhclient wlp58s0

Test the connection:

# ifconfig -a
wlp58s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.17 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::d66d:6dff:fed8:c77d prefixlen 64 scopeid 0x20<link>
ether d4:6d:6d:d8:c7:7d txqueuelen 1000 (Ethernet)
RX packets 75 bytes 7048 (7.0 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 22 bytes 3060 (3.0 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Hermans-MacBook-Pro:~ herman$ ping 192.168.1.17
PING 192.168.1.17 (192.168.1.17): 56 data bytes
64 bytes from 192.168.1.17: icmp_seq=0 ttl=64 time=5.176 ms
64 bytes from 192.168.1.17: icmp_seq=1 ttl=64 time=1.836 ms


W00t!


Put the following in /etc/rc.local, with a few sleeps in there to allow the magical fairy dust to settle:
ip link set wlp58s0 up
sleep 1
wpa_supplicant -B -c /etc/wpa_supplicant.conf -i wlp58s0
sleep 5
ip addr add 192.168.1.250/24 dev wlp58s0


La voila!

Updates

For the next decade, only do security updates (or no updates at all if it is not hooked to the wild wild web).  DO NOT do feature updates.   This way, the system should keep working forever and ever, the same as the day you originally installed it.


Happy RF Hacking!

Herman






Wednesday, November 14, 2018

Care and Feeding of a Parabolic Reflector

If you want to listen to Jupiter sing, bounce a message off the Moon, bounce off aircraft, random space junk or meteor trails, or talk to a satellite or a little unmanned aircraft, you need a very high gain antenna.  An easy way to make one is from an old C-band satellite TV Big Ugly Dish (BUD).

Considering that the amount of space junk is ever growing, Junk Bounce Communications (TM) can only improve.  An advantage of Junk Bounce is that it works at any frequency, from UHF up to K-band, so orbiting space junk could become the new ionosphere, a neat radio wave reflector around the planet!

To use an unknown dish, you need to find its focal point and then make a little antenna with a good front to back ratio, to use as a feed.

Focal Length

The focal point of a parabola is easy to find using some forgotten high school geometry:
  • Measure the diameter (D) and the depth (d) of the dish.
  • The focal length F = D^2 / (16 x d)
Note that an offset feed dish is only half a parabola.  It is best to use a circular dish with centre feed.  There are millions of these things lying around, pick a good one.
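The focal length formula is easy to check with a few lines of Python; the dish dimensions below are hypothetical examples, not measurements of any particular BUD:

```python
# Focal length of a centre-fed parabolic dish: F = D^2 / (16 * d)
def focal_length(diameter_m, depth_m):
    """Return the focal length in metres from the dish diameter and depth."""
    return diameter_m ** 2 / (16.0 * depth_m)

# Hypothetical example: a 1.8 m dish that is 0.3 m deep at the centre
F = focal_length(1.8, 0.3)
print(round(F, 3))  # 0.675 m above the centre of the dish
```

The feed goes at that point, looking down into the dish.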

When it is free, take two.
-- Ancient Jewish proverb.

Some people may even pay you to please take their old BUD...

Feeding a Hungry Dish

Most satellites rotate slowly to improve their stability - the space station is an exception.  This means that a ground antenna needs to be circularly polarized, otherwise the signal will fade and fluctuate twice with each revolution.  This requires either a helical antenna or a turnstile Yagi antenna.

A Yagi antenna tends to have a very low impedance, while a helical antenna tends to have a very high impedance.  The parasitic elements of a Yagi load the active element, much like resistors in parallel.  One can use the same effect with a helical antenna, to reduce its impedance to something closer to a 50 Ohm co-axial feed cable.

A multifilar helical antenna can be tweaked to almost exactly 50 Ohm, by driving one filament and leaving the other ones floating, just like Yagi director elements.  The more floating metal parts, the lower the impedance gets.

Bifilar Helical Feed for WiFi ISM Band

An easy(!?) way to make a small helical feed antenna for the Industrial Scientific & Medical (ISM) S-Band is with semi-rigid coaxial cable of 2.2 mm diameter.  A semi rigid co-ax is a thin copper pipe, which is easy to form with your fingers, without any resulting welts and blisters (3 mm is more stiff, but still doable, while 3.6 mm is already hard to bend and twist by hand).


Bifilar Helix Model

The reflector should be circular and about 3 times the diameter of the helix - roughly 100 mm or more will do.  For the model, I made a square patch, 1 mm below Z = 0 - since it is easier to define in NEC.

Twist and Shout

Cut two filaments with a Dremel cutting disk, grind and file the ends till they are exactly the right length and then bend them carefully into a circle.  When the circle is as round as you can get it, slowly pull the two ends sideways until they are 49 mm apart.  Copper recrystallizes at room temperature, so take your time.  If the wire turns hard, leave it till the next day - then it will be soft again.

Bifilar Helix

I mounted the filaments into little wood dowels and glued one to a circular FR4 PCB reflector - the bottom of a coffee tin will work too. Note that the dowels will not be parallel - the ends of the wires should line up, which means that the two dowels will seem slightly off kilter.

The top cross bar could also be a straight piece of coax, since the current at that end is zero. Therefore, you could make a bifilar helix out of a single piece of wire, but bending and stretching it precisely is quite hard, which tends to crack it at the 90 degree bends.  I fixed the one below, with solder.  Pick your poison!

One Piece Helix

The outside of the one driven element must be soldered to the centre of a 50 Ohm feed line and the screen of the feed line must be soldered to the reflector.  It always requires some improvisations to make a helix, which is a large part of the 'fun'.

The other parasitic element will just be standing there above the ground plane, seemingly doing nothing, but it does affect the antenna pattern and impedance, so it is an important working part of the antenna.

Mount the feed at the end of a strong 1/2 inch wooden dowel rod, at the focal point of the dish.  I cut the FR4 reflector and the slots in the dowel with a Dremel cutting wheel, a small hacksaw and a file.  It required a jig made from multiple clamps and funny putty to hold everything square while the epoxy glue cured.  Another way is to put it together using little triangles and hot glue, then use epoxy on the other side, finally remove the hot glue and epoxy the rest.  This is the easy part...

 Helix Mounting Jig

I got the dowels at a gift shop - I bought a couple of little flags, kept the sticks and discarded the flags. That was very unpatriotic of me, but flag poles are commonly used for covert radio amateur antennas!


Completed Helical Feed 

If you zoom in on the picture, you'll see that the coax goes to one end of the helix.  The other end is left floating.  Tie the RG316 coax to the rod with waxed nylon lacing twine (Otherwise known as Johnsons dental floss!).  I glued a BNC connector into the disc.  BNC is not the best for ISM microwave frequencies, but it is easy to work and experiment with.

The base disc was cut from a bread board with a jig saw.  I marked the hole pattern with nails lightly tapped with a hammer through the dish mounting holes.

Paint the antenna feed with Conformal Coating (I used an Italian V66 conformal PCB spray), or any other kind of clear varnish to make it last a while.  Do not use coloured paint, since you don't know what was used to make the pigment.  If it is a metal salt, or carbon black, then the paint will ruin the antenna.

Polarization

An end fire helix is naturally circular polarized.  This little one is RHP, but when it reflects off the dish, the phase shifts 180 degrees and it becomes LHP.  So depending on what exactly you want to do with your feed, you got to be careful which way you wind it.

If you get confused, get a large wood screw.  A common screw is Right Handed.

Helix Design

From the famous graph of Kraus, we get the following:
  • Frequency: 2450 MHz helical array
  • c=299792458 m/s
  • Wavelength = 2.998x10^8 / 2.45x10^9 = 0.122 m
  • Axial Mode:
    • Circumference = 1.2 x 0.122 = 0.146 m
    • Diameter = 0.146 / pi = 0.0465 m
    • Pitch = 0.4 x 0.122 = 0.049 m
    • Turns = 1
  • Length of filament: sqrt(circumference^2 + pitch^2) x turns = 0.154 m
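The cookbook numbers above can be reproduced with a short script; the 1.2 and 0.4 factors are the circumference and pitch fractions read off the Kraus axial-mode graph:

```python
import math

C = 299792458.0   # speed of light, m/s
f = 2450e6        # design frequency, Hz

wl = C / f                          # wavelength, ~0.122 m
circumference = 1.2 * wl            # axial mode circumference, ~0.146 m
diameter = circumference / math.pi  # helix diameter, ~0.0465 m
pitch = 0.4 * wl                    # pitch per turn, ~0.049 m
turns = 1
filament = math.hypot(circumference, pitch) * turns  # wire length, ~0.154 m

print(f"wavelength {wl*1000:.1f} mm")
print(f"diameter   {diameter*1000:.1f} mm")
print(f"pitch      {pitch*1000:.1f} mm")
print(f"filament   {filament*1000:.1f} mm")
```

Cut the filaments a hair long - it is much easier to file metal off than to put it back.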

NEC2 Model

Here is the NEC2 model of the bifilar WiFi ISM band helical feed:

CM Bifilar 2.450 GHz ISM Band Helical Antenna with Parasitic Element
CM Copyright reserved, Herman Oosthuysen, 2018, GPL v2
CM
CM 2450 MHz helical array
CM c=299792458 m/s
CM Wave length = 2.998x10^8 / 2450 MHz = 0.122 m
CM WL/2 = 0.061 m
CM WL/4 = 0.030 m
CM Axial Mode:
CM Circumference = 1.2 x 0.122 = 0.146 m
CM Pitch = 0.4 x 0.122 = 0.049 m
CM Turns = 1
CM
CE
# Helix driven element
# Tag, Segments, Spacing, Length, Rx1, Ry1, Rx2, Ry2, Radius
GH     1     100   4.90E-02  4.90E-02   2.3E-02   2.3E-02   2.3E-02   2.3E-02   2.20E-03
# Parasitic helix element, 180 degrees rotated
GM     1     1     0.00E+00  0.00E+00   1.80E+02  0.00E+00  0.00E+00  0.00E+00  0.00E+00
# Ground plane
SM    20    20 -5.00E-02 -5.00E-02 -1.00E-03  5.00E-02 -5.00E-02 -1.00E-03  0.00E+00
SC     0     0  5.00E-02  5.00E-02 -1.00E-03  0.00E+00  0.00E+00  0.00E+00  0.00E+00
GE
# THIN WIRE KERNEL: NORMAL: 0; EXTENDED: -1
EK -1
# EXCITATION: I1 VOLTAGE: 0, I2 TAG: 1, I3 SEGMENT: 1, I4 ADMITTANCE: 0, F1: 1 VOLT, F2: 0 IMAGINARY
EX  0   1   1   0   1   0
# FREQUENCY: IFRQ LINEAR: 0, NFRQ STEPS: 41, BLANK, BLANK, FMHZ: 2350 MHz, DELFRK: 5 MHz
FR  0   41   0   0   2.35E+03   5
RP  0   91  120 1000     0.000     0.000     2.000     3.000 5.000E+03
EN


Radiation Pattern

The helix made from 2.2 mm semi-rigid coaxial cable has a good front to back ratio of about 6 dB and a nearly flat frequency response over the 2.4 GHz WiFi band.

Execute the simulation with xnec2c:
$ xnec2c -i filename.nec


Radiation Pattern

The parasitic element does its thing remarkably well, resulting in an impedance of 48 Ohm (inductive), which is a near perfect match to a 50 ohm coaxial line.  The imaginary impedance doesn't matter much - it just causes a phase shift.

Smith Chart


The actual antenna was measured with a little KC901V 2-port network analyzer and it looks pretty good. No additional impedance matching is required for a 50 Ohm coaxial cable.

Note that the frequency is very high and the wavelength is very short.  Therefore, if you change anything by as little as half a millimeter, the results could be completely different.

If you want to use a different size semi rigid co-ax from your cable junk box to make the helix, then you will need to spend a couple hours tweaking the helix parameters (width and spacing), to get the impedance back to about 50 Ohm again.

Other Antenna Designs With Circular Polarization

While the helix naturally does circular polarization, there are other ways to achieve the same:
  • A corner reflector with a skew mounted dipole.
  • A twisted Yagi antenna with the reflector, driven element and director at 45 degree angles w.r.t. each other - a discrete version of a helix.
  • A crossed Yagi antenna, with a 1/4 wave delay line between the driven elements, a.k.a. a turnstile antenna.
  • A patch antenna with two truncated corners.
  • A quad patch array with each patch rotated by 90 degrees.
  • A crossed monopole antenna, or an F antenna.
There are more ways to distort the EM wave and cause elliptical or circular polarization, just use some imagination.

Radio Transceiver

You can use this antenna with a HackRF One software defined radio, from Great Scott Gadgets:
https://greatscottgadgets.com/hackrf/

You can buy one at my favourite toy store, Sparkfun Electronics: https://www.sparkfun.com/categories/tags/hackrf

GQRX Software Defined Radio

The HackRF One radio works with GNU Radio and GQRX: http://gqrx.dk/  It is a very nice half-duplex radio that can tune up to 6 GHz, which makes it good for VHF to microwave experiments.

Junk Bounce Communications (TM)

To bounce pings off space junk, you could point your BUD straight up like a bird bath and use two radios with a directional coupler (this sure is not a cheap hobby!).  Note that the HackRF is half duplex - it cannot receive itself - and for transmit you obviously need an RF amplifier.  Once you get meaningful results, you could find a ham partner some distance away to exchange pings, chirps, or short messages.  If you are very dedicated, you can track and bounce messages off the space station for several minutes on a pass.

If you send a continuous stream of pings up into the sky, then you should receive an echo from a UFO every few minutes.  Only an engineer will find this exciting, while the missus will pretend that it is a great achievement (She knows you are crazy and will be real happy that you are staying in your radio shack and out of her hair).

To bounce off the moon, one would need a whole backyard full of BUDs in a multi-antenna array.  I don't think the missus will appreciate a Square 0.01 km Array very much, so I'll leave this idea to someone else...


La voila!

Herman



Wednesday, July 25, 2018

Patch Antenna Design with NEC2

The older free Numerical Electromagnetic Code version 2 (NEC2) from Lawrence Livermore Lab assumes an air dielectric.  This makes it hard (but not impossible) for a radio amateur to experiment with Printed Circuit Board Patch antennas and micro strip lines.


Air Spaced Patch Antenna Radiation Pattern


You could use the free ASAP simulation program, which handles thin dielectrics; you could shell out a few hundred dollars for a copy of NEC4; you could buy GEMACS if you live in the USA; or you could add distributed capacitors to a NEC2 model with LD cards (hook up one capacitor in the middle of each element), but that is far too much money/trouble for most.

More information on driving an array antenna can be found here: https://www.aeronetworks.ca/2019/03/driving-quad-patch-array-antenna.html

Air Dielectric Patch 

The obvious lazy solution is to accept the limitation and make an air dielectric patch antenna.

An advantage of using an air dielectric is that the antenna will be more efficient: it is physically bigger, and since the air isn't heated up by RF, there is no dielectric loss.

An air spaced patch can be made of tin plate from a coffee can with a pair of tin snips.  A coffee can doesn't cost much and it comes with free contents which can be taken orally or intravenously...

Once you are done experimenting, you can get high quality copper platelets from an EMI/RFI can manufacturer such as Tech Etch, ACE UK and others.


Wire Grid With Feed Point

This grid is not square.  The length is slightly shorter than the width, to avoid getting weird standing waves which will disturb the pattern.   Making these things is part design and part art.  You need to run lots of experiments to get a feel for it.  It may take a few days.  You need lots of patience.  If the pattern looks like a weird undersea creature, then it means that the design is unstable and it will not work in practice.

Find the range where the radiation pattern looks pleasing with a well defined rounded main lobe and the gain is reasonable and go for the middle, so that you get a design that is not ridiculously sensitive and can be built successfully.  It doesn't help to design an antenna with super high gain and then when you build it, you only get a small fraction thereof, due to parasitic and tolerance effects - rather design something that is repeatable and not easily disturbed.

If you cannot find suitable tin plate, then you could try 1/32nd inch FR4 (0.8 mm) and then keep the gap to the ground plane relatively big, so that the effect of the little bit of dielectric is minimized, but if you don't build exactly what you modeled, then making a numerical model is not very useful...

Ground Plane

To model a patch antenna, you need to design two elements, the patch and the ground plane.  The ground plane needs to be a bit bigger than the patch.  The distance between the two is extremely critical and it is important that you can easily vary the gap to find the sweet spot where you get the desired antenna pattern.  With a patch antenna, varying the height by only one millimeter, has a large effect on the pattern.

The NEC ground plane GN card is always at the origin Z = 0.  If you model the patch as a grid of wires, then changing the height above this ground is a very laborious job.  A grid with 21 x 21 wires has 84 values of Z.  You need a programmer's editor with a macro feature to change all that, without going nuts in the process.  It would be much easier if the antenna grid could be kept still and the ground plane shifted up or down instead.

It turns out that the Surface Patch feature of NEC can be successfully misused as a ground plane.  Make a ground plane with GN 1 and make a surface patch and compare the radiation patterns - you'll see they are the same.

Normally, something modeled with SP cards must be a fully enclosed volume, but it works perfectly as a two dimensional ground plane if the antenna is always above it, with nothing below.  The height of a multi patch surface 'ground plane' can be altered by changing only three values of Z, which is rather easier than the 84 Z heights in the wire grid.

Wire Grid

You could model the patch using SP cards, but then you need to define all 6 sides of the 3D plate, which is just as much hassle as making a wire grid with GW cards.  You could also make a wire grid by starting with one little two segment wire and careful use of GM cards, to rotate it into a little cross and replicate it to the side and down, but then it becomes hard to figure out where to put the feed point, since the tag numbers of the wires become unknown after using GM cards.

In the end, I modelled the example patch grid using GW cards, since it is rather mindless to do and then defined the feed point on wire #16.  If you used the replication method, then define a tiny 1 segment, 1 mm long vertical wire, with the (x,y) co-ordinates calculated to be exactly on a grid wire, without having to know what the tag number of that wire is.  For this method, I assign a high number (1000) to the tiny feed wire tag, so I can tie a transmission line TL card to it.

You will see the logic in this approach once you try to make a multi patch array by rotating and translating the first patch with multiple GM cards and then sit and stare at the screen and wonder where the heck to put the feeds.

Parallel Plate Capacitor

A patch antenna is a parallel plate capacitor.

 Smith Chart - Capacitive Load

Whereas a Helical Antenna is inductive, a Patch is capacitive and you got to live with it.  The impedance on the edge is very high and can be made more reasonable by offsetting the feed point about 30% from the edge, but whatever you do, it will be capacitive, on the edge of the Smith chart.  For best results, you may need to add an antenna matching circuit to a patch array antenna.

Design Formulas

Designing an air dielectric patch antenna turned out to be very simple.  Whereas a PCB patch requires a complex formula to describe it, due to the edge effects that are through the air, vs the main field that is through the dielectric - with an air spaced patch, everything is through air and all complications disappear in a puff of magic.

Where c is the speed of light and f is the design frequency:
  • The wavelength WL = c / f
  • The width of the patch W = WL / 2
  • The length of the patch L = 0.49 x WL (slightly shorter than the width)
  • The feed point F = 0.3 x L
The height above ground is best determined experimentally and will be a few millimeters.
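As a sanity check, here are those rules of thumb in Python, evaluated at 942 MHz (the 915 MHz ISM mid-band plus 3%, worked out later in this post).  Taking L as 0.49 x WL reproduces the 160 x 156 mm grid of the example NEC2 model:

```python
C = 299792458.0  # speed of light, m/s

def patch(f_hz):
    """Air-dielectric patch dimensions from the rules of thumb above."""
    wl = C / f_hz
    W = wl / 2      # patch width, half a wavelength
    L = 0.49 * wl   # patch length, slightly shorter than the width
    F = 0.3 * L     # feed point offset from the edge
    return W, L, F

W, L, F = patch(942e6)
print(f"W = {W*1000:.0f} mm, L = {L*1000:.0f} mm, F = {F*1000:.0f} mm from the edge")
```

The height above ground is then found by trial simulation as described below the formulas.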

If you start with say a 10 mm gap and gradually reduce the height, then after a while you will find a spot where the calculations explode and the radiation plot becomes a big round ball (cocoanec), or just a black screen (xnec2c).  This is the point where the antenna resonates.  For this patch, it happens at 5 mm height.  The optimal pattern is achieved when the gap is one or two mm wider than that, at 6 or 7 mm - simple.

The design frequency should be 3% higher than the desired frequency.  

When you build an antenna, there are always other things in close proximity that loads it: Metal parts, glue, spacers, cables, etc.  All these things will make the antenna operate at a slightly lower frequency than what it was designed for.  Therefore design for a slightly higher frequency and then it will be spot on.  The Ham Radio rule of thumb is to design for the top end of a radio band, but that may not be high enough for a narrow band like this.

In this case, the ISM band is 902 to 928 MHz, so the mid point is 915 MHz and 1.03 x 915 = 942 MHz, so that is what I would design to.

PCB Dielectrics Modeled With NEC2

If you really want to make a Printed Circuit Board (PCB) antenna, then you need to use a special type of Teflon (PTFE) PCB that has a controlled dielectric value.  Ordinary fibre glass and epoxy resin FR4 has a relative permittivity that varies wildly from 4.2 to 4.7, which is too much for consistent, reproducible results.  Read this for details: https://www.arrowpcb.com/laminate-material-types.php

You need to find a PCB house, look at the available materials and then design the antenna accordingly.  For microwave RF applications, pure PTFE on a fiberglass substrate, with a relative permittivity εr of 2.1 and a loss tangent of 0.0009, is the best available in wide commercial use.  Calculate the capacitance of a little elemental square with the simple thin parallel plate formula:
C = ε0 x εr x A / d

You can simulate the dielectric in NEC2 by attaching a load (LD) card with a small capacitor as calculated above to the middle of each element - calculating all the co-ordinates will keep you busy for a while!   The NEC2 simulation result should be quite accurate when you add all these little parasitic capacitors.  The easier way to handle it is to create one little element and then use GM cards to rotate and replicate the elements in two dimensions to make a patch, without having to calculate hundreds of x,y co-ordinates, which would drive any sane person up a wall.
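As a sketch of the LD card arithmetic, here is the parallel plate formula for one elemental square; the 5 mm grid cell and 0.8 mm board thickness are hypothetical example values, not taken from the model above:

```python
EPS0 = 8.854187817e-12  # vacuum permittivity, F/m

def element_capacitance(side_m, thickness_m, eps_r):
    """Thin parallel-plate capacitance of one elemental grid square."""
    area = side_m ** 2
    return EPS0 * eps_r * area / thickness_m

# Hypothetical example: 5 mm grid cell on a 0.8 mm PTFE board, eps_r = 2.1
c = element_capacitance(5e-3, 0.8e-3, 2.1)
print(f"{c*1e12:.2f} pF")  # roughly half a picofarad per LD card
```

One such capacitor goes in the middle segment of each elemental wire.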

Signal speed is inversely proportional to the square root of the dielectric constant. A low dielectric constant will result in a high signal propagation speed and a high dielectric constant will result in a much slower signal propagation speed.  This has a very large effect on the dimensions of the antenna.
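To see how big the effect is, a one-liner using the εr = 2.1 PTFE value from above (the real shrink factor is a bit less, since the fringing fields at the patch edges are in air, which is exactly why PCB patch formulas get complicated):

```python
import math

C = 299792458.0  # speed of light, m/s
eps_r = 2.1      # pure PTFE laminate, example value from above

v = C / math.sqrt(eps_r)      # propagation speed in the dielectric
scale = 1 / math.sqrt(eps_r)  # how much the patch dimensions shrink

print(f"v = {v/1e8:.2f}e8 m/s, dimensions scale by {scale:.2f}")
```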

The problem is that you can only vary the patch to ground spacing in a few discrete steps, since it is determined by the thickness of the chosen PCB, which is typically 0.2, 0.8, 1.6 or 3.2 mm.  You can vary the length and width in the simulation using a geometry scale GS card, but scaling will also change the spacing, so then you have to modify the position of the ground plane to get the model back to the fixed thickness of the PCB.  Nothing is ever easy with this clunky old program, but it is free, so that is fair enough.

Example Patch Antenna

Here is a set of NEC2 cards for an air dielectric 33 cm Ham band or 900 MHz ISM band patch antenna made from a tin or copper rectangle, a few mm above a somewhat larger ground plane:

CM Surface Patch Antenna
CM Copyright reserved, GPL v2, Herman Oosthuysen, July 2018
CM
CM 940 MHz (915 + 3%)
CM H=7 mm, W=160 (80), L=156 (78)
CE
#
# Active Element: 21x21 Wires in a Rectangle
# X axis
# GW Tag NS X1 Y1 Z1 X2 Y2 Z2 Radius
GW 1  21 -8.00E-02 -7.80E-02 0.00E+00 +8.00E-02 -7.80E-02 0.00E+00 1.00E-03
GW 2  21 -8.00E-02 -7.02E-02 0.00E+00 +8.00E-02 -7.02E-02 0.00E+00 1.00E-03
GW 3  21 -8.00E-02 -6.24E-02 0.00E+00 +8.00E-02 -6.24E-02 0.00E+00 1.00E-03
GW 4  21 -8.00E-02 -5.46E-02 0.00E+00 +8.00E-02 -5.46E-02 0.00E+00 1.00E-03
GW 5  21 -8.00E-02 -4.68E-02 0.00E+00 +8.00E-02 -4.68E-02 0.00E+00 1.00E-03
GW 6  21 -8.00E-02 -3.90E-02 0.00E+00 +8.00E-02 -3.90E-02 0.00E+00 1.00E-03
GW 7  21 -8.00E-02 -3.12E-02 0.00E+00 +8.00E-02 -3.12E-02 0.00E+00 1.00E-03
GW 8  21 -8.00E-02 -2.34E-02 0.00E+00 +8.00E-02 -2.34E-02 0.00E+00 1.00E-03
GW 9  21 -8.00E-02 -1.56E-02 0.00E+00 +8.00E-02 -1.56E-02 0.00E+00 1.00E-03
GW 10 21 -8.00E-02 -7.80E-03 0.00E+00 +8.00E-02 -7.80E-03 0.00E+00 1.00E-03
GW 11 21 -8.00E-02 +0.00E+00 0.00E+00 +8.00E-02 +0.00E+00 0.00E+00 1.00E-03
GW 12 21 -8.00E-02 +7.80E-03 0.00E+00 +8.00E-02 +7.80E-03 0.00E+00 1.00E-03
GW 13 21 -8.00E-02 +1.56E-02 0.00E+00 +8.00E-02 +1.56E-02 0.00E+00 1.00E-03
GW 14 21 -8.00E-02 +2.34E-02 0.00E+00 +8.00E-02 +2.34E-02 0.00E+00 1.00E-03
GW 15 21 -8.00E-02 +3.12E-02 0.00E+00 +8.00E-02 +3.12E-02 0.00E+00 1.00E-03
GW 16 21 -8.00E-02 +3.90E-02 0.00E+00 +8.00E-02 +3.90E-02 0.00E+00 1.00E-03
GW 17 21 -8.00E-02 +4.68E-02 0.00E+00 +8.00E-02 +4.68E-02 0.00E+00 1.00E-03
GW 18 21 -8.00E-02 +5.46E-02 0.00E+00 +8.00E-02 +5.46E-02 0.00E+00 1.00E-03
GW 19 21 -8.00E-02 +6.24E-02 0.00E+00 +8.00E-02 +6.24E-02 0.00E+00 1.00E-03
GW 20 21 -8.00E-02 +7.02E-02 0.00E+00 +8.00E-02 +7.02E-02 0.00E+00 1.00E-03
GW 21 21 -8.00E-02 +7.80E-02 0.00E+00 +8.00E-02 +7.80E-02 0.00E+00 1.00E-03
#
# Y axis
# GW Tag NS X1 Y1 Z1 X2 Y2 Z2 Radius
GW 22 21 -8.00E-02 -7.80E-02 0.00E+00 -8.00E-02 +7.80E-02 0.00E+00 1.00E-03
GW 23 21 -7.20E-02 -7.80E-02 0.00E+00 -7.20E-02 +7.80E-02 0.00E+00 1.00E-03
GW 24 21 -6.40E-02 -7.80E-02 0.00E+00 -6.40E-02 +7.80E-02 0.00E+00 1.00E-03
GW 25 21 -5.60E-02 -7.80E-02 0.00E+00 -5.60E-02 +7.80E-02 0.00E+00 1.00E-03
GW 26 21 -4.80E-02 -7.80E-02 0.00E+00 -4.80E-02 +7.80E-02 0.00E+00 1.00E-03
GW 27 21 -4.00E-02 -7.80E-02 0.00E+00 -4.00E-02 +7.80E-02 0.00E+00 1.00E-03
GW 28 21 -3.20E-02 -7.80E-02 0.00E+00 -3.20E-02 +7.80E-02 0.00E+00 1.00E-03
GW 29 21 -2.40E-02 -7.80E-02 0.00E+00 -2.40E-02 +7.80E-02 0.00E+00 1.00E-03
GW 30 21 -1.60E-02 -7.80E-02 0.00E+00 -1.60E-02 +7.80E-02 0.00E+00 1.00E-03
GW 31 21 -8.00E-03 -7.80E-02 0.00E+00 -8.00E-03 +7.80E-02 0.00E+00 1.00E-03
GW 32 21 +0.00E+00 -7.80E-02 0.00E+00 +0.00E+00 +7.80E-02 0.00E+00 1.00E-03
GW 33 21 +8.00E-03 -7.80E-02 0.00E+00 +8.00E-03 +7.80E-02 0.00E+00 1.00E-03
GW 34 21 +1.60E-02 -7.80E-02 0.00E+00 +1.60E-02 +7.80E-02 0.00E+00 1.00E-03
GW 35 21 +2.40E-02 -7.80E-02 0.00E+00 +2.40E-02 +7.80E-02 0.00E+00 1.00E-03
GW 36 21 +3.20E-02 -7.80E-02 0.00E+00 +3.20E-02 +7.80E-02 0.00E+00 1.00E-03
GW 37 21 +4.00E-02 -7.80E-02 0.00E+00 +4.00E-02 +7.80E-02 0.00E+00 1.00E-03
GW 38 21 +4.80E-02 -7.80E-02 0.00E+00 +4.80E-02 +7.80E-02 0.00E+00 1.00E-03
GW 39 21 +5.60E-02 -7.80E-02 0.00E+00 +5.60E-02 +7.80E-02 0.00E+00 1.00E-03
GW 40 21 +6.40E-02 -7.80E-02 0.00E+00 +6.40E-02 +7.80E-02 0.00E+00 1.00E-03
GW 41 21 +7.20E-02 -7.80E-02 0.00E+00 +7.20E-02 +7.80E-02 0.00E+00 1.00E-03
GW 42 21 +8.00E-02 -7.80E-02 0.00E+00 +8.00E-02 +7.80E-02 0.00E+00 1.00E-03
#
# Ground plane
# H = 5 mm, Feed = 16
# Frequency 940.000 MHz
# Resonance; the calculation explodes
#
# H = 7 mm, Feed = 16
# Frequency 940.000 MHz
# Feedpoint(1) - Z: (0.116 + i 133.600)    I: (0.0000 - i 0.0075)     VSWR(Zo=50 Ω): 99.0:1
# Antenna is in free space.
# Directivity:  7.68 dB
# Max gain: 12.54 dBi (azimuth 270 deg., elevation 60 deg.)
#
# SM NX NY X1 Y1 Z1 X2 Y2 Z2
# SC  0  0 X3 Y3 Z3
SM 25 25 -1.00E-01 -1.00E-01 -7.00E-03  +1.00E-01 -1.00E-01 -7.00E-03
SC  0  0 +1.00E-01 +1.00E-01 -7.00E-03
#
# Frequency 850.000 MHz - 3 dB down
# Feedpoint(1) - Z: (0.176 + i 129.320)    I: (0.0000 - i 0.0077)     VSWR(Zo=50 Ω): 99.0:1
# Antenna is in free space.
# Directivity:  7.42 dB
# Max gain: 9.54 dBi (azimuth 270 deg., elevation 60 deg.)
#
GE
#
# Frequency 940 MHz
FR     0     1     0      0   9.40E+02
#
# Excitation with voltage source
# EX 0 Tag Segment 0 1Volt
EX     0     16     11      0         1
#
# Plot 360 degrees
RP     0    90    90   1000         0         0         4         4      0
EN


Now you can go and get a coffee can and tin snips and have fun.  The trick is to space the tin plate with paper or plastic washers and glue it to the ground plane with two or four hot glue blobs on the corners, then after hardening, remove the spacers.

For more information on what exactly to do with the contents of the coffee can, you can read this https://2b-alert-web.bhsai.org/2b-alert-web/login.xhtml

Once you have the first rectangular patch working in simulation, you can explore cutting the corners, or making slots in it, to get circular polarization for Satcom or mobile use.  You could also try drilling holes in two opposing corners and using those for little nylon bolts.  That could provide robust mounting and circular polarization, in one swell foop.

Once you have built the widget, you need to measure it to see how close you got to your model and how you should tweak things.  It is very seldom that the first try is good enough.  The AliExpress web site lists many different VNA models from $300 to $3000, which is orders of magnitude less than a couple of decades ago.  I got a Measall KC901V and I am very happy with it.

Patch Antenna Calculators

There are various patch antenna calculators on the wild wild web, for example:
http://www.emtalk.com/mpacalc.php


A calculator can quickly create a starting point for experiments.
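The textbook first-cut width formula W = c/(2f) * sqrt(2/(εr + 1)) gives much the same starting point.  Evaluated for the 940 MHz air dielectric patch used in this article:

```shell
# First-cut patch width W = c/(2f) * sqrt(2/(er + 1)) at 940 MHz,
# air dielectric (er = 1):
awk 'BEGIN {
  c = 299792458; f = 940e6; er = 1.0
  W = c / (2 * f) * sqrt(2 / (er + 1))
  printf "W = %.1f mm\n", W * 1000
}'
```

That yields 159.5 mm, which agrees nicely with the 160 mm wide plate in the NEC2 deck above.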

To hook the antennas together, you can use microstriplines, which can be calculated with this one: https://www.pasternack.com/t-calculator-microstrip.aspx

However, at a height of 7 mm, the stripline tracks would need to be impractically wide.


If you put four patches in parallel, then the impedance becomes 90/4 = 22.5 Ohm, which is not a good match to 50 Ohm co-ax.  For a good match, you can instead transform each patch up with a taper (at a height of 7 mm) from 90 Ohm (14 mm wide) to 200 Ohm (2 mm wide), so that when you combine the four, you get 200/4 = 50 Ohm.
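The arithmetic is trivial, but worth sanity checking:

```shell
# N equal loads in parallel: Zpar = Z / N.
awk 'BEGIN {
  printf "4 x 90 Ohm patches in parallel:  %.1f Ohm\n",  90 / 4
  printf "4 x 200 Ohm tapers in parallel:  %.1f Ohm\n", 200 / 4
}'
```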

Where to put the taper?  A wide 14 mm track is a bit impractical, while a thin 2 mm track is a bit lossy, so how now brown cow?

One solution is to use two impedance tapers:
The impedance of the start and end of the transmission line is important, but what exactly it is in the middle, doesn't matter.  Therefore, at the patch, taper from 14 mm to 5 mm, run the track to the connector in the middle and then taper from 5 mm to 2 mm.  It is always a give and take - trade off one thing for another and try again!

Note that it is important that you run the tracks in a kind of swastika with the 50 Ohm connector in the middle of the panel, such that each branch is progressively 1/4 wavelength longer, to provide the required 90, 180, 270 degree phasing.  However, don't make 90 degree corners - make them rounded, or 45 degree sections - else the corners will radiate and cause reflections.
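The branch length increments are quarter wavelengths at the design frequency.  The figures below are free-space values; for a real microstrip you would shrink them by the line's velocity factor, which depends on the substrate and is not included here:

```shell
# Free-space quarter wavelength at 940 MHz and the progressive branch
# lengths for the 0/90/180/270 degree feeds:
awk 'BEGIN {
  c = 299792458; f = 940e6
  q = c / f / 4 * 1000            # quarter wavelength in mm
  for (n = 0; n <= 3; n++)
    printf "branch %d: +%.1f mm (%d deg)\n", n + 1, n * q, n * 90
}'
```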

Note that 93 Ohm coaxial cable is available from Pasternack, so you could use it instead of strip lines: https://www.pasternack.com/93-ohm-coax-rf-cables-category.aspx

However, if you decide to use coax, then you can just as well probe the patch with a network analyzer till you find the exact spot where it is 50 Ohm and run garden variety RG316U coax - choices, choices...

Eventually, just to prove it, I got busy with tin snips on thin 0.8 mm FR4 since it is easy to work with and made a bunch of patches and measured them all - too big, too small, too big, too small - aaargh!

The little bit of FR4 epoxy has a significant effect on the resonant frequency and I had to make the patch about 20 mm smaller than the original design.  This exercise showed that to make a properly tuned patch you must have a reasonably decent VNA and a lot of patience.

On the right is an example that is almost the right size - about 1 mm too small - with a usable bandwidth from 908 to 938 MHz, which is 8MHz too high.  The 50 Ohm co-ax feed is soldered in from the bottom.   I made a 6 mm hole in the reflector, soldered the braid there and a 2 mm hole in the patch for the centre wire.  For these tests, the patch is held down with wads of 'chewing gum' (Faber-Castell Tack-It).
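Since the resonant frequency scales roughly inversely with patch length, the required correction can be estimated to first order as dL = L * df/f.  The figures below are inferred from the text above (a trimmed patch of about 136 mm, a measured band centre of 923 MHz against a 915 MHz target) and are illustrative only:

```shell
# First order tuning estimate: dL = L * df / f.
# Assumed: L = 136 mm, band centre 923 MHz, target 915 MHz.
awk 'BEGIN {
  L = 136; f = 923; ft = 915
  printf "lengthen the patch by ~%.1f mm\n", L * (f - ft) / f
}'
```

That lands around 1.2 mm, consistent with the "about 1 mm too small" observation.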
  

Circular Polarized Patch Array

With careful use of GM cards, one can replicate and rotate the patch and create an array of 4, 9 or 16 patches and then tie them together in series with 1/4 wave transmission line TL cards (the skew faint lines between the feed points on the below picture).  One can make the EM field rotate right or left depending on whether one feeds it at patch 1 or at patch 4.

One can daisy chain patches like this in a model, but for the real thing, I would hook them in parallel with a star of striplines or coax to ensure that the transmit power level is the same on each patch.

Coax delay lines are good for a one off test, while microstriplines are good for replication.  In the end, the whole assembly will be only as accurate as your test tools and your patience allow.


A 24 dBi Quad Patch Array Simulation

Obtaining 24 dBi from only four patches is very good - suspiciously good.  Typical commercial quad patch antennas yield 17 to 21 dBi, which is more likely what you will get when you actually build it.

A simulation helps very much to figure out what should work and what won't, but to measure is to know!

A problem with using 9 (3 x 3) or 16 (4 x 4) patches, is the law of diminishing returns: Losses and radiation from the transmission lines will become significant and will distort the patterns from the patches.  Therefore, the NEC model may look great, but the practical results may disappoint.

A large patch array with nine or sixteen patches could create a very high gain assembly - a pencil beam - the complete design of which would require an export license, due to the Wassenaar Arrangement on dual use items.  Therefore I'd rather just stop here with this article and not provide the complete design, before a black helicopter starts to follow me around.
;)

La Voila!

Herman

Friday, June 29, 2018

Mac, Linux or BSD

The eternal question:
Which is better - EMACS or Vi?
OK, this post is actually about the other eternal question!
As I use Linux, Mac, Open and Free BSD (Yes, and that other ball of wax too...), I think I can answer objectively:
Both OpenBSD and FreeBSD are reasonably easy to download, install and run on pretty much anything.  At least, I have not found a server, desktop or laptop computer that they would not run on.  I even ran OpenBSD on a SPARCstation - remember those?

OpenBSD

Theo de Raadt has a 'cut the nonsense' mentality, so OpenBSD is simpler, with a smaller repository of about 30,000 packages.  However, with a little effort, you can install FreeBSD software on OpenBSD to get the rest.  After a few days of use, you will know how.

The best OpenBSD book is Absolute OpenBSD: UNIX for the Practical Paranoid.

In general, OpenBSD feels a lot like Slackware Linux: Simple and very fast.

FreeBSD

FreeBSD can also, with some effort, run Linux programs and you can use a virtualizer to run other systems, so you are never locked into one thing.

FreeBSD has a gigantic repository of about 50,000 programs and it has very good documentation in the online FreeBSD Handbook.

MacOS

Compared to OpenBSD, Dragonfly and Slackware, some distributions look fancy and are very slow - there are many reasons why - see below.  MacOS obviously falls into the fancy and slow category. So if you want a Mac replacement then you first need to decide whether you want a fancy or a fast system.
My preference is to install a reasonably fast system on the host, then use a virtualizer for experiments and work and I frequently run multiple systems at the same time.  All the BSDs are good for that, be it Open, Free or Mac.
My home use system is a Macbook Pro running the latest MacOS with the Macports and Homebrew software repositories.  I even have the XFCE desktop installed, so when I get annoyed with the overbearing Mac GUI, I run XFCE, to get a weirdly satisfying Linux-Mac hybrid.


Linux

Linux is the stepchild of UNIX that took over the world.  All of the Top 500 supercomputers now run Linux.  My work engineering system is an ancient Dell T420 running the latest Fedora Linux on the host.  All my machines have Virtualbox and a zoo of virtual machines for the rest.

Note that the Mandatory Access Control security systems on Red Hat and Debian distributions slow them down a lot (on the order of 50%).  If you need a fast and responsive system and can afford to trade some security for speed, then turn SELinux or AppArmor off.

Latency

For the control and remote sensing systems of robots, aircraft and rockets, the worst case OS latency matters very much.  For low latency, nothing beats Linux, since the whole kernel and all spinlocks are pre-emptible.

On average, all OS's have the same interrupt service latency - a few tens of nanoseconds.  However, every once in a while, the latency will be much worse.  In the case of Linux, the worst case will be below 1 ms, but for Win10, it can be 20 ms and for Win7, 800 ms.  The trouble in robotics and remote sensing is that you need to design for the worst case.
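To put those worst-case numbers in perspective, here is a rough count of how many frames of a 25 fps video stream (40 ms frame period, an assumed rate) would be lost in each stall:

```shell
# Frames of a 25 fps stream lost in each worst-case stall quoted above:
awk 'BEGIN {
  printf "Linux,  1 ms stall: %.2f frames\n",   1 / 40
  printf "Win10, 20 ms stall: %.1f frames\n",  20 / 40
  printf "Win7, 800 ms stall: %.0f frames\n", 800 / 40
}'
```

A Win7 worst-case stall swallows twenty whole frames, which is why the dotNet player behaviour described below is fatal for remote sensing.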

I have observed a dotNet video player on Windows 7, after a couple days of uptime, stop dead for two seconds, every 8 seconds - obviously not good for a remote sensing application.  Windows 10 latency is much improved, though still a little worse than Mac OS, which has 2 orders of magnitude worse latency than Linux - see below for why this is.

See this very good real-time performance analysis: https://ennerf.github.io/2016/09/20/A-Practical-Look-at-Latency-in-Robotics-The-Importance-of-Metrics-and-Operating-Systems.html

(Intentional performance degradation was worst in Windows Vista and has been dialled back a bit since).

What is the Performance Problem with Windows and Mac OS?

It is not that MS and Apple don't know how to make a real-time OS.  They are not allowed to do it.

The reason why, is in the US export regulations.  Windows and Mac OS have a Mass Market Exemption (EAR 5D992.c).  This is necessary because MS, Apple and the US State Department simply cannot process billions of export licenses.

The Mass Market exemption is described in the EAR (2-21.a., 1-5.A.4.a, 1-5.A.1.f, Cat 2, Cat 4, Cat 6, Cat 7) and MTCR (Group 6, 6-1.A., 6-19.A.1., 6-19.A.2.) regulations, and is defined in what a Mass Market OS is NOT allowed to do:
  • It is not allowed to do real-time processing of audio (Sonar).
  • It is not allowed to provide advanced networking and deep packet inspection (Network inspection).
  • It is not allowed to do precision tracking (of a missile or UAV).
  • It is not allowed to provide C4I video and meta data processing (Information from a missile, MALE or HALE UAV).
Items not specifically controlled can also be controlled under section 744.3 of the EAR ("catch-all", or EPCI, controls). Items require an export license if they will be used in the design, development, production, or use of:
  • Rocket systems (including ballistic missile systems, space launch vehicles, and sounding rockets) or unmanned aerial vehicles (including cruise missile systems, target drones, and reconnaissance drones) capable of a range of at least 300 km for use in or by a country listed in Country Group D:4 (see  Supplement No. 1 to Part 738 of the EAR).
  • Any rocket system or unmanned aerial vehicles in a D:4 country where system characteristics or use are unknown.
  • Any rocket systems or unmanned aerial vehicles for the delivery of chemical, biological, or nuclear weapons to anywhere in the world, except by governmental programs for nuclear weapons delivery of the Nuclear Non-Proliferation Treaty Nuclear Weapons States that are also members of NATO.
So, in the extreme, if you use US chewing gum to glue together parts of a big missile, then the gum may need an export license.

Windows and Mac OS therefore contain special code to degrade the performance slightly when you try to process multiple video and audio streams - see the above paragraphs on latency.  With a single stream it works, so it is good for home use, or for a musical band, but it is not great for high performance systems.

If you would use Windows or Mac OS for one of the above disallowed functions, then MS and Apple are not allowed to support you and in the extreme, they may even be forced to completely boycott your country (North Korea, Iran...).

Free Open Source Software (BSD, Linux...) is covered under ITAR (120.10(b)) and EAR (734.3(b)(3)(iii)) and is allowed to do the above without requiring export licenses.  (The horse bolted, so there is no point in trying to close the gate now).


So, which OS is best? 
It depends on what exactly you need to do with your system...
:)
Herman

Saturday, June 23, 2018

Compile The Latest Gstreamer From GIT

Compile The Latest gstreamer 1.15 on Ubuntu Linux 18.04 LTS

While working on a way to embed Key Length Value (KLV) metadata in a MPEG-2 TS video stream, I found that ffmpeg can copy and extract KLV, but cannot insert it.  There were some indications that the latest gstreamer has something under development, so I had to figure out how to compile gstreamer from the GIT repository, to get the latest mpegtsmux features.

The cryptic official gstreamer compile guide is here:
https://gstreamer.freedesktop.org/documentation/frequently-asked-questions/git.html#

As usual, the best way to do development work is on a virtual machine, so that you don't mess up your host.  I use Oracle Virtualbox on a Macbook Pro.  I downloaded Ubuntu Linux 18.04 LTS Server, made a 64 bit Virtualbox machine and installed the XFCE desktop, to get a light weight system that runs smoothly in a virtual environment.

The problem with the cryptic official guide is that it probably works on the machine of a developer that has been doing this for a few years, but on a fresh virtual machine, a whole zoo of dependencies are missing and will be discovered the hard way.

Install The GCC Compiler

If you haven't done so already, install a minimal desktop and the development tools:
$ sudo apt update 
$ sudo apt install xfce4
$ sudo apt install build-essential

Then log out and in again, to get your beautifully simple XFCE desktop with a minimum of toppings.

Prepare a Work Directory

Make a directory to work in:
$ cd
$ mkdir gstreamer
$ cd gstreamer

Dependencies

Set up all the dependencies that the official guide doesn't tell you about.   Some of these may pull in additional dependencies and others may not be strictly necessary, but it got me going:
$ sudo apt install gtk-doc-tools liborc-0.4-0 liborc-0.4-dev libvorbis-dev libcdparanoia-dev libcdparanoia0 cdparanoia libvisual-0.4-0 libvisual-0.4-dev libvisual-0.4-plugins libvisual-projectm vorbis-tools vorbisgain libopus-dev libopus-doc libopus0 libopusfile-dev libopusfile0 libtheora-bin libtheora-dev libtheora-doc libvpx-dev libvpx-doc libvpx? libqt5gstreamer-1.0-0 libgstreamer*-dev  libflac++-dev libavc1394-dev libraw1394-dev libraw1394-tools libraw1394-doc libraw1394-tools libtag1-dev libtagc0-dev libwavpack-dev wavpack

$ sudo apt install libfontconfig1-dev libfreetype6-dev libx11-dev libxext-dev libxfixes-dev libxi-dev libxrender-dev libxcb1-dev libx11-xcb-dev libxcb-glx0-dev

$ sudo apt install libxcb-keysyms1-dev libxcb-image0-dev libxcb-shm0-dev libxcb-icccm4-dev libxcb-sync0-dev libxcb-xfixes0-dev libxcb-shape0-dev libxcb-randr0-dev libxcb-render-util0-dev

$ sudo apt install libfontconfig1-dev libdbus-1-dev libfreetype6-dev libudev-dev

$ sudo apt install libasound2-dev libavcodec-dev libavformat-dev libswscale-dev libgstreamer*dev gstreamer-tools gstreamer*good gstreamer*bad

$ sudo apt install libicu-dev libsqlite3-dev libxslt1-dev libssl-dev

$ sudo apt install flex bison nasm

As you can see, the official guide is just ever so slightly insufficient.

Check Out Source Code From GIT

Now, after all the above preparations, you can check out the whole gstreamer extended family as in the official guide:
$ for module in gstreamer gst-plugins-base gst-plugins-good gst-plugins-ugly gst-plugins-bad gst-ffmpeg; do git clone git://anongit.freedesktop.org/gstreamer/$module ; done
...long wait...

BTW, if you do a long running process on another machine (real or virtual) over ssh, use screen:
$ ssh -t herman@server screen -R

Then, when the process is running, you can detach with ^a d and later reconnect to the screen session with the same command above.

See what happened:
$ ls
gst-ffmpeg  gst-plugins-bad  gst-plugins-base  gst-plugins-good  gst-plugins-ugly  gstreamer


Run The autogen.sh Scripts

Go into each directory and run ./autogen.sh.  If you get errors looking like 'nasm/yasm not found or too old... config.status: error: Failed to configure embedded Libav tree... configure failed', then of course you need to hunt down the missing package and add it with for example 'sudo apt install nasm', then try autogen.sh again.

Build and install the gstreamer and gst-plugins-base directories first, otherwise you will get a complaint about 'configure: Requested 'gstreamer-1.0 >= 1.15.0.1' but version of GStreamer is 1.14.0'.

You will get bazillions of compiler warnings, but you should not get any errors.  All errors need to be fixed somehow and patches submitted upstream, otherwise you won't get a useful resulting program, but the warnings you can leave to the project developers - let them eat their own dog food.  To me, warnings are a sign of sloppy code and I don't want to fix the slop of young programmers who haven't learned better yet:

$ cd gstreamer; ./autogen.sh 
$ make
$ sudo make install
$ cd ..

$ cd gst-plugins-base; ./autogen.sh
$ make
$ sudo make install
$ cd ..

Gstreamer has plugins that are in various stages of development/neglect, called The Good, The Bad and The Ugly.  Sometimes there is even a Very Ugly version.  The Sergio Leone movies they are named after are rather more entertaining than compiling gstreamer, so that will give you something to do on your other screen.

$ cd gst-plugins-good; ./autogen.sh
$ make
$ sudo make install
$ cd ..

$ cd gst-plugins-bad; ./autogen.sh 
$ make
$ sudo make install
$ cd ..

$ cd gst-plugins-ugly; ./autogen.sh
$ make
$ sudo make install
$ cd ..
 
$ cd gst-ffmpeg; ./autogen.sh
$ make
$ sudo make install
$ cd ..

The Proof Of The Pudding

The mpegtsmux multiplexer can be used to insert KLV metadata into a video stream:
$ gst-inspect-1.0 mpegtsmux|grep klv
      meta/x-klv


I eventually figured out the syntax and there is now a complete example of how to take Humpty Dumpty apart and put him back together again, in here:
https://www.aeronetworks.ca/2018/05/mpeg-2-transport-streams.html

The above link explains this example pipeline below:

$ gst-launch-1.0 -e mpegtsmux name=mux ! filesink location=dayflightnew.ts \
filesrc location=dayflight.klv ! meta/x-klv ! mux. \
filesrc location=dayflight.ts ! 'video/x-h264, stream-format=byte-stream, alignment=au' ! mux.


Some more research is required to write a little application to prepare the meta data and I suggest that you string things together through sockets or FIFOs.


La Voila!

Herman




Friday, May 18, 2018

Video Distribution With MPEG-2 Transport Streams

FFMPEG MPEG-2 TS Encapsulation

An observation aircraft could be fitted with three or four cameras and a radar.  In addition to the multiple video streams, there are also Key, Length, Value (KLV) metadata consisting of the time and date, the GPS position of the aircraft, the speed, heading and altitude, the position that the cameras are staring at, the range to the target, as well as the audio intercom used by the pilots and observers.  All this information needs to be combined into a single stream for distribution, so that the relationship between the various information sources is preserved.


Example UAV Video from FFMPEG Project

When the stream is recorded and played back later, one must still be able to determine which GPS position corresponds to which frame for example.  If one would save the data in separate files, then that becomes very difficult.  In a stream, everything is interleaved in chunks, so one can open the stream at any point and tell immediately exactly what happened, when and where.

The MPEG-2 TS container is used to encapsulate video, audio and metadata according to STANAG 4609.  This is similar to the Matroska format used for movies, but a movie has only one video channel.

The utilities and syntax required to manipulate encapsulated video streams are obscure and difficult to debug, since off the shelf video players do not support streams with multiple video substreams and will only play one of them, with no way to select which, since they were made for Hollywood movies, not STANAG 4609 movies.

After considerable head scratching, I finally figured out how to do it and, more importantly, how to test and debug it.  Using the Bash shell and a few basic utilities, it is possible to sit at any UNIX workstation and debug this complex stream wrapper and metadata puzzle interactively.  Once one has it all under control, one can write a C program to do it faster, or one can just leave it as a Bash script once it is working, since it is easy to maintain.


 Install the utilities

If you are using Debian or Ubuntu Linux, install the necessary tools with apt.  Red Hat based distributions use dnf instead:
$ sudo apt install basez ffmpeg vlc mplayer espeak sox 

Note that these tests were done on Ubuntu Linux 18.04 LTS.  You can obtain the latest FFMPEG version from Git by following the compile guide referenced above.  If you are using Windows, well, good luck.

Capture video for test purposes

Capture the laptop camera to a MP4 file in the simplest way:
$ ffmpeg -f v4l2 -i /dev/video0 c1.mp4

Make 4 camera files with different video sizes, so that one can distinguish them later.  Also make four numbered cards and hold them up to the camera to see easily which is which:

$ ffmpeg -f v4l2 -framerate 25 -video_size vga -pix_fmt yuv420p -i /dev/video0 -vcodec h264 c1.mp4
$ ffmpeg -f v4l2 -framerate 25 -video_size svga -pix_fmt yuv420p -i /dev/video0 -vcodec h264 c2.mp4
$ ffmpeg -f v4l2 -framerate 25 -video_size xga -pix_fmt yuv420p -i /dev/video0 -vcodec h264 c3.mp4
$ ffmpeg -f v4l2 -framerate 25 -video_size uxga -pix_fmt yuv420p -i /dev/video0 -vcodec h264 c4.mp4
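The four capture commands above can also be written as a loop.  This sketch only echoes the commands rather than executing them, since running them needs a live /dev/video0:

```shell
# Generate the four capture commands, one per video size.
n=0
for size in vga svga xga uxga; do
  n=$((n+1))
  echo ffmpeg -f v4l2 -framerate 25 -video_size $size -pix_fmt yuv420p \
    -i /dev/video0 -vcodec h264 c$n.mp4
done
```

Drop the echo to actually run the captures one after the other.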

 

Playback methods

SDL raises an error, unless pix_fmt is explicitly specified during playback: "Unsupported pixel format yuvj422p"

Here is the secret to play video with ffmpeg and SDL:
$ ffmpeg -i s2.mp4 -pix_fmt yuv420p -f sdl "SDL OUT"

...and here is the secret to play video with ffmpeg and X:
$ ffmpeg -i s2.mp4 -f xv Screen1 -f xv Screen2 

With X, you can decode the video once and display it on multiple screens, without increasing the processor load.  If you are a Windows user - please don't cry...

Play video with ffplay:
$ ffplay s2.mp4

ffplay also uses SDL, but it doesn’t respect the -map option for stream playback selection.  Ditto for VLC and Mplayer.

You can also play video with gstreamer gst-play-1.0:
$ gst-play-1.0 dayflight.mpg

Some help with window_size / video_size:
-window_size vga
‘cif’ = 352x288
‘vga’ = 640x480
...

 

Map multiple video streams into one mpegts container

Documentation: https://trac.ffmpeg.org/wiki/Map

Map four video camera input files into one stream:
$ ffmpeg -i c1.mp4 -i c2.mp4 -i c3.mp4 -i c4.mp4 -map 0:v -map 1:v -map 2:v -map 3:v -c:v copy -f mpegts s4.mp4

 

See whether the mapping worked

Compare the file sizes:
$ ls -al
total 14224
drwxr-xr-x  2 herman herman    4096 May 18 13:19 .
drwxr-xr-x 16 herman herman    4096 May 18 11:19 ..
-rw-r--r--  1 herman herman 1113102 May 18 13:12 c1.mp4
-rw-r--r--  1 herman herman 2474584 May 18 13:13 c2.mp4
-rw-r--r--  1 herman herman 1305167 May 18 13:13 c3.mp4
-rw-r--r--  1 herman herman 2032543 May 18 13:14 c4.mp4
-rw-r--r--  1 herman herman 7621708 May 18 13:19 s4.mp4


The output file s4.mp4 size is roughly the sum of the four camera parts above, plus the MPEG-TS packetization overhead.
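A quick check of the sizes in the listing above:

```shell
# Compare the sum of the four inputs with the muxed output size:
awk 'BEGIN {
  sum = 1113102 + 2474584 + 1305167 + 2032543
  out = 7621708
  printf "inputs: %d bytes, s4.mp4: %d bytes, TS overhead: ~%.0f%%\n", \
         sum, out, 100 * (out - sum) / sum
}'
```

The roughly 10% difference is the cost of the fixed 188 byte MPEG-TS packet framing.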

 

Analyze the output stream file using ffmpeg

Run "ffmpeg -i INPUT" (without specifying an output) to see what program IDs and stream IDs it contains:

$ ffmpeg -i s4.mp4
ffmpeg version 3.4.2-2 Copyright (c) 2000-2018 the FFmpeg developers
  built with gcc 7 (Ubuntu 7.3.0-16ubuntu2)
  configuration: --prefix=/usr --extra-version=2 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-
...snip...
Input #0, mpegts, from 's4.mp4':
  Duration: 00:00:16.60, start: 1.480000, bitrate: 3673 kb/s
  Program 1
    Metadata:
      service_name    : Service01
      service_provider: FFmpeg
    Stream #0:0[0x100]: Video: h264 (High 4:2:2) ([27][0][0][0] / 0x001B), yuvj422p(pc, progressive), 640x480 [SAR 1:1 DAR 4:3], 25 fps, 25 tbr, 90k tbn, 50 tbc
    Stream #0:1[0x101]: Video: h264 (High 4:2:2) ([27][0][0][0] / 0x001B), yuvj422p(pc, progressive), 960x540 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
    Stream #0:2[0x102]: Video: h264 (High 4:2:2) ([27][0][0][0] / 0x001B), yuvj422p(pc, progressive), 1024x576 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
    Stream #0:3[0x103]: Video: h264 (High 4:2:2) ([27][0][0][0] / 0x001B), yuvj422p(pc, progressive), 1280x720 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc

Running ffmpeg with no output shows that the streams have different resolutions and correspond to the original four files (640x480, 960x540, 1024x576, 1280x720).

 

Play or extract specific substreams

Play the best substream with SDL (uxga):
$ ffmpeg -i s4.mp4 -pix_fmt yuv420p -f sdl "SDL OUT"

Play the first substream (vga):
$ ffmpeg -i s4.mp4 -pix_fmt yuv420p -map v:0 -f sdl "SDL OUT"

Use -map v:0 through -map v:3 to play or extract the different video substreams.

Add audio and data to the mpegts stream:

Make two audio test files:
$ espeak "audio channel one, audio channel one, audio channel one" -w audio1.wav
$ espeak "audio channel two, audio channel two, audio channel two" -w audio2.wav


Convert the files from wav to m4a to be compliant with STANAG 4609:
$ ffmpeg -i audio1.wav -codec:a aac audio1.m4a
$ ffmpeg -i audio2.wav -codec:a aac audio2.m4a

Make two data test files:
$ echo "Data channel one. Data channel one. Data channel one." > data1.txt
$ echo "Data channel two. Data channel two. Data channel two." > data2.txt

 

Map video, audio and data into the mpegts stream

Map three video camera input files, two audio and one data stream into one mpegts stream:
$ ffmpeg -i c1.mp4 -i c2.mp4 -i c3.mp4 -i audio1.m4a -i audio2.m4a -f data -i data1.txt -map 0:v -map 1:v -map 2:v -map 3:a -map 4:a -map 5:d -c:v copy -c:d copy -f mpegts s6.mp4

As it turns out, mapping data into a stream with ffmpeg doesn't actually work yet, but it does work with gstreamer - see below.

 

Verify the stream contents

See whether everything is actually in there:
$ ffmpeg -i s6.mp4
…snip...
[mpegts @ 0x55f2ba4e3820] start time for stream 5 is not set in estimate_timings_from_pts
Input #0, mpegts, from 's6.mp4':
  Duration: 00:00:16.62, start: 1.458189, bitrate: 2676 kb/s
  Program 1
    Metadata:
      service_name    : Service01
      service_provider: FFmpeg
    Stream #0:0[0x100]: Video: h264 (High 4:2:2) ([27][0][0][0] / 0x001B), yuvj422p(pc, progressive), 640x480 [SAR 1:1 DAR 4:3], 25 fps, 25 tbr, 90k tbn, 50 tbc
    Stream #0:1[0x101]: Video: h264 (High 4:2:2) ([27][0][0][0] / 0x001B), yuvj422p(pc, progressive), 960x540 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
    Stream #0:2[0x102]: Video: h264 (High 4:2:2) ([27][0][0][0] / 0x001B), yuvj422p(pc, progressive), 1024x576 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
    Stream #0:3[0x103](und): Audio: mp2 ([4][0][0][0] / 0x0004), 22050 Hz, mono, s16p, 160 kb/s
    Stream #0:4[0x104](und): Audio: mp2 ([4][0][0][0] / 0x0004), 22050 Hz, mono, s16p, 160 kb/s
    Stream #0:5[0x105]: Data: bin_data ([6][0][0][0] / 0x0006)

The ffmpeg analysis of the stream shows three video, two audio and one data substream.  Note that the audio got transcoded to mp2, the mpegts muxer default, since the command above did not specify -c:a copy.

 

Extract the audio and data from the stream

Extract and play one audio channel:
$ ffmpeg -i s6.mp4 -map a:0 aout1.m4a
$ ffmpeg -i aout1.m4a aout1.wav
$ play aout1.wav

and the other one:
$ ffmpeg -i s6.mp4 -map a:1 aout2.m4a
$ ffmpeg -i aout2.m4a aout2.wav
$ play aout2.wav

Extract the data

Extract the data using the -map d:0 parameter:
$ ffmpeg -i s6.mp4 -map d:0 -f data dout1.txt

...and nothing is copied.  The output file is zero length.

This means the original data was not inserted into the stream in the first place, so there is nothing to extract.

It turns out that while FFMPEG does support data copy, it doesn't support data insertion yet.  For the time being, one should either code it up in C using the API, or use Gstreamer to insert the data into the stream: https://developer.ridgerun.com/wiki/index.php/GStreamer_and_in-band_metadata#KLV_Key_Length_Value_Metadata

Extract KLV data from a real UAV video file

You can get a sample UAV observation file with video and metadata here:

$ wget http://samples.ffmpeg.org/MPEG2/mpegts-klv/Day%20Flight.mpg

Get rid of that stupid space in the file name:
$ mv "Day Flight.mpg" DayFlight.mpg

The above file is perfect for meta data copy and extraction experiments:
$ ffmpeg -i DayFlight.mpg -map d:0 -f data dayflightklv.dat
...snip
 [mpegts @ 0x55cb74d6a900] start time for stream 1 is not set in estimate_timings_from_pts
Input #0, mpegts, from 'DayFlight.mpg':
  Duration: 00:03:14.88, start: 10.000000, bitrate: 4187 kb/s
  Program 1
    Stream #0:0[0x1e1]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(progressive), 1280x720, 60 fps, 60 tbr, 90k tbn, 180k tbc
    Stream #0:1[0x1f1]: Data: klv (KLVA / 0x41564C4B)
Output #0, data, to 'dayflightklv.dat':
  Metadata:
    encoder         : Lavf57.83.100
    Stream #0:0: Data: klv (KLVA / 0x41564C4B)
Stream mapping:
  Stream #0:1 -> #0:0 (copy)
Press [q] to stop, [?] for help
size=       1kB time=00:00:00.00 bitrate=N/A speed=   0x   
video:0kB audio:0kB subtitle:0kB other streams:1kB global headers:0kB muxing overhead: 0.000000%


Dump the KLV file in hexadecimal:
$ hexdump dayflightklv.dat
0000000 0e06 342b 0b02 0101 010e 0103 0001 0000
0000010 9181 0802 0400 8e6c 0320 8583 0141 0501
0000020 3d02 063b 1502 0780 0102 0b52 4503 4e4f
0000030 0e0c 6547 646f 7465 6369 5720 5347 3438
0000040 040d c44d bbdc 040e a8b1 fe6c 020f 4a1f
0000050 0210 8500 0211 4b00 0412 c820 7dd2 0413
0000060 ddfc d802 0414 b8fe 61cb 0415 8f00 613e
0000070 0416 0000 c901 0417 dd4d 2a8c 0418 beb1
0000080 f49e 0219 850b 0428 dd4d 2a8c 0429 beb1

...snip 

Sneak a peek at any interesting text strings:

$ strings dayflightklv.dat
KLVA'   

BNZ
Bms
JUD
07FEB
5g|IG

...snip

Cool, it works!
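Note that hexdump prints little-endian 16-bit words, so each byte pair in the dump above is swapped on screen: the file actually begins 06 0E 2B 34..., which is the ST 0601 UAS Datalink Local Set universal key.  Here is a hedged little Python sketch that parses the start of the dump (the bytes are transcribed from the hexdump above, and the BER length decoder only handles the definite forms):

```python
# The ST 0601 UAS Datalink LS 16 byte universal key, as seen (byte
# swapped) at the start of the hexdump above.
UAS_LS_KEY = bytes.fromhex("060e2b34020b01010e01030101000000")

def ber_length(buf, i):
    """Decode a BER length field at offset i; return (length, next offset)."""
    b = buf[i]
    if b < 0x80:                        # short form: one byte holds the length
        return b, i + 1
    n = b & 0x7F                        # long form: the next n bytes hold the length
    return int.from_bytes(buf[i + 1:i + 1 + n], "big"), i + 1 + n

# First bytes of dayflightklv.dat, transcribed from the hexdump above:
# 81 91 = long form BER length, then tag 02, length 08 (the time stamp).
head = UAS_LS_KEY + bytes.fromhex("819102080004")

assert head[:16] == UAS_LS_KEY          # universal key matches
length, i = ber_length(head, 16)
print("packet length:", length)         # 0x91 = 145 bytes
print("first tag, len:", head[i], head[i + 1])  # tag 2, length 8
```

The first tag in the packet is indeed tag 2, the time stamp, with a length of 8 bytes, as required by the standard.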


Disassemble and Reassemble Humpty Dumpty

Here is a complete MPEG-2 TS video split and merge example, using a combination of FFMPEG and Gstreamer.

Get a STANAG 4609 MPEG-2 TS reference stream file:
$ wget http://samples.ffmpeg.org/MPEG2/mpegts-klv/Day%20Flight.mpg
$ mv Day\ Flight.mpg dayflight.mpg


Play the video:
$ ffplay dayflight.mpg
$ gst-play-1.0 dayflight.mpg


Extract the dayflight video to a file, without transcoding it, using the copy codec:
$ ffmpeg -i dayflight.mpg -map v:0 -c copy dayflight.ts

Extract the dayflight metadata:
$ ffmpeg -i dayflight.mpg -map d:0 -f data dayflight.klv


Putting Humpty Dumpty back together again is not so easy:
$ gst-launch-1.0 -e mpegtsmux name=mux ! filesink location=dayflightnew.ts \
filesrc location=dayflight.klv ! meta/x-klv ! mux. \
filesrc location=dayflight.ts ! 'video/x-h264, stream-format=byte-stream, alignment=au' ! mux.


Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:00:00.451209108
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...


$ ls -al
-rw-rw-r--  1 herman herman       977 Jan 11 07:27 dayflight.klv
-rw-rw-r--  1 herman herman 102004664 Oct  6  2012 dayflight.mpg
-rw-rw-r--  1 herman herman 112750932 Jan 11 11:47 dayflightnew.ts
-rw-rw-r--  1 herman herman 106804116 Jan 11 07:53 dayflight.ts


That seems like it worked, but I am still not sure whether the data and video are interleaved correctly. 

Bah, humbug!  While the above worked fine on a virtual machine a few weeks ago, it doesn't work anymore on a real system.  Now I get various errors and the video doesn't want to synchronize on playback.  I have also found that gstreamer behaves differently when writing to files, vs writing to a UDP stream.  This is all very un-UNIX-like.  A system should not care whether it is using a file, a FIFO, or a stream, but it does.
Sooo, some more head-scratching is required to reliably reassemble Humpty Dumpty.

KLV Data Debugging

The KLV data is actually what got me started with this in the first place.   The basic problem is how to ensure that the GPS data is saved with the video, so that one can tell where the plane was and what it was looking at, when a recording is played back later.

The transport of KLV metadata over MPEG-2 transport streams in an asynchronous manner is defined in SMPTE RP 217 and MISB ST0601.8:
http://www.gwg.nga.mil/misb/docs/standards/ST0601.8.pdf

Here is a more human friendly description:
https://impleotv.com/2017/02/17/klv-encoded-metadata-in-stanag-4609-streams/

You can make a short form meta data KLV LS test message using echo with \\x escapes to output binary values to a file.  Working with binary data in Bash is problematic, but one just needs to know the limitations (zero, line feed and carriage return bytes may disappear, for example): don't store binary data in a shell variable (use a file), and don't do shell arithmetic on it; use the calculator bc or awk instead.

The key, length and date are in this example, but I'm still working on the checksum calculation and the byte orders are probably not correct.  It only gives the general idea of how to do it at this point:

# Universal Key for Local Data Set
echo -en \\x06\\x0E\\x2B\\x34\\x02\\x0B\\x01\\x01 > klvdata.dat
echo -en \\x0E\\x01\\x03\\x01\\x01\\x00\\x00\\x00 >> klvdata.dat
# Length 76 bytes for short packet
echo -en \\x4c >> klvdata.dat
# Value: First ten bytes is the UNIX time stamp: tag 2, length 8, 8 byte time
echo -en \\x02\\x08 >> klvdata.dat
# NB: printf appends the time as ASCII digits here, not as binary bytes
printf "%0d" "$(date +%s)" >> klvdata.dat
echo -en \\x00\\x01\\x02\\x03\\x04\\x05\\x06\\x07\\x08\\x09 >> klvdata.dat
echo -en \\x00\\x01\\x02\\x03\\x04\\x05\\x06\\x07\\x08\\x09 >> klvdata.dat
echo -en \\x00\\x01\\x02\\x03\\x04\\x05\\x06\\x07\\x08\\x09 >> klvdata.dat
echo -en \\x00\\x01\\x02\\x03\\x04\\x05\\x06\\x07\\x08\\x09 >> klvdata.dat
echo -en \\x00\\x01\\x02\\x03\\x04\\x05\\x06\\x07\\x08\\x09 >> klvdata.dat
echo -en \\x00\\x01\\x02\\x03\\x04\\x05\\x06\\x07\\x08\\x09 >> klvdata.dat
echo -en \\x00\\x01 >> klvdata.dat
# Checksum tag 1, length 2
echo -en \\x01\\x02 >> klvdata.dat
# Calculate 2 byte sum with bc
echo -en \\x04\\x05 >> klvdata.dat

The UTC time stamp, counted since the Epoch of 1 Jan 1970, must be the first data field.  Note that printf emits it as ASCII digits, as the hexdump shows, not as the 8 byte binary integer that ST 0601 expects:
$ printf "%0d" "$(date +%s)" | hexdump
0000000 3531 3632 3237 3838 3030              
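As I read ST 0601, the Precision Time Stamp (tag 2) is an 8 byte, big-endian count of microseconds since the Epoch.  Packing it in binary is much less painful in Python than in Bash; a minimal sketch:

```python
import struct
import time

# ST 0601 tag 2: Precision Time Stamp, an 8 byte unsigned big-endian
# count of microseconds since the UNIX Epoch (1 Jan 1970).
def time_stamp_field(t=None):
    us = int((time.time() if t is None else t) * 1_000_000)
    return bytes([0x02, 0x08]) + struct.pack(">Q", us)  # tag, length, value

field = time_stamp_field()
print(field.hex())   # tag 02, length 08, then 8 bytes of time
```

The resulting 10 bytes can be appended to the KLV file with open("klvdata.dat", "ab"), instead of the printf line in the shell script above.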

The checksum is a doozy.  It is a 16 bit sum of everything excluding the sum itself and would need the help of the command line calculator bc.  One has to read two bytes at a time, swap them around (probably), then convert the binary to hex text, do the calculation in bc and eventually output the data in binary back to the file.  I would need a very big mug of coffee to get that working.
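In Python the checksum is only a few lines and no coffee is required.  This is a sketch of the running 16 bit sum as I understand the ST 0601 reference algorithm (each byte is added into the high or low half of the accumulator alternately, over everything from the start of the universal key through the checksum tag and length bytes); I have not validated it against a reference encoder:

```python
def bcc_16(buf):
    """MISB ST 0601 style checksum: running 16 bit sum over the packet,
    with bytes at even offsets added into the high byte and bytes at
    odd offsets into the low byte, truncated to 16 bits."""
    bcc = 0
    for i, b in enumerate(buf):
        bcc += b << (8 * ((i + 1) % 2))
    return bcc & 0xFFFF

# The two checksum value bytes then go at the very end of the packet:
packet = bytes.fromhex("0102")   # stand-in for key + length + value fields
print("%04x" % bcc_16(packet))   # 0102 for this trivial input
```

The two bytes of bcc_16(packet).to_bytes(2, "big") replace the \\x04\\x05 placeholder in the shell script above.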

Multicast Routing

Note that multicast routing is completely different from unicast routing.  A multicast packet still has a normal unicast source address, but its destination is a group address, and at the Ethernet layer the destination MAC is derived from the group address, not from any host MAC.  To receive a stream, a host has to subscribe to the group with IGMP.
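For what it is worth, the layer 2 destination address is concocted from the group address, not from any host MAC: the low 23 bits of the IPv4 group are copied into the fixed 01:00:5e multicast OUI prefix.  A quick Python sketch:

```python
import ipaddress

def multicast_mac(group):
    """Map an IPv4 multicast group address to its Ethernet MAC:
    the low 23 bits of the group go into the 01:00:5e prefix."""
    low23 = int(ipaddress.IPv4Address(group)) & 0x7FFFFF
    mac = bytes([0x01, 0x00, 0x5E]) + low23.to_bytes(3, "big")
    return ":".join("%02x" % b for b in mac)

print(multicast_mac("224.0.1.10"))   # 01:00:5e:00:01:0a
```

Since only 23 of the 28 group bits survive, 32 different groups share each MAC address, which is one more reason why multicast needs IGMP snooping switches to behave.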

Here, there be dragons.

If you need to route video between two subnets, then you should consider sparing yourself the headache and rather use unicast streaming.  Otherwise, you would need an expensive router from Cisco or HPE, or an OpenBSD machine running dvmrpd.

Linux multicast routing is not recommended, for three reasons: the router code is undocumented, unsupported and buggy.  Windows cannot route multicast at all and FreeBSD needs a kernel recompile for multicast routing.  Only OpenBSD supports multicast routing out of the box.

Do not meddle in the affairs of dragons,
for you are crunchy
and taste good with ketchup.

Also consider that UDP multicast packets are typically sent with a default Time To Live of 1, meaning that they will be dropped at the first router.  The sender therefore has to raise the TTL before the stream can be routed at all, since each router decrements it by one.
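The sender raises the TTL with a socket option.  A minimal Python sketch (the group and port are just the example values used elsewhere on this page):

```python
import socket
import struct

# Raise the multicast TTL on the sending socket, so that the stream
# can cross routers; the default of 1 confines it to the local subnet.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                struct.pack("b", 8))   # allow up to 8 router hops
# sock.sendto(ts_packet, ("224.0.1.10", 5000))  # example group and port
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL))
```

Tools like ffmpeg expose the same thing as a URL option, e.g. udp://224.0.1.10:5000?ttl=8.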

If you need to use OpenBSD, do get a copy of Absolute OpenBSD - UNIX for the Practically Paranoid, by M.W. Lucas.

Embedded Solutions

Here is an interesting toy:  http://www.ampltd.com/products/pc104-h264-hdav2000klv/

I haven't tried it yet!
 

Five ways to Play Video Streams With Low Latency

You may sometimes find that a video stream seems to have 3 to 10 seconds of delay, making control of a camera payload practically impossible.   This delay is due to excessive buffering in the player.  The radios do not have enough memory to store 3 seconds of video, so don't blame it on the radio modems.

 

Play with ffplay if it is available:

$ ffplay -fast udp://224.0.1.10:5000

Sometimes, ffplay is not part of the FFMPEG installation.  If you have this problem and don't want to compile it from source, then you can use ffmpeg with SDL as below, which is what ffplay does also.

Play a stream using FFMPEG and SDL to render it to the default screen:
$ ffmpeg -i udp://224.0.1.10:5000 -f sdl -

You could also play the video with mplayer:
$ mplayer -benchmark udp://224.0.1.10:5000

You can likewise play the video with gstreamer, most easily with gst-play:
$ gst-play-1.0 udp://224.0.1.10:5000
 
or with gst-launch (note that udpsrc takes an address parameter, and that the transport stream has to be demuxed and decoded explicitly):
$ gst-launch-1.0 udpsrc address=224.0.1.10 port=5000 caps="video/mpegts, systemstream=true" ! tsdemux ! decodebin ! autovideosink


La voila!

Herman