Tuesday, November 13, 2018

Care and Feeding of a Parabolic Reflector

If you want to listen to Jupiter sing, bounce a message off the Moon, talk to a satellite or a little unmanned aircraft, you need a very high gain antenna.  An easy way to make one is from an old C-band satellite TV Big Ugly Dish (BUD).


To use an unknown dish, you need to find its focal point and then make a little antenna with a good front to back ratio, to use as a feed.

Focal Length

The focal point of a parabola is easy to find using some forgotten high school geometry:
  • Measure the diameter (D) and the depth (d) of the dish.
  • The focal length F = D^2 / (16 x d)
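As a quick sanity check, here is the same sum in the shell for a hypothetical 1.8 m dish with a 20 cm depth (dimensions illustrative):

$ D=1.8; d=0.20
$ echo "scale=3; $D * $D / (16 * $d)" | bc
1.012

So the feed sits just over a metre in front of such a dish.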
Note that an offset feed dish is only half a parabola.  It is best to use a circular dish with centre feed.  There are millions of these things lying around, pick a good one.

When it is free, take two.
-- ancient Jewish proverb.

Feeding a Hungry Dish

Most satellites rotate slowly, to improve their stability.  This means that an antenna needs to be circularly polarized, otherwise the signal will fade and fluctuate twice with each revolution.  This requires either a helical antenna, or a turnstile Yagi antenna.

A Yagi antenna tends to have a very low impedance, while a helical antenna tends to have a very high impedance.  The parasitic elements of a Yagi load the active element, much like resistors in parallel.  One can use the same effect with a helical antenna, to reduce its impedance to something closer to a 50 Ohm co-axial feed cable.

A multifilar helical antenna can be tweaked to almost exactly 50 Ohm, by driving one filament and leaving the others floating, just like Yagi director elements.  The more floating parts, the lower the impedance gets.

Bifilar Helical Feed for WiFi ISM Band

An easy way to make a small helical antenna for the S-Band is with semi-rigid coaxial cable of 2.2 mm diameter (3 mm is stiffer, but still doable).


Twist and Shout

Cut two filaments.  Clamp them carefully onto a breadboard and then glue a bunch of toothpicks across them with a hot glue gun.  Once you have finished building the antenna, twist the helix by hand and then remove the toothpicks.

I mounted the filaments in a little wood block glued to a circular FR4 PCB reflector: drill 2.5 mm holes into the block, then glue the filaments into the wood.

The parasitic element is just standing there above the ground plane, seemingly doing nothing.

The driven element must be soldered to the centre of a 50 Ohm feed and the screen of the feed must be soldered to the reflector, in a cut-out under the wood block.  Making a helix always requires some improvisation, which is a large part of the 'fun'.

Once the helix is assembled, glue the block to the reflector.

Mount the feed at the end of a wooden dowel rod, at the focal point of the dish.  That is the easy part!

Helix Design

From the famous graph of Kraus, we get the following:
  • Frequency: 2450 MHz helical array
  • c=299792458 m/s
  • Wavelength = 2.998x10^8 / 2.45x10^9 = 0.122 m
  • Axial Mode:
    • Circumference = 1.2 x 0.122 = 0.146 m
    • Pitch = 0.4 x 0.122 = 0.049 m
    • Turns = 1
  • Length of filament: sqrt(circumference^2 + pitch^2) x turns = 0.154 m
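The same numbers can be checked with a one liner (using the unrounded wavelength, which is why the last digits differ slightly from the rounded figures above):

$ awk 'BEGIN { c = 299792458; f = 2450e6; wl = c/f
    circ = 1.2*wl; pitch = 0.4*wl
    printf "WL=%.3f m, C=%.3f m, P=%.3f m, filament=%.3f m\n",
        wl, circ, pitch, sqrt(circ^2 + pitch^2) }'
WL=0.122 m, C=0.147 m, P=0.049 m, filament=0.155 m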

NEC2 Model

Here is the NEC2 model of the bifilar WiFi ISM band helical feed:

CM Bifilar 2.450 GHz ISM Band Helical Antenna with Parasitic Element
CM Copyright reserved, Herman Oosthuysen, 2018, GPL v2
CM
CM 2450 MHz helical array
CM c=299792458 m/s
CM Wave length = 2.998x10^8 / 2450 MHz = 0.122 m
CM WL/2 = 0.061 m
CM WL/4 = 0.030 m
CM Axial Mode:
CM Circumference = 1.2 x 0.122 = 0.146 m
CM Pitch = 0.4 x 0.122 = 0.049 m
CM Turns = 1
CM
CE
# Helix driven element
# Tag, Segments, Turn Spacing, Helix Length, Rx1, Ry1, Rx2, Ry2, Wire Radius
GH     1     100   4.90E-02  4.90E-02   2.3E-02   2.3E-02   2.3E-02   2.3E-02   2.20E-03
# Parasitic helix element, 180 degrees rotated
GM     1     1     0.00E+00  0.00E+00   1.80E+02  0.00E+00  0.00E+00  0.00E+00  0.00E+00
# Ground plane
SM    20    20 -5.00E-02 -5.00E-02 -1.00E-03  5.00E-02 -5.00E-02 -1.00E-03  0.00E+00
SC     0     0  5.00E-02  5.00E-02 -1.00E-03  0.00E+00  0.00E+00  0.00E+00  0.00E+00
GE
# THIN WIRE KERNEL: NORMAL: 0; EXTENDED: -1
EK -1
# EXCITATION: I1 VOLTAGE: 0, I2 TAG: 1, I3 SEGMENT: 1, I4 ADMITTANCE: 0, F1: 1 VOLT, F2: 0 IMAGINARY
EX  0   1   1   0   1   0
# FREQUENCY: IFRQ LINEAR: 0, NFRQ STEPS: 41, BLANK, BLANK, FMHZ: 2350 MHz, DELFRK: 5 MHz
FR  0   41   0   0   2.35E+03   5
RP  0   91  120 1000     0.000     0.000     2.000     3.000 5.000E+03
EN


Radiation Pattern

The helix made from 2.2 mm semi-rigid coaxial cable has a good front to back ratio of about 6 dB and a nearly flat frequency response over the 2.4 GHz band.

Execute the simulation with xnec2c:
$ xnec2c -i filename.nec



The parasitic element does its thing remarkably well, resulting in an impedance of 48 Ohm (inductive), which is a near perfect match to a 50 Ohm coaxial line.  The inductive component doesn't matter - it just causes a phase shift.

Note that the frequency is very high and the wavelength is very short.  Therefore, if you change anything by as little as half a millimeter, the results will be completely different.  

If you want to use a different size of semi-rigid co-ax from your cable junk box to make the helix, then you will need to spend a couple of hours tweaking the helix parameters (width and spacing) to get the impedance back to about 50 Ohm.


La voila!

Herman



Monday, September 24, 2018

Tactical UAV Communications

I started to write a book on Tactical UAV Communications and Mission Systems.  The book seems to have a life of its own and it grows in fits and starts.  It is based on my own experience working on very expensive toys in various shapes and sizes, fixed and rotary wing, in multiple places around the globe.

Sokol Altius

The few aircraft and other pictures in the book are all of UAVs and equipment that I didn't work on, since I am not allowed to show you anything that I actually did!  These pictures serve as illustrations for educational purposes only and provide a little free advertisement to the companies concerned.

Some chapters in the book are based on articles that were already published on this web site, while others are new.

Here is an early PDF copy for those who are interested: UAV-Comms-0.7.pdf
(It will take a little while to download from my $10 Droplet server and some days it is a bit slooow...)

La voila!

Herman

Wednesday, July 25, 2018

Patch Antenna Design with NEC2

The older free Numerical Electromagnetic Code version 2 (NEC2) from Lawrence Livermore Lab assumes an air dielectric.  This makes it hard (but not impossible) for a radio amateur to experiment with Printed Circuit Board Patch antennas and micro strip lines.


Air Spaced Patch Antenna Radiation Pattern


You could use the free ASAP simulation program, which handles thin dielectrics; you could shell out a few hundred Dollars for a copy of NEC4; you could buy GEMACS if you live in the USA; or you could add distributed capacitors to a NEC2 model with LD cards (hook up one capacitor in the middle of each element) - but all of that is far too much money/trouble for most.

Air Dielectric Patch 

The obvious lazy solution is to accept the limitation and make an air dielectric patch antenna.

An advantage of using an air dielectric is that the antenna will be more efficient: it is physically bigger, and since the air isn't heated up by RF, there is no dielectric loss.

An air spaced patch can be made of tin plate from a coffee can with a pair of tin snips.  A coffee can doesn't cost much and it comes with free contents which can be taken orally or intravenously...

Once you are done experimenting, you can get high quality copper platelets from an EMI/RFI can manufacturer such as Tech Etch, ACE UK and others.


Wire Grid With Feed Point

This grid is not square.  The length is slightly shorter than the width, to avoid weird standing waves which would disturb the pattern.  Making these things is part design and part art.  You need to run lots of experiments to get a feel for it, which may take a few days, so you need lots of patience.  If the pattern looks like a weird undersea creature, then the design is unstable and it will not work in practice.

Find the range where the radiation pattern looks pleasing, with a well defined, rounded main lobe and reasonable gain, and go for the middle, so that you get a design that is not ridiculously sensitive and can be built successfully.  It doesn't help to design an antenna with super high gain if, when you build it, you only get a small fraction thereof due to parasitic and tolerance effects - rather design something that is repeatable and not easily disturbed.

Ground Plane

To model a patch antenna, you need to design two elements: the patch and the ground plane.  The ground plane needs to be a bit bigger than the patch.  The distance between the two is extremely critical and it is important that you can easily vary the gap to find the sweet spot where you get the desired antenna pattern.  With a patch antenna, varying the height by only one millimeter has a large effect on the pattern.

The NEC ground plane GN card is always at the origin Z = 0.  If you model the patch as a grid of wires, then changing the height above this ground is a very laborious job.  A grid with 21 x 21 wires has 84 values of Z.  You need a programmer's editor with a macro feature to change all that, without going nuts in the process.  It would be much easier if the antenna grid could be kept still and the ground plane shifted up or down instead.

It turns out that the Surface Patch feature of NEC can be successfully misused as a ground plane.  Make a ground plane with GN 1 and make a surface patch and compare the radiation patterns - you'll see they are the same.

Normally, something modeled with SP cards must be a fully enclosed volume, but it works perfectly as a two dimensional ground plane if the antenna is always above it, with nothing below.  The height of a multi patch surface 'ground plane' can be altered by changing only three values of Z, which is rather easier than the 84 Z heights in the wire grid.

Wire Grid

You could model the patch using SP cards, but then you need to define all 6 sides of the 3D plate, which is just as much hassle as making a wire grid with GW cards.  You could also make a wire grid by starting with one little two segment wire and careful use of GM cards, to rotate it into a little cross and replicate it to the side and down, but then it becomes hard to figure out where to put the feed point, since the tag numbers of the wires become unknown after using GM cards.

In the end, I modelled the example patch grid using GW cards, since it is rather mindless to do, and then defined the feed point on wire #16.  If you use the replication method, then define a tiny 1 segment, 1 mm long vertical wire, with the (x,y) co-ordinates calculated to be exactly on a grid wire, without having to know the tag number of that wire.  For this method, I assign a high number (1000) to the tiny feed wire tag, so I can tie a transmission line TL card to it.
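Such a feed wire could look something like this - one segment, 1 mm tall, standing on a point chosen to coincide with a grid wire (the coordinates and radius here are only illustrative):

# Tiny 1 segment, 1 mm vertical feed wire with tag 1000
# GW Tag NS X1 Y1 Z1 X2 Y2 Z2 Radius
GW 1000 1 0.00E+00 +3.90E-02 0.00E+00 0.00E+00 +3.90E-02 1.00E-03 5.00E-04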

You will see the logic in this approach once you try to make a multi patch array by rotating and translating the first patch with multiple GM cards and then sit and stare at the screen and wonder where the heck to put the feeds.

Parallel Plate Capacitor

A patch antenna is a parallel plate capacitor.

 Smith Chart - Capacitive Load

Whereas a Helical Antenna is inductive, a Patch is capacitive, and you have to live with it.  The impedance on the edge is very high and can be made more reasonable by offsetting the feed point about 30% from the edge, but whatever you do, it will be capacitive, on the edge of the Smith chart.  For best results, you may need to add an antenna matching circuit to a patch array antenna.

Design Formulas

Designing an air dielectric patch antenna turned out to be very simple.  A PCB patch requires a complex formula to describe it, because the edge effects are through the air while the main field is through the dielectric.  With an air spaced patch, everything is through air and all the complications disappear in a puff of magic.

Where c is the speed of light and f is the design frequency:
  • The wavelength WL = c / f
  • The width of the patch W = WL / 2
  • The length of the patch L = 0.49 x WL
  • The feed point F = 0.3 x L
The height above ground is best determined experimentally and will be a few millimeters.
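For the 940 MHz example below, the formulas work out as follows, matching the 160 x 156 mm wire grid in the model:

$ awk 'BEGIN { c = 299792458; f = 940e6; wl = c/f
    w = wl/2; l = 0.49*wl; fp = 0.3*l
    printf "WL=%.1f mm, W=%.1f mm, L=%.1f mm, feed offset=%.1f mm\n",
        wl*1000, w*1000, l*1000, fp*1000 }'
WL=318.9 mm, W=159.5 mm, L=156.3 mm, feed offset=46.9 mm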

If you start with, say, a 10 mm gap and gradually reduce the height, then after a while you will find a spot where the calculations explode and the radiation plot becomes a big round ball (cocoaNEC), or just a black screen (xnec2c).  This is the point where the antenna resonates.  For this patch, it happens at a height of 5 mm.  The optimal pattern is achieved when the gap is one or two mm wider than that, at 6 or 7 mm - simple.

The design frequency should be 3% higher than the desired frequency.  

When you build an antenna, there are always other things in close proximity that load it: metal parts, glue, spacers, cables, etc.  All these things make the antenna operate at a slightly lower frequency than what it was designed for.  Therefore, design for a slightly higher frequency and then it will be spot on.

PCB Dielectrics Modeled With NEC2

If you really want to make a Printed Circuit Board (PCB) antenna, then you need to use a special type of Teflon (PTFE) PCB that has a controlled dielectric value.  Ordinary fibre glass and epoxy resin FR4 has a relative permittivity that varies wildly from 4.2 to 4.7, which is too much for consistent, reproducible results.  Read this for details: https://www.arrowpcb.com/laminate-material-types.php

You need to find a PCB house, look at the available materials and then design the antenna accordingly.  For microwave RF applications, pure PTFE on a fiberglass substrate, with a relative permittivity εr of 2.1 and a loss tangent of 0.0009, is the best available in wide commercial use.  Calculate the capacitance of a little elemental square with the simple thin parallel plate formula, where ε0 is the permittivity of free space:
C = εr ε0 A / d

You can simulate the dielectric in NEC2 by attaching a load (LD) card with a small capacitor, as calculated above, to the middle of each element - calculating all the co-ordinates will keep you busy for a while!  The NEC2 simulation result should be quite accurate once you add all these little parasitic capacitors.  The easier way is to create one little element and then use GM cards to rotate and replicate it in two dimensions to make a patch, without having to calculate hundreds of x,y co-ordinates, which would drive any sane person up a wall.
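As a rough example, take an elemental cell of the 940 MHz grid below, about 8 x 7.8 mm, on a 1.6 mm thick PTFE board with εr = 2.1 (the cell size and board thickness are assumptions for illustration):

$ awk 'BEGIN { e0 = 8.854e-12; er = 2.1
    A = 8.0e-3 * 7.8e-3    # area of one elemental cell, m^2
    d = 1.6e-3             # board thickness, m
    printf "C = %.3g F\n", er * e0 * A / d }'
C = 7.25e-13 F

Each cell would then get a series RLC load card with only the capacitance set, something like LD 0 5 11 11 0 0 7.25E-13, where the tag and segment numbers are illustrative.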


Signal speed is inversely proportional to the square root of the dielectric constant: a low dielectric constant results in a high signal propagation speed, and a high dielectric constant in a much slower one.  This has a very large effect on the dimensions of the antenna.

The problem is that you can only vary the patch to ground spacing in a few discrete steps, since it is determined by the thickness of the chosen PCB, which is typically 0.2, 0.8, 1.6 or 3.2 mm.  You can vary the length and width in the simulation using a geometry scale GS card, but scaling will also change the spacing, so you then have to modify the position of the ground plane to get the model back to the fixed thickness of the PCB.  Nothing is ever easy with this program.
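For example, a 3 percent shrink is a single card; if the SM/SC ground plane cards are placed after it, they are not scaled, so they stay at the physical board thickness (a sketch, not a verified recipe):

# Scale all structure dimensions defined above by a factor of 0.97
GS 0 0 0.97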

Example Patch Antenna

Here is a set of NEC2 cards for an air dielectric 33 cm Ham band or 900 MHz ISM band patch antenna made from a tin or copper rectangle, a few mm above a somewhat larger ground plane:

CM Surface Patch Antenna
CM Copyright reserved, GPL v2, Herman Oosthuysen, July 2018
CM
CM 940 MHz (915 + 3%)
CM H=7 mm, W=160 (80), L=156 (78)
CE
#
# Active Element: 21x21 Wires in a Rectangle
# X axis
# GW Tag NS X1 Y1 Z1 X2 Y2 Z2 Radius
GW 1  21 -8.00E-02 -7.80E-02 0.00E+00 +8.00E-02 -7.80E-02 0.00E+00 1.00E-03
GW 2  21 -8.00E-02 -7.02E-02 0.00E+00 +8.00E-02 -7.02E-02 0.00E+00 1.00E-03
GW 3  21 -8.00E-02 -6.24E-02 0.00E+00 +8.00E-02 -6.24E-02 0.00E+00 1.00E-03
GW 4  21 -8.00E-02 -5.46E-02 0.00E+00 +8.00E-02 -5.46E-02 0.00E+00 1.00E-03
GW 5  21 -8.00E-02 -4.68E-02 0.00E+00 +8.00E-02 -4.68E-02 0.00E+00 1.00E-03
GW 6  21 -8.00E-02 -3.90E-02 0.00E+00 +8.00E-02 -3.90E-02 0.00E+00 1.00E-03
GW 7  21 -8.00E-02 -3.12E-02 0.00E+00 +8.00E-02 -3.12E-02 0.00E+00 1.00E-03
GW 8  21 -8.00E-02 -2.34E-02 0.00E+00 +8.00E-02 -2.34E-02 0.00E+00 1.00E-03
GW 9  21 -8.00E-02 -1.56E-02 0.00E+00 +8.00E-02 -1.56E-02 0.00E+00 1.00E-03
GW 10 21 -8.00E-02 -7.80E-03 0.00E+00 +8.00E-02 -7.80E-03 0.00E+00 1.00E-03
GW 11 21 -8.00E-02 +0.00E+00 0.00E+00 +8.00E-02 +0.00E+00 0.00E+00 1.00E-03
GW 12 21 -8.00E-02 +7.80E-03 0.00E+00 +8.00E-02 +7.80E-03 0.00E+00 1.00E-03
GW 13 21 -8.00E-02 +1.56E-02 0.00E+00 +8.00E-02 +1.56E-02 0.00E+00 1.00E-03
GW 14 21 -8.00E-02 +2.34E-02 0.00E+00 +8.00E-02 +2.34E-02 0.00E+00 1.00E-03
GW 15 21 -8.00E-02 +3.12E-02 0.00E+00 +8.00E-02 +3.12E-02 0.00E+00 1.00E-03
GW 16 21 -8.00E-02 +3.90E-02 0.00E+00 +8.00E-02 +3.90E-02 0.00E+00 1.00E-03
GW 17 21 -8.00E-02 +4.68E-02 0.00E+00 +8.00E-02 +4.68E-02 0.00E+00 1.00E-03
GW 18 21 -8.00E-02 +5.46E-02 0.00E+00 +8.00E-02 +5.46E-02 0.00E+00 1.00E-03
GW 19 21 -8.00E-02 +6.24E-02 0.00E+00 +8.00E-02 +6.24E-02 0.00E+00 1.00E-03
GW 20 21 -8.00E-02 +7.02E-02 0.00E+00 +8.00E-02 +7.02E-02 0.00E+00 1.00E-03
GW 21 21 -8.00E-02 +7.80E-02 0.00E+00 +8.00E-02 +7.80E-02 0.00E+00 1.00E-03
#
# Y axis
# GW Tag NS X1 Y1 Z1 X2 Y2 Z2 Radius
GW 22 21 -8.00E-02 -7.80E-02 0.00E+00 -8.00E-02 +7.80E-02 0.00E+00 1.00E-03
GW 23 21 -7.20E-02 -7.80E-02 0.00E+00 -7.20E-02 +7.80E-02 0.00E+00 1.00E-03
GW 24 21 -6.40E-02 -7.80E-02 0.00E+00 -6.40E-02 +7.80E-02 0.00E+00 1.00E-03
GW 25 21 -5.60E-02 -7.80E-02 0.00E+00 -5.60E-02 +7.80E-02 0.00E+00 1.00E-03
GW 26 21 -4.80E-02 -7.80E-02 0.00E+00 -4.80E-02 +7.80E-02 0.00E+00 1.00E-03
GW 27 21 -4.00E-02 -7.80E-02 0.00E+00 -4.00E-02 +7.80E-02 0.00E+00 1.00E-03
GW 28 21 -3.20E-02 -7.80E-02 0.00E+00 -3.20E-02 +7.80E-02 0.00E+00 1.00E-03
GW 29 21 -2.40E-02 -7.80E-02 0.00E+00 -2.40E-02 +7.80E-02 0.00E+00 1.00E-03
GW 30 21 -1.60E-02 -7.80E-02 0.00E+00 -1.60E-02 +7.80E-02 0.00E+00 1.00E-03
GW 31 21 -8.00E-03 -7.80E-02 0.00E+00 -8.00E-03 +7.80E-02 0.00E+00 1.00E-03
GW 32 21 +0.00E+00 -7.80E-02 0.00E+00 +0.00E+00 +7.80E-02 0.00E+00 1.00E-03
GW 33 21 +8.00E-03 -7.80E-02 0.00E+00 +8.00E-03 +7.80E-02 0.00E+00 1.00E-03
GW 34 21 +1.60E-02 -7.80E-02 0.00E+00 +1.60E-02 +7.80E-02 0.00E+00 1.00E-03
GW 35 21 +2.40E-02 -7.80E-02 0.00E+00 +2.40E-02 +7.80E-02 0.00E+00 1.00E-03
GW 36 21 +3.20E-02 -7.80E-02 0.00E+00 +3.20E-02 +7.80E-02 0.00E+00 1.00E-03
GW 37 21 +4.00E-02 -7.80E-02 0.00E+00 +4.00E-02 +7.80E-02 0.00E+00 1.00E-03
GW 38 21 +4.80E-02 -7.80E-02 0.00E+00 +4.80E-02 +7.80E-02 0.00E+00 1.00E-03
GW 39 21 +5.60E-02 -7.80E-02 0.00E+00 +5.60E-02 +7.80E-02 0.00E+00 1.00E-03
GW 40 21 +6.40E-02 -7.80E-02 0.00E+00 +6.40E-02 +7.80E-02 0.00E+00 1.00E-03
GW 41 21 +7.20E-02 -7.80E-02 0.00E+00 +7.20E-02 +7.80E-02 0.00E+00 1.00E-03
GW 42 21 +8.00E-02 -7.80E-02 0.00E+00 +8.00E-02 +7.80E-02 0.00E+00 1.00E-03
#
# Ground plane
# H = 5 mm, Feed = 16
# Frequency 940.000 MHz
# Resonance; the calculation explodes
#
# H = 7 mm, Feed = 16
# Frequency 940.000 MHz
# Feedpoint(1) - Z: (0.116 + i 133.600)    I: (0.0000 - i 0.0075)     VSWR(Zo=50 Ω): 99.0:1
# Antenna is in free space.
# Directivity:  7.68 dB
# Max gain: 12.54 dBi (azimuth 270 deg., elevation 60 deg.)
#
# SM NX NY X1 Y1 Z1 X2 Y2 Z2
# SC  0  0 X3 Y3 Z3
SM 25 25 -1.00E-01 -1.00E-01 -7.00E-03  +1.00E-01 -1.00E-01 -7.00E-03
SC  0  0 +1.00E-01 +1.00E-01 -7.00E-03
#
# Frequency 850.000 MHz - 3 dB down
# Feedpoint(1) - Z: (0.176 + i 129.320)    I: (0.0000 - i 0.0077)     VSWR(Zo=50 Ω): 99.0:1
# Antenna is in free space.
# Directivity:  7.42 dB
# Max gain: 9.54 dBi (azimuth 270 deg., elevation 60 deg.)
#
GE
#
# Frequency 940 MHz
FR     0     1     0      0   9.40E+02
#
# Excitation with voltage source
# EX 0 Tag Segment 0 1Volt
EX     0     16     11      0         1
#
# Plot 360 degrees
RP     0    90    90   1000         0         0         4         4      0
EN


Now you can go and get a coffee can and tin snips and have fun.  The trick is to space the tin plate with paper or plastic washers and glue it to the ground plane with two or four hot glue blobs on the corners; after hardening, remove the spacers.

For more information on what exactly to do with the contents of the coffee can, you can read this https://2b-alert-web.bhsai.org/2b-alert-web/login.xhtml

Once you have the first rectangular patch working in simulation, you can explore cutting the corners, or making slots in it, to get circular polarization for Satcom use.  You could also try drilling holes in two opposing corners and using those for little nylon bolts.  That could provide robust mounting and circular polarization, in one swell foop.

Circular Polarized Patch Array

With careful use of GM cards, you can replicate and rotate the patch to create an array of 4, 9 or 16 patches, and then tie them together with transmission line TL cards (the faint skew lines between the feed points in the picture below).  You can make the EM field rotate right or left depending on whether you feed at patch 1 or at patch 4.


A 24 dBi Quad Patch Array


An advantage of a quad array is that the impedance is much reduced, so you can hook it up with garden variety co-ax.

Obtaining 24 dBi from only four patches is very good - very well optimized.  Typical commercial quad patch antennas will yield 17 to 21 dBi.

A large patch array could create a very high gain assembly - a pencil beam - the complete design of which would require an export license, due to the Wassenaar Arrangement on dual use items.  Therefore I'll rather just stop here and not provide the complete design, before a black helicopter starts to follow me around.
;)

La Voila!

Herman

Friday, June 29, 2018

Mac or BSD

The eternal question:  Which is better - EMACS or Vi?
OK, this post is actually about the other eternal question!
As I use Linux, Mac, Open and Free BSD, I think I can answer objectively:
Both OpenBSD and FreeBSD are reasonably easy to download and install, and run on pretty much anything. At least, I have not found a server/desktop/laptop computer that they would not run on.  I even ran OpenBSD on a SPARCstation - remember those?

OpenBSD

Theo de Raadt has a 'cut the nonsense' mentality, so OpenBSD is simpler, with a smaller repository of programs, about 30,000 packages. However, with a little effort, you can install FreeBSD software on OpenBSD to get the rest. After a few days of use, you will know how.

FreeBSD

FreeBSD can also, with some effort, run Linux programs, and you can use a virtualizer to run other systems, so you are never locked into one thing.
In general, OpenBSD feels a lot like Slackware Linux: Simple and very fast.

MacOS

Compared to OpenBSD, Dragonfly and Slackware, some distributions look fancy and are very slow - there are many reasons why. MacOS obviously falls into the fancy and slow category. So if you want a Mac replacement then you first need to decide whether you want a fancy or a fast system.
My preference is to install a reasonably fast system on the host, then use a virtualizer for experiments and work and I frequently run multiple systems at the same time.  All the BSDs are good for that, be it Open, Free or Mac.
My home use system is a Macbook Pro running the latest MacOS with the Macports and Homebrew software repositories.  I even have the XFCE desktop installed, so when I get annoyed with the overbearing Mac GUI, I run XFCE, to get a weirdly satisfying Linux-Mac hybrid.


Linux

Linux is the stepchild of UNIX which took over the world: all of the Top 500 supercomputers now run Linux.  My work system is an ancient Dell T420 running the latest Fedora Linux on the host.  All my machines have Virtualbox and a zoo of virtual machines for the rest.

Note that the Mandatory Access Control security systems on Red Hat and Debian distributions slow them down a lot (on the order of 50%).  If you need a fast and responsive system and can afford to trade security for speed, then turn SELinux or AppArmor off.

Latency

For the control and remote sensing systems of robots, aircraft and rockets, the worst case OS latency matters very much.  For low latency, nothing beats Linux, since the whole kernel and all spinlocks are pre-emptible.

On average, all OS's have the same interrupt service latency - a few tens of nanoseconds.  However, every once in a while, the latency will be much worse.  In the case of Linux, the worst case will be below 1 ms, but for Win10 it can be 20 ms, and for Win7, 800 ms.  The trouble in robotics and remote sensing is that you need to design for the worst case.

I have observed a dotNet video player on Windows 7, after a couple of days of uptime, stop dead for two seconds every 8 seconds - obviously not good for a remote sensing application.  Windows 10 latency is much improved, though still a little worse than MacOS, which in turn has two orders of magnitude worse latency than Linux.

See this analysis: https://ennerf.github.io/2016/09/20/A-Practical-Look-at-Latency-in-Robotics-The-Importance-of-Metrics-and-Operating-Systems.html

The answer? It depends on what exactly you need to do with your system...
:)
Herman

Saturday, June 23, 2018

Compile The Latest Gstreamer From GIT

Compile The Latest gstreamer 1.15 on Ubuntu Linux 18.04 LTS

While working on a way to embed Key Length Value (KLV) metadata in a MPEG-2 TS video stream, I found that ffmpeg can copy and extract KLV, but cannot insert it.  There were some indications that the latest gstreamer has something under development, so I had to figure out how to compile gstreamer from the GIT repository, to get the latest mpegtsmux features.

The cryptic official gstreamer compile guide is here:
https://gstreamer.freedesktop.org/documentation/frequently-asked-questions/git.html#

As usual, the best way to do development work is on a virtual machine, so that you don't mess up your host.  I use Oracle Virtualbox on a Macbook Pro.  I downloaded Ubuntu Linux 18.04 LTS Server, made a 64 bit Virtualbox machine and installed the XFCE desktop, to get a light weight system that runs smoothly in a virtual environment.

The problem with the cryptic official guide is that it probably works on the machine of a developer who has been doing this for a few years, but on a fresh virtual machine, a whole zoo of dependencies is missing and will be discovered the hard way.

Install The GCC Compiler

If you haven't done so already, install a minimal desktop and the development tools:
$ sudo apt update 
$ sudo apt install xfce4
$ sudo apt install build-essential

Then log out and in again, to get your beautifully simple XFCE desktop with a minimum of toppings.

Prepare a Work Directory

Make a directory to work in:
$ cd
$ mkdir gstreamer
$ cd gstreamer

Dependencies

Set up all the dependencies that the official guide doesn't tell you about.   Some of these may pull in additional dependencies and others may not be strictly necessary, but it got me going:
$ sudo apt install gtk-doc-tools liborc-0.4-0 liborc-0.4-dev libvorbis-dev libcdparanoia-dev libcdparanoia0 cdparanoia libvisual-0.4-0 libvisual-0.4-dev libvisual-0.4-plugins libvisual-projectm vorbis-tools vorbisgain libopus-dev libopus-doc libopus0 libopusfile-dev libopusfile0 libtheora-bin libtheora-dev libtheora-doc libvpx-dev libvpx-doc libvpx? libqt5gstreamer-1.0-0 libgstreamer*-dev  libflac++-dev libavc1394-dev libraw1394-dev libraw1394-tools libraw1394-doc libraw1394-tools libtag1-dev libtagc0-dev libwavpack-dev wavpack

$ sudo apt install libfontconfig1-dev libfreetype6-dev libx11-dev libxext-dev libxfixes-dev libxi-dev libxrender-dev libxcb1-dev libx11-xcb-dev libxcb-glx0-dev

$ sudo apt install libxcb-keysyms1-dev libxcb-image0-dev libxcb-shm0-dev libxcb-icccm4-dev libxcb-sync0-dev libxcb-xfixes0-dev libxcb-shape0-dev libxcb-randr0-dev libxcb-render-util0-dev

$ sudo apt install libfontconfig1-dev libdbus-1-dev libfreetype6-dev libudev-dev

$ sudo apt install libasound2-dev libavcodec-dev libavformat-dev libswscale-dev libgstreamer*dev gstreamer-tools gstreamer*good gstreamer*bad

$ sudo apt install libicu-dev libsqlite3-dev libxslt1-dev libssl-dev

$ sudo apt install flex bison nasm

As you can see, the official guide is just ever so slightly insufficient.

Check Out Source Code From GIT

Now, after all the above preparations, you can check out the whole gstreamer extended family as in the official guide:
$ for module in gstreamer gst-plugins-base gst-plugins-good gst-plugins-ugly gst-plugins-bad gst-ffmpeg; do git clone git://anongit.freedesktop.org/gstreamer/$module ; done
...long wait...

See what happened:
$ ls
gst-ffmpeg  gst-plugins-bad  gst-plugins-base  gst-plugins-good  gst-plugins-ugly  gstreamer


Run The autogen.sh Scripts

Go into each directory and run ./autogen.sh.  If you get errors looking like 'nasm/yasm not found or too old... config.status: error: Failed to configure embedded Libav tree... configure failed', then of course you need to hunt down the missing package and add it with for example 'sudo apt install nasm', then try autogen.sh again.

Build and install the gstreamer and gst-plugins-base directories first, otherwise you will get a complaint about 'configure: Requested 'gstreamer-1.0 >= 1.15.0.1' but version of GStreamer is 1.14.0'.

You will get bazillions of compiler warnings, but you should not get any errors.  All errors need to be fixed somehow and patches submitted upstream, otherwise you won't get a useful resulting program, but the warnings you can leave to the project developers - let them eat their own dog food.  To me, warnings are a sign of sloppy code and I don't want to fix the slop of young programmers who haven't learned better yet:

$ cd gstreamer; ./autogen.sh 
$ make
$ sudo make install
$ cd ..

$ cd gst-plugins-base; ./autogen.sh
$ make
$ sudo make install
$ cd ..

Gstreamer has plugins that are in various stages of development/neglect, called The Good, The Bad and The Ugly.  Sometimes there is even a Very Ugly version.  These two linked movies are rather more entertaining than compiling gstreamer, so that will give you something to do on your other screen.

$ cd gst-plugins-good; ./autogen.sh
$ make
$ sudo make install
$ cd ..

$ cd gst-plugins-bad; ./autogen.sh 
$ make
$ sudo make install
$ cd ..

$ cd gst-plugins-ugly; ./autogen.sh
$ make
$ sudo make install
$ cd ..
 
$ cd gst-ffmpeg; ./autogen.sh
$ make
$ sudo make install
$ cd ..

The Proof Of The Pudding

Maybe mpegtsmux will work:
$ gst-inspect-1.0 mpegtsmux|grep klv
      meta/x-klv


To feed data into mpegtsmux, one needs the appsrc pad:
$ gst-inspect-1.0 appsrc
Factory Details:
  Rank                     none (0)
  Long-name                AppSrc
  Klass                    Generic/Source
  Description              Allow the application to feed buffers to a pipeline
 

One would need to stick a queue in there also, to decouple the video from the metadata.

Some more research is required to write a little application for this.
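As a starting point, the video side of such a pipeline can be sketched with gst-launch; the KLV branch would be an appsrc producing meta/x-klv buffers, driven from application code, so the sketch below only shows the mux and queue arrangement (element properties and file name are illustrative):

$ gst-launch-1.0 mpegtsmux name=mux ! filesink location=test.ts \
    videotestsrc num-buffers=250 ! x264enc tune=zerolatency ! queue ! mux.

In application code, a second branch along the lines of appsrc caps=meta/x-klv ! queue ! mux. would then feed the KLV buffers.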


La Voila!

Herman




Friday, May 18, 2018

Video Distribution With MPEG-2 Transport Streams

FFMPEG MPEG-2 TS Encapsulation

An observation aircraft could be fitted with three or four cameras and a radar.  In addition to the multiple video streams, there is also Key, Length, Value (KLV) metadata consisting of the time and date, the GPS position of the aircraft, the speed, heading and altitude, the position that the cameras are staring at and the range to the target, as well as the audio intercom used by the pilots and observers.  All this information needs to be combined into a single stream for distribution, so that the relationship between the various information sources is preserved.


Example UAV Video from FFMPEG Project

When the stream is recorded and played back later, one must still be able to determine, for example, which GPS position corresponds to which frame.  If one saved the data in separate files, that would become very difficult.  In a stream, everything is interleaved in chunks, so one can open the stream at any point and tell immediately exactly what happened, when and where.

The MPEG-2 TS container is used to encapsulate video, audio and metadata according to STANAG 4609.  This is similar to the Matroska format used for movies, but a movie has only one video channel.

The utilities and syntax required to manipulate encapsulated video streams are obscure and difficult to debug, since off the shelf video players do not support streams with multiple video substreams: they will only play one substream, with no way to select which one, since they were made for Hollywood movies, not STANAG 4609 movies.

After considerable head scratching, I finally figured out how to do it and, even more important, how to test and debug it.  Using the Bash shell and a few basic utilities, it is possible to sit at any UNIX workstation and debug this complex stream wrapper and metadata puzzle interactively.  Once one has it all under control, one can write a C program to do it faster, or one can just leave it as a Bash script once it is working, since it is easy to maintain.

References


Install the utilities

If you are using Debian or Ubuntu Linux, install the necessary tools with apt; other Linux distributions use dnf or their own package manager:
$ sudo apt install basez ffmpeg vlc mplayer espeak sox 

Note that these tests were done on Ubuntu Linux 18.04 LTS.  You can obtain the latest FFMPEG version from Git, by following the compile guide referenced above.  If you are using Windows, well, good luck.

Capture video for test purposes

Capture the laptop camera to a MP4 file in the simplest way:
$ ffmpeg -f v4l2 -i /dev/video0 c1.mp4

Make 4 camera files with different video sizes, so that one can distinguish them later.  Also make four numbered cards and hold them up to the camera to see easily which is which:

$ ffmpeg -f v4l2 -framerate 25 -video_size vga -pix_fmt yuv420p -i /dev/video0 -vcodec h264 c1.mp4
$ ffmpeg -f v4l2 -framerate 25 -video_size svga -pix_fmt yuv420p -i /dev/video0 -vcodec h264 c2.mp4
$ ffmpeg -f v4l2 -framerate 25 -video_size xga -pix_fmt yuv420p -i /dev/video0 -vcodec h264 c3.mp4
$ ffmpeg -f v4l2 -framerate 25 -video_size uxga -pix_fmt yuv420p -i /dev/video0 -vcodec h264 c4.mp4

 

Playback methods

SDL raises an error, unless pix_fmt is explicitly specified during playback: "Unsupported pixel format yuvj422p"

Here is the secret to play video with ffmpeg and SDL:
$ ffmpeg -i s2.mp4 -pix_fmt yuv420p -f sdl "SDL OUT"

...and here is the secret to play video with ffmpeg and X:
$ ffmpeg -i s2.mp4 -f xv Screen1 -f xv Screen2 

With X, you can decode the video once and display it on multiple screens, without increasing the processor load.  If you are a Windows user - please don't cry...

Play video with ffplay:
$ ffplay s2.mp4

ffplay also uses SDL, but it doesn’t respect the -map option for stream playback selection.  Ditto for VLC and Mplayer.

Some help with window_size / video_size:
-window_size vga
‘cif’ = 352x288
‘vga’ = 640x480
...

 

Map multiple video streams into one mpegts container

Documentation: https://trac.ffmpeg.org/wiki/Map

Map four video camera input files into one stream:
$ ffmpeg -i c1.mp4 -i c2.mp4 -i c3.mp4 -i c4.mp4 -map 0:v -map 1:v -map 2:v -map 3:v -c:v copy -f mpegts s4.mp4

 

See whether the mapping worked

Compare the file sizes:
$ ls -al
total 14224
drwxr-xr-x  2 herman herman    4096 May 18 13:19 .
drwxr-xr-x 16 herman herman    4096 May 18 11:19 ..
-rw-r--r--  1 herman herman 1113102 May 18 13:12 c1.mp4
-rw-r--r--  1 herman herman 2474584 May 18 13:13 c2.mp4
-rw-r--r--  1 herman herman 1305167 May 18 13:13 c3.mp4
-rw-r--r--  1 herman herman 2032543 May 18 13:14 c4.mp4
-rw-r--r--  1 herman herman 7621708 May 18 13:19 s4.mp4


The output file s4.mp4 is roughly the size of the four camera files combined, plus some multiplexing overhead.

 

Analyze the output stream file using ffmpeg

Run "ffmpeg -i INPUT" (not specify an output) to see what program IDs and stream IDs it contains:

$ ffmpeg -i s4.mp4
ffmpeg version 3.4.2-2 Copyright (c) 2000-2018 the FFmpeg developers
  built with gcc 7 (Ubuntu 7.3.0-16ubuntu2)
  configuration: --prefix=/usr --extra-version=2 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-
...snip...
Input #0, mpegts, from 's4.mp4':
  Duration: 00:00:16.60, start: 1.480000, bitrate: 3673 kb/s
  Program 1
    Metadata:
      service_name    : Service01
      service_provider: FFmpeg
    Stream #0:0[0x100]: Video: h264 (High 4:2:2) ([27][0][0][0] / 0x001B), yuvj422p(pc, progressive), 640x480 [SAR 1:1 DAR 4:3], 25 fps, 25 tbr, 90k tbn, 50 tbc
    Stream #0:1[0x101]: Video: h264 (High 4:2:2) ([27][0][0][0] / 0x001B), yuvj422p(pc, progressive), 960x540 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
    Stream #0:2[0x102]: Video: h264 (High 4:2:2) ([27][0][0][0] / 0x001B), yuvj422p(pc, progressive), 1024x576 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
    Stream #0:3[0x103]: Video: h264 (High 4:2:2) ([27][0][0][0] / 0x001B), yuvj422p(pc, progressive), 1280x720 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc

Running ffmpeg with no output shows that the streams have different resolutions and correspond to the original 4 files (640x480, 960x540, 1024x576, 1280x720).

 

Play or extract specific substreams

Play the best substream with SDL (uxga):
$ ffmpeg -i s4.mp4 -pix_fmt yuv420p -f sdl "SDL OUT"

Play the first substream (vga):
$ ffmpeg -i s4.mp4 -pix_fmt yuv420p -map v:0 -f sdl "SDL OUT"

Use -map v:0 through -map v:3 to play or extract the different video substreams.
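For example, to pull the third camera out into its own file (the output name is illustrative):

$ ffmpeg -i s4.mp4 -map v:2 -c copy out3.mp4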

Add audio and data to the mpegts stream:

Make two audio test files:
$ espeak "audio channel one, audio channel one, audio channel one" -w audio1.wav
$ espeak "audio channel two, audio channel two, audio channel two" -w audio2.wav


Convert the files from wav to m4a to be compliant with STANAG 4609:
$ ffmpeg -i audio1.wav -codec:a aac audio1.m4a
$ ffmpeg -i audio2.wav -codec:a aac audio2.m4a

Make two data test files:
$ echo "Data channel one. Data channel one. Data channel one." > data1.txt
$ echo "Data channel two. Data channel two. Data channel two." > data2.txt

 

Map video, audio and data into the mpegts stream

Map three video camera input files, two audio and one data stream into one mpegts stream:
$ ffmpeg -i c1.mp4 -i c2.mp4 -i c3.mp4 -i audio1.m4a -i audio2.m4a -f data -i data1.txt -map 0:v -map 1:v -map 2:v -map 3:a -map 4:a -map 5:d -c:v copy -c:d copy -f mpegts s6.mp4

(This shows that mapping data into a stream doesn't actually work yet - see below!) 

 

Verify the stream contents

See whether everything is actually in there:
$ ffmpeg -i s6.mp4
…snip...
[mpegts @ 0x55f2ba4e3820] start time for stream 5 is not set in estimate_timings_from_pts
Input #0, mpegts, from 's6.mp4':
  Duration: 00:00:16.62, start: 1.458189, bitrate: 2676 kb/s
  Program 1
    Metadata:
      service_name    : Service01
      service_provider: FFmpeg
    Stream #0:0[0x100]: Video: h264 (High 4:2:2) ([27][0][0][0] / 0x001B), yuvj422p(pc, progressive), 640x480 [SAR 1:1 DAR 4:3], 25 fps, 25 tbr, 90k tbn, 50 tbc
    Stream #0:1[0x101]: Video: h264 (High 4:2:2) ([27][0][0][0] / 0x001B), yuvj422p(pc, progressive), 960x540 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
    Stream #0:2[0x102]: Video: h264 (High 4:2:2) ([27][0][0][0] / 0x001B), yuvj422p(pc, progressive), 1024x576 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
    Stream #0:3[0x103](und): Audio: mp2 ([4][0][0][0] / 0x0004), 22050 Hz, mono, s16p, 160 kb/s
    Stream #0:4[0x104](und): Audio: mp2 ([4][0][0][0] / 0x0004), 22050 Hz, mono, s16p, 160 kb/s
    Stream #0:5[0x105]: Data: bin_data ([6][0][0][0] / 0x0006)

The ffmpeg analysis of the stream shows three video, two audio and one data substream.

 

Extract the audio and data from the stream

Extract and play one audio channel:
$ ffmpeg -i s6.mp4 -map a:0 aout1.m4a
$ ffmpeg -i aout1.m4a aout1.wav
$ play aout1.wav

and the other one:
$ ffmpeg -i s6.mp4 -map a:1 aout2.m4a
$ ffmpeg -i aout2.m4a aout2.wav
$ play aout2.wav

Extract the data

Extract the data using the -map d:0 parameter:
$ ffmpeg -i s6.mp4 -map d:0 -f data dout1.txt

...and nothing is copied.  The output file is zero length.

This means the original data was not inserted into the stream in the first place, so there is nothing to extract.

It turns out that while FFMPEG does support data copy, it doesn't support data insertion yet.  For the time being, one should either code it up in C using the API, or use Gstreamer to insert the data into the stream: https://developer.ridgerun.com/wiki/index.php/GStreamer_and_in-band_metadata#KLV_Key_Length_Value_Metadata

Extract KLV data from a real UAV video file

You can get a sample UAV observation file with video and metadata here:

$ wget http://samples.ffmpeg.org/MPEG2/mpegts-klv/Day%20Flight.mpg

Get rid of that stupid space in the file name:
$ mv Day[tab] DayFlight.mpg

The above file is perfect for metadata copy and extraction experiments:
$ ffmpeg -i DayFlight.mpg -map d:0 -f data dayflightklv.dat
...snip
 [mpegts @ 0x55cb74d6a900] start time for stream 1 is not set in estimate_timings_from_pts
Input #0, mpegts, from 'DayFlight.mpg':
  Duration: 00:03:14.88, start: 10.000000, bitrate: 4187 kb/s
  Program 1
    Stream #0:0[0x1e1]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(progressive), 1280x720, 60 fps, 60 tbr, 90k tbn, 180k tbc
    Stream #0:1[0x1f1]: Data: klv (KLVA / 0x41564C4B)
Output #0, data, to 'dout2.txt':
  Metadata:
    encoder         : Lavf57.83.100
    Stream #0:0: Data: klv (KLVA / 0x41564C4B)
Stream mapping:
  Stream #0:1 -> #0:0 (copy)
Press [q] to stop, [?] for help
size=       1kB time=00:00:00.00 bitrate=N/A speed=   0x   
video:0kB audio:0kB subtitle:0kB other streams:1kB global headers:0kB muxing overhead: 0.000000%


Dump the KLV file in hexadecimal:
$ hexdump dayflightklv.dat
0000000 0e06 342b 0b02 0101 010e 0103 0001 0000
0000010 9181 0802 0400 8e6c 0320 8583 0141 0501
0000020 3d02 063b 1502 0780 0102 0b52 4503 4e4f
0000030 0e0c 6547 646f 7465 6369 5720 5347 3438
0000040 040d c44d bbdc 040e a8b1 fe6c 020f 4a1f
0000050 0210 8500 0211 4b00 0412 c820 7dd2 0413
0000060 ddfc d802 0414 b8fe 61cb 0415 8f00 613e
0000070 0416 0000 c901 0417 dd4d 2a8c 0418 beb1
0000080 f49e 0219 850b 0428 dd4d 2a8c 0429 beb1

...snip 

Sneak a peek for interesting text strings:

$ strings dayflightklv.dat
KLVA'   

BNZ
Bms
JUD
07FEB
5g|IG

...snip

Cool, it works!

KLV Data Debugging

The KLV data is actually what got me started with this in the first place.   The basic problem is how to ensure that the GPS data is saved with the video, so that one can tell where the plane was and what it was looking at, when a recording is played back later.

The transport of KLV metadata over MPEG-2 transport streams in an asynchronous manner is defined in SMPTE RP 217 and MISB ST0601.8:
http://www.gwg.nga.mil/misb/docs/standards/ST0601.8.pdf

Here is a more human friendly description:
https://impleotv.com/2017/02/17/klv-encoded-metadata-in-stanag-4609-streams/

You can make a short form metadata KLV LS test message using the echo \\x command to output binary values to a file.  Working with binary data in Bash is problematic, but one just needs to know what the limitations are (zeroes, line feeds and carriage return characters may disappear, for example): don't store binary data in a shell variable (use a file) and don't do shell arithmetic on it; use the calculator bc or awk instead.

The key, length and date are in this example, but I'm still working on the checksum calculation and the byte orders are probably not correct.  It only gives the general idea of how to do it at this point:

# Universal Key for Local Data Set
echo -en \\x06\\x0E\\x2B\\x34\\x02\\x0B\\x01\\x01 > klvdata.dat
echo -en \\x0E\\x01\\x03\\x01\\x01\\x00\\x00\\x00 >> klvdata.dat
# Length 76 bytes for short packet
echo -en \\x4c >> klvdata.dat
# Value: First ten bytes is the UNIX time stamp, tag 2, length 8, 8 byte time
echo -en \\x02\\x08 >> klvdata.dat
printf "%0d" "$(date +%s)" >> klvdata.dat
echo -en \\x00\\x01\\x02\\x03\\x04\\x05\\x06\\x07\\x08\\x09 >> klvdata.dat
echo -en \\x00\\x01\\x02\\x03\\x04\\x05\\x06\\x07\\x08\\x09 >> klvdata.dat
echo -en \\x00\\x01\\x02\\x03\\x04\\x05\\x06\\x07\\x08\\x09 >> klvdata.dat
echo -en \\x00\\x01\\x02\\x03\\x04\\x05\\x06\\x07\\x08\\x09 >> klvdata.dat
echo -en \\x00\\x01\\x02\\x03\\x04\\x05\\x06\\x07\\x08\\x09 >> klvdata.dat
echo -en \\x00\\x01\\x02\\x03\\x04\\x05\\x06\\x07\\x08\\x09 >> klvdata.dat
echo -en \\x00\\x01 >> klvdata.dat
# Checksum tag 1, length 2
echo -en \\x01\\x02 >> klvdata.dat
# Calculate 2 byte sum with bc
echo -en \\x04\\x05 >> klvdata.dat

The UTC time stamp since the Epoch (1 Jan 1970) must be the first data field:
$ printf "%0d" "$(date +%s)" | hexdump
0000000 3531 3632 3237 3838 3030              
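Per MISB ST0601, the timestamp should actually be an 8 byte, big endian count of microseconds since the Epoch - the printf above produces ASCII digits, as the hexdump shows.  A sketch of how to emit it in binary instead (whole seconds multiplied by one million, so the microsecond part is zero):

$ t=$(( $(date +%s) * 1000000 ))
$ printf "$(printf '%016x' "$t" | sed 's/../\\x&/g')" >> klvdata.dat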

The checksum is a doozy.  It is a 16 bit sum of everything excluding the sum itself and would need the help of the command line calculator bc: one has to read two bytes at a time, swap them around (probably), convert the binary to hex text, do the calculation in bc and eventually write the result back to the file in binary.  I would need a very big mug of coffee to get that working.
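Actually, od and awk can do the arithmetic without bc.  A sketch, assuming the ST0601 convention where the first byte of each pair is weighted into the high byte of the running 16 bit sum:

$ od -An -v -tu1 klvdata.dat | awk '
    { for (i = 1; i <= NF; i++) byte[n++] = $i }
    END {
        # 16 bit sum over everything except the last two (checksum) bytes
        for (i = 0; i < n - 2; i++)
            bcc = (bcc + (i % 2 == 0 ? byte[i] * 256 : byte[i])) % 65536
        printf "checksum = 0x%04X\n", bcc
    }'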

Multicast Routing

Note that multicast routing is completely different from unicast routing.  A multicast packet is not addressed to a specific host; instead, it carries a group address, which is mapped onto a special multicast MAC address.  To receive a stream, a host has to subscribe to the group with IGMP.

Here, there be dragons.

If you need to route video between two subnets, then you should consider sparing yourself the head-ache and rather use unicast streaming.  Otherwise, you would need an expensive switch from Cisco, or HPE, or OpenBSD with dvmrpd.

Linux multicast routing is not recommended, for three reasons: no documentation, no support and buggy router code.  Windows cannot route it at all and FreeBSD needs to be recompiled for multicast routing.  Only OpenBSD supports multicast routing out of the box.

Do not meddle in the affairs of dragons,
for you are crunchy
and taste good with ketchup.

Also consider that UDP multicast packets are sent with a default Time To Live of 1, meaning that they will be dropped at the first router.  Therefore the sender has to increase the TTL if the stream must cross a router.

If you need to use OpenBSD, do get a copy of Absolute OpenBSD - UNIX for the Practically Paranoid, by M.W. Lucas.


Sigh...

Herman

Saturday, April 7, 2018

Raspberry Pi Video Streaming

I would like to send video over a satellite modem, but these things are as slow as 1990s era dial-up modems.  HD video with the H.264 codec streams at 2 to 3 Mbps, so the amount of data must be reduced by a factor of ten or twenty for low speed satcom.

Instead of running at 30 fps, one should stream at 1 or 2 fps, but most off the shelf video encoder/decoder devices don't know how to do that, so I dug a Raspberry Pi v3 and a v1.2 camera out of my toy box, installed gstreamer and started tinkering on my Mac to gain some experience with the problem.

Of course one can do the exact same thing on a Linux laptop PC, but what would be the fun in that?

Figure 1 - A Low Rate Video Test Pattern

With the gstreamer videorate plugin, one can change the frame rate to almost any value and cranking it down to 1 or 2 fps is no problem.  One could go down to a few frames per minute, but super slow streaming could cause a playback synchronization issue, because the player error handler may time out before it manages to synchronize.

Also note that satcom systems spoof the TCP ACK packets locally, to speed things up a bit.  This means that TCP and UDP work the same over a satcom link.

Bake a Pi

Get your RPi3 from here:
https://www.sparkfun.com/products/13826

Download a Raspbian image zip file from here:
https://www.raspberrypi.org/downloads/

Open a terminal and unzip with:
$ ark --batch filename.zip

Write the image to a SD card:
$ dd if=filename.img of=/dev/mmcblk0

Mount the rootfs partition on the SD card, then enable sshd in /etc/rc.local.
Add the following line at the bottom of rc.local, just before the exit statement:
systemctl start ssh

Mount the boot partition on the SD card, then configure the file cmdline.txt to use traditional ethernet device names, so that the ethernet device will be named eth0, the way the UNIX gods intended:
Add "net.ifnames=0 biosdevname=0" to the end of cmdline.txt.

Boot Up

Put the SD card in the Pi,  boot up and configure it:
$ sudo raspi-config
Update
Expand root file system
Set hostname to videopi
Enable camera


Add a static IP address in addition to that assigned by DHCP:
$ sudo ip addr add 192.168.1.10/24 dev eth0

Install video utilities:
$ sudo apt-get install ffmpeg
$ sudo apt-get install gstreamer1.0-plugins-*

 

The Rpi Camera

The default kernel includes the v4l2 driver and the latest Raspbian image includes the v4l2 utilities (e.g. v4l2-ctl).

Camera Test:
$ raspistill -o pic.jpg

Load the V4L2 module
$ sudo modprobe bcm2835-v4l2

Add the above to /etc/rc.local to enable the /dev/video0 device at startup.

 

Multicast Configuration

http://unixadminschool.com/blog/2014/03/rhel-what-is-multicast-and-how-to-configure-network-interface-with-multicast-address/

Enable multicasting:
$ sudo ifconfig eth0 multicast

Add a multicast route, since without it, multicasting will not work:
$ sudo route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0

$ sudo ip address show
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether b8:27:eb:9b:b3:e0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.104/24 brd 192.168.1.255 scope global enxb827eb9bb3e0
       valid_lft forever preferred_lft forever
    inet 224.0.1.10/32 scope global enxb827eb9bb3e0
       valid_lft forever preferred_lft forever
    inet 192.168.1.4/24 scope global secondary enxb827eb9bb3e0
       valid_lft forever preferred_lft forever
    inet6 fe80::e774:95b:c83c:6e32/64 scope link
       valid_lft forever preferred_lft forever


And check the route settings with
$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
224.0.0.0       0.0.0.0         240.0.0.0       U     0      0        0 eth0

# netstat -g
IPv6/IPv4 Group Memberships
Interface       RefCnt Group
--------------- ------ ---------------------
lo              1      224.0.0.1
enxb827eb9      1      224.0.0.251
enxb827eb9      1      224.0.0.1
wlan0           1      224.0.0.1
lo              1      ip6-allnodes
lo              1      ff01::1
eth0            1      ff02::fb
eth0            1      ff02::1:ff3c:6e32
eth0            1      ip6-allnodes
eth0            1      ff01::1
wlan0           1      ip6-allnodes
wlan0           1      ff01::1

 

Configuration of rc.local

The bottom of /etc/rc.local should look like this:
# Start the SSH daemon
systemctl start ssh

# Load the V4L2 camera device driver
modprobe bcm2835-v4l2

# Add a static IP address in addition to that assigned by DHCP
ip addr add 192.168.1.10/24 dev eth0

# Enable multicasting
ifconfig eth0 multicast

# Add a multicast route
route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0

exit 0

Now, when you restart the Pi, it should be ready to stream video.  You could edit the above on the Pi with the nano editor, or move the SD card back to your desktop computer, mount the rootfs partition and edit the file there.  This is a MAJOR advantage of the Pi architecture: if you mess something up and the Pi won't boot, then you can fix the SD card based system on another machine.  Also, to replicate a Pi, just make a backup and copy the SD card.

Streaming Video

The problem with setting up a streaming system is that there are many pads and each pad has many options.  These options don't necessarily work together and finding a combination that does approximately what you need, can be very time consuming.

However, the defaults usually work.  So the best approach is to make a basic stream, get it to work and only then start to experiment, while keeping careful notes of what works and what doesn't.

Howtos:
http://www.einarsundgren.se/gstreamer-basic-real-time-streaming-tutorial/

https://cgit.freedesktop.org/gstreamer/gst-plugins-good/tree/gst/rtp/README#n251


If necessary, install gstreamer:
$ sudo apt-get install gstreamer1.0-tools

Install all gstreamer plugins:
$ sudo apt-get install gst*plugin*

The simplest test:
$ gst-launch-1.0 videotestsrc ! autovideosink
and
$ gst-launch-1.0 v4l2src device=/dev/video0 ! autovideosink


A simple raw stream gives me 'message too long' errors.  The solution is the 'chopmydata' plugin.

 

MJPEG Streaming Examples

To verify that the Pi is streaming, run tcpdump on a second terminal:
$ sudo tcpdump -nlX port 5000
 
Stream a test pattern with motion jpeg:
$ gst-launch-1.0 videotestsrc ! jpegenc ! chopmydata max-size=9000 ! udpsink host=224.0.1.10 port=5000

If you enable the WiFi access point feature with raspi-config, then the Pi can make a fairly decent home security or toy drone camera, which only needs a power cable.  With multicast streaming, you can connect from multiple computers on the LAN simultaneously.

Stream the Pi Camera with motion jpeg:
$ gst-launch-1.0 v4l2src device=/dev/video0 ! jpegenc ! chopmydata max-size=9000 ! udpsink host=224.0.1.10 port=5000

If tcpdump shows that the Pi is streaming, then start a player on another machine.
Play the video with ffplay on another machine:
$ ffplay -f mjpeg udp://224.0.1.10:5000

The VLC player should also work, but stupid players like MS Media Player or Apple Quicktime will not be able to figure out what to do with a simple raw UDP stream.  These players need RTP to tell them what to do.

Also note that, by convention, RTP uses an EVEN port number, with RTCP on the next higher ODD port.  I am not to reason why...

MJPEG Low Frame Rate Examples

https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-base-plugins/html/gst-plugins-base-plugins-videorate.html

https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-good/html/gst-plugins-good-plugins-jpegenc.html

The basic framerate pad for 10 fps is:
! video/x-raw,framerate=10/1 !

You can also scale it at the same time:
! video/x-raw,width=800,height=600,framerate=10/1 !

When you slow things down a lot, every frame is different.  Consequently the H.264 codec will not work well, so I selected the Motion JPEG jpegenc codec instead.

This low rate videorate stream works:
gst-launch-1.0 v4l2src device=/dev/video0 ! \
    video/x-raw,width=800,height=600,framerate=10/1 ! jpegenc ! \
    chopmydata max-size=9000 ! udpsink host=224.0.1.10 port=5000

Play it with ffplay on another machine:
$ ffplay -f mjpeg udp://224.0.1.10:5000

You may have to install FFMPEG on the player machines:
$ sudo apt-get install ffmpeg

There is a statically compiled version of FFMPEG for Windows.  Search online for "zeranoe ffmpeg" to find it.

If you make the frame rate very slow, then it will take ffplay very long to synchronize, but it should eventually pop up and play.

Example H.264 MPEG-2 TS Pipelines

The H.264 codec should only be used raw, or with Matroska, MP4/QuickTime or MPEG-2 TS encapsulation:
https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-ugly-plugins/html/gst-plugins-ugly-plugins-x264enc.html

This raw x264 stream works:
$ gst-launch-1.0 videotestsrc num-buffers=1000 ! x264enc ! udpsink host=224.0.1.10 port=5000

It plays with this:
$ ffplay -f h264 udp://224.0.1.10:5000

This x264 MPEG-2 TS encapsulated stream works, but with too much latency:

$ gst-launch-1.0 videotestsrc  ! x264enc ! mpegtsmux ! udpsink host=224.0.1.10 port=5000

or
$ gst-launch-1.0 v4l2src device=/dev/video0 ! x264enc ! mpegtsmux ! udpsink host=224.0.1.10 port=5000

It plays with this:
$ ffplay udp://224.0.1.10:5000

There are various ways to optimize:
bitrate=128 - for low rate coding

tune=zerolatency - to prevent look-ahead and get frames out ASAP

An optimized x264 pad could look like this:

! x264enc bitrate=512 speed-preset=superfast tune=zerolatency !

The zerolatency parameter actually helps:
$ gst-launch-1.0 v4l2src device=/dev/video0 ! x264enc speed-preset=superfast tune=zerolatency ! udpsink host=224.0.1.10 port=5000

With playback like this:
$ ffplay -f h264 -vf "setpts=PTS/4" udp://224.0.1.10:5000

However, the moment I add the mpegtsmux, it is too much for the Pi to handle.   One would need to hook up a second Pi to convert the raw stream to an encapsulated MPEG-2 TS stream, or use a faster computer.
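A sketch of what that second box could run, picking up the raw H.264 stream and re-wrapping it into MPEG-2 TS (the port numbers and caps string are illustrative):

$ gst-launch-1.0 udpsrc port=5000 caps="video/x-h264,stream-format=byte-stream" ! \
    h264parse ! mpegtsmux ! udpsink host=224.0.1.10 port=5002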

Processor Load

I found that the processor load is about 30% with MJPEG, so a little Pi is perfectly fine for streaming video from a single camera, if one uses a simple codec.

The x264 codec is a hungry beast and consumes 360% CPU according to top, which means all 4 cores are running balls to the wall, indicating that this codec is not really suitable for a little Pi v3 processor.  Nevertheless, it shows that one doesn't have to do H.264 encoding in an FPGA - it can be done on a small embedded processor.

Multicast Routing

Note that multicast routing is completely different from unicast routing.  A multicast packet is not addressed to a specific host; instead, it carries a group address, which is mapped onto a special multicast MAC address.  To receive a stream, a host has to subscribe to the group with IGMP.

Here, there be dragons.

If you need to route video between two subnets, then you should consider sparing yourself the head-ache and rather use unicast streaming.  Otherwise, you would need an expensive switch from Cisco, or HPE, or OpenBSD with dvmrpd.

Linux multicast routing is not recommended, for three reasons: no documentation, no support and buggy router code.  Windows cannot route it at all and FreeBSD needs to be recompiled for multicast routing.  Only OpenBSD supports multicast routing out of the box.

Do not meddle in the affairs of dragons,
for you are crunchy
and taste good with ketchup.

Also consider that UDP multicast packets are sent with a default Time To Live of 1, meaning that they will be dropped at the first router.  Therefore the sender has to increase the TTL if the stream must cross a router.

If you need to use OpenBSD, do get a copy of Absolute OpenBSD - UNIX for the Practically Paranoid, by M.W. Lucas.

Errata

Note that if you don't put a framerate pad in a simple stream, then the presentation time stamps in the stream are wrong or missing, causing the video to play back in slow motion, which can be very befuddling to the uninitiated.

This workaround can make a simple stream play in real-time:
$ ffplay -f mjpeg -vf "setpts=PTS/2" udp://224.0.1.10:5000

Sometimes ffplay is not part of the FFMPEG installation.  If you have this problem and don't want to compile it from source, then you can use ffmpeg with SDL as below, which is what ffplay does anyway.

Play a stream using FFMPEG and SDL to render it to the default screen:
$ ffmpeg -i udp://224.0.1.10:5000 -f sdl -

You could also play the video with Mplayer:
$ mplayer -benchmark udp://224.0.1.10:5000

Of course you can use gstreamer to play it, but I prefer using a different tool for playback as a kind of error check.

I also could not get the WiFi device to work as an access point with hostapd.

Some more head scratching is required.


La Voila!

Herman