Friday, June 29, 2018

Mac or BSD

The eternal question:  Which is better - EMACS or Vi?
OK, this post is actually about the other eternal question!
As I use Linux, Mac, OpenBSD and FreeBSD, I think I can answer objectively:
Both OpenBSD and FreeBSD are reasonably easy to download, install and run on pretty much anything. At least, I have not found a server/desktop/laptop computer that they would not run on.  I even ran OpenBSD on a SPARCstation - remember those?


Theo de Raadt has a 'cut the nonsense' mentality, so OpenBSD is simpler, with a smaller repository of about 30,000 packages. However, with a little effort, you can install FreeBSD software on OpenBSD to get the rest. After a few days of use, you will know how.


FreeBSD can also, with some effort, run Linux programs, and you can use a virtualizer to run other systems, so you are never locked into one thing.
In general, OpenBSD feels a lot like Slackware Linux: Simple and very fast.


By comparison, other distributions look fancy and are very slow - there are many reasons why. MacOS obviously falls into the fancy and slow category. So if you want a Mac replacement then you first need to decide whether you want a fancy or a fast system.
My preference is to install a reasonably fast system on the host, then use a virtualizer for experiments and work and I frequently run multiple systems at the same time.  All the BSDs are good for that, be it Open, Free or Mac.
My home-use system is a MacBook Pro running the latest macOS with the MacPorts and Homebrew software repositories.  I even have the XFCE desktop installed, so when I get annoyed with the overbearing Mac GUI, I run XFCE to get a weirdly satisfying Linux-Mac hybrid.


Linux is the stepchild of UNIX that took over the world: all of the Top 500 supercomputers now run Linux.  My work system is an ancient Dell T420 running the latest Fedora Linux on the host.  All my machines have VirtualBox and a zoo of virtual machines for the rest.
The answer? It depends...

Saturday, June 23, 2018

Compile The Latest Gstreamer From GIT

Compile The Latest gstreamer 1.15 on Ubuntu Linux 18.04 LTS

While working on a way to embed Key Length Value (KLV) metadata in an MPEG-2 TS video stream, I found that ffmpeg can copy and extract KLV, but cannot insert it.  There were some indications that the latest gstreamer has something under development, so I had to figure out how to compile gstreamer from the Git repository to get the latest mpegtsmux features.

The cryptic official gstreamer compile guide is here:

As usual, the best way to do development work is on a virtual machine, so that you don't mess up your host.  I use Oracle VirtualBox on a MacBook Pro.  I downloaded Ubuntu Linux 18.04 LTS Server, made a 64-bit VirtualBox machine and installed the XFCE desktop, to get a lightweight system that runs smoothly in a virtual environment.

The problem with the cryptic official guide is that it probably works on the machine of a developer who has been doing this for a few years, but on a fresh virtual machine a whole zoo of dependencies is missing and will be discovered the hard way.

Install The GCC Compiler

If you haven't done so already, install a minimal desktop and the development tools:
$ sudo apt update 
$ sudo apt install xfce4
$ sudo apt install build-essential

Then log out and in again, to get your beautifully simple XFCE desktop with a minimum of toppings.

Prepare a Work Directory

Make a directory to work in:
$ cd
$ mkdir gstreamer
$ cd gstreamer


Set up all the dependencies that the official guide doesn't tell you about.   Some of these may pull in additional dependencies and others may not be strictly necessary, but it got me going:
$ sudo apt install gtk-doc-tools liborc-0.4-0 liborc-0.4-dev libvorbis-dev libcdparanoia-dev libcdparanoia0 cdparanoia libvisual-0.4-0 libvisual-0.4-dev libvisual-0.4-plugins libvisual-projectm vorbis-tools vorbisgain libopus-dev libopus-doc libopus0 libopusfile-dev libopusfile0 libtheora-bin libtheora-dev libtheora-doc libvpx-dev libvpx-doc libvpx? libqt5gstreamer-1.0-0 libgstreamer*-dev  libflac++-dev libavc1394-dev libraw1394-dev libraw1394-tools libraw1394-doc libraw1394-tools libtag1-dev libtagc0-dev libwavpack-dev wavpack

$ sudo apt install libfontconfig1-dev libfreetype6-dev libx11-dev libxext-dev libxfixes-dev libxi-dev libxrender-dev libxcb1-dev libx11-xcb-dev libxcb-glx0-dev

$ sudo apt install libxcb-keysyms1-dev libxcb-image0-dev libxcb-shm0-dev libxcb-icccm4-dev libxcb-sync0-dev libxcb-xfixes0-dev libxcb-shape0-dev libxcb-randr0-dev libxcb-render-util0-dev

$ sudo apt install libfontconfig1-dev libdbus-1-dev libfreetype6-dev libudev-dev

$ sudo apt install libasound2-dev libavcodec-dev libavformat-dev libswscale-dev libgstreamer*dev gstreamer-tools gstreamer*good gstreamer*bad

$ sudo apt install libicu-dev libsqlite3-dev libxslt1-dev libssl-dev

$ sudo apt install flex bison nasm

As you can see, the official guide is just ever so slightly insufficient.

Check Out Source Code From GIT

Now, after all the above preparations, you can check out the whole gstreamer extended family as in the official guide:
$ for module in gstreamer gst-plugins-base gst-plugins-good gst-plugins-ugly gst-plugins-bad gst-ffmpeg; do git clone git://anongit.freedesktop.org/gstreamer/$module ; done
...long wait...

See what happened:
$ ls
gst-ffmpeg  gst-plugins-bad  gst-plugins-base  gst-plugins-good  gst-plugins-ugly  gstreamer

Run The Scripts

Go into each directory and run ./autogen.sh.  If you get errors looking like 'nasm/yasm not found or too old... config.status: error: Failed to configure embedded Libav tree... configure failed', then of course you need to hunt down the missing package and add it with, for example, 'sudo apt install nasm', then try again.

Build and install the gstreamer and gst-plugins-base directories first, otherwise you will get a complaint about 'configure: Requested 'gstreamer-1.0 >=' but version of GStreamer is 1.14.0'.
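That version complaint usually means pkg-config is still finding the distro's GStreamer 1.14 instead of the freshly installed one.  A possible workaround (assuming the default autogen.sh prefix of /usr/local; adjust if you configured a different prefix):

```shell
# Make the freshly installed copy in /usr/local win over the distro copy,
# both at configure time (pkg-config) and at run time (dynamic linker):
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
```

Put these in your shell before running the remaining ./autogen.sh steps.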

You will get bazillions of compiler warnings, but you should not get any errors.  All errors need to be fixed somehow and patches submitted upstream, otherwise you won't get a useful resulting program, but the warnings you can leave to the project developers - let them eat their own dog food.  To me, warnings are a sign of sloppy code and I don't want to fix the slop of young programmers who haven't learned better yet:

$ cd gstreamer; ./autogen.sh
$ make
$ sudo make install
$ cd ..

$ cd gst-plugins-base; ./autogen.sh
$ make
$ sudo make install
$ cd ..

Gstreamer has plugins that are in various stages of development/neglect, called The Good, The Bad and The Ugly.  Sometimes there is even a Very Ugly version.  Those two movies are rather more entertaining than compiling gstreamer, so they will give you something to watch on your other screen.

$ cd gst-plugins-good; ./autogen.sh
$ make
$ sudo make install
$ cd ..

$ cd gst-plugins-bad; ./autogen.sh
$ make
$ sudo make install
$ cd ..

$ cd gst-plugins-ugly; ./autogen.sh
$ make
$ sudo make install
$ cd ..
$ cd gst-ffmpeg; ./autogen.sh
$ make
$ sudo make install
$ cd ..

The Proof Of The Pudding

Maybe mpegtsmux will work:
$ gst-inspect-1.0 mpegtsmux|grep klv

To feed data into mpegtsmux, one needs the appsrc pad:
$ gst-inspect-1.0 appsrc
Factory Details:
  Rank                     none (0)
  Long-name                AppSrc
  Klass                    Generic/Source
  Description              Allow the application to feed buffers to a pipeline

One would need to stick a queue in there also, to decouple the video from the metadata.

Some more research is required to write a little application for this.

La Voila!


Friday, May 18, 2018

Video Distribution With MPEG-2 Transport Streams

FFMPEG MPEG-2 TS Encapsulation

An observation aircraft could be fitted with three or four cameras and a radar.  In addition to the multiple video streams, there are also Key, Length, Value (KLV) metadata consisting of the time and date, the GPS position of the aircraft, the speed, heading and altitude, the position that the cameras are staring at, the range to the target, as well as the audio intercom used by the pilots and observers.  All this information needs to be combined into a single stream for distribution, so that the relationship between the various information sources is preserved.

Example UAV Video from FFMPEG Project

When the stream is recorded and played back later, one must still be able to determine which GPS position corresponds to which frame for example.  If one would save the data in separate files, then that becomes very difficult.  In a stream, everything is interleaved in chunks, so one can open the stream at any point and tell immediately exactly what happened, when and where.

The MPEG-2 TS container is used to encapsulate video, audio and metadata according to STANAG 4609.  This is similar to the Matroska format used for movies, but a movie has only one video channel.

The utilities required to manipulate encapsulated video streams, and their syntax, are obscure and difficult to debug, since off-the-shelf video players do not support streams with multiple video substreams: they will only play one of the substreams, with no way to select which one, since they were made for Hollywood movies, not STANAG 4609 movies.

After considerable head scratching, I finally figured out how to do it and, even more important, how to test and debug it.  Using the Bash shell and a few basic utilities, it is possible to sit at any UNIX workstation and debug this complex stream wrapper and metadata puzzle interactively.  Once one has it all under control, one can write a C program to do it faster, or one can just leave it as a Bash script once it is working, since it is easy to maintain.


Install the utilities

If you are using Debian or Ubuntu Linux, install the necessary tools with apt (other Linux distributions use dnf or similar):
$ sudo apt install basez ffmpeg vlc mplayer espeak sox 

Note that these tests were done on Ubuntu Linux 18.04 LTS.  You can obtain the latest FFMPEG version from Git by following the compile guide referenced above.  If you are using Windows, well, good luck.

Capture video for test purposes

Capture the laptop camera to a MP4 file in the simplest way:
$ ffmpeg -f v4l2 -i /dev/video0 c1.mp4

Make 4 camera files with different video sizes, so that one can distinguish them later.  Also make four numbered cards and hold them up to the camera to see easily which is which:

$ ffmpeg -f v4l2 -framerate 25 -video_size vga -pix_fmt yuv420p -i /dev/video0 -vcodec h264 c1.mp4
$ ffmpeg -f v4l2 -framerate 25 -video_size svga -pix_fmt yuv420p -i /dev/video0 -vcodec h264 c2.mp4
$ ffmpeg -f v4l2 -framerate 25 -video_size xga -pix_fmt yuv420p -i /dev/video0 -vcodec h264 c3.mp4
$ ffmpeg -f v4l2 -framerate 25 -video_size uxga -pix_fmt yuv420p -i /dev/video0 -vcodec h264 c4.mp4


Playback methods

SDL raises an error, unless pix_fmt is explicitly specified during playback: "Unsupported pixel format yuvj422p"

Here is the secret to play video with ffmpeg and SDL:
$ ffmpeg -i s2.mp4 -pix_fmt yuv420p -f sdl "SDL OUT"

...and here is the secret to play video with ffmpeg and X:
$ ffmpeg -i s2.mp4 -f xv Screen1 -f xv Screen2 

With X, you can decode the video once and display it on multiple screens, without increasing the processor load.  If you are a Windows user - please don't cry...

Play video with ffplay:
$ ffplay s2.mp4

ffplay also uses SDL, but it doesn’t respect the -map option for stream playback selection.  Ditto for VLC and Mplayer.

Some help with window_size / video_size:
-window_size vga
‘cif’ = 352x288
‘vga’ = 640x480


Map multiple video streams into one mpegts container


Map four video camera input files into one stream:
$ ffmpeg -i c1.mp4 -i c2.mp4 -i c3.mp4 -i c4.mp4 -map 0:v -map 1:v -map 2:v -map 3:v -c:v copy -f mpegts s4.mp4


See whether the mapping worked

Compare the file sizes:
$ ls -al
total 14224
drwxr-xr-x  2 herman herman    4096 May 18 13:19 .
drwxr-xr-x 16 herman herman    4096 May 18 11:19 ..
-rw-r--r--  1 herman herman 1113102 May 18 13:12 c1.mp4
-rw-r--r--  1 herman herman 2474584 May 18 13:13 c2.mp4
-rw-r--r--  1 herman herman 1305167 May 18 13:13 c3.mp4
-rw-r--r--  1 herman herman 2032543 May 18 13:14 c4.mp4
-rw-r--r--  1 herman herman 7621708 May 18 13:19 s4.mp4

The size of the output file s4.mp4 is roughly the sum of the four camera files above, plus multiplexing overhead.


Analyze the output stream file using ffmpeg

Run "ffmpeg -i INPUT" (without specifying an output) to see what program IDs and stream IDs it contains:

$ ffmpeg -i s4.mp4
ffmpeg version 3.4.2-2 Copyright (c) 2000-2018 the FFmpeg developers
  built with gcc 7 (Ubuntu 7.3.0-16ubuntu2)
  configuration: --prefix=/usr --extra-version=2 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-
Input #0, mpegts, from 's4.mp4':
  Duration: 00:00:16.60, start: 1.480000, bitrate: 3673 kb/s
  Program 1
      service_name    : Service01
      service_provider: FFmpeg
    Stream #0:0[0x100]: Video: h264 (High 4:2:2) ([27][0][0][0] / 0x001B), yuvj422p(pc, progressive), 640x480 [SAR 1:1 DAR 4:3], 25 fps, 25 tbr, 90k tbn, 50 tbc
    Stream #0:1[0x101]: Video: h264 (High 4:2:2) ([27][0][0][0] / 0x001B), yuvj422p(pc, progressive), 960x540 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
    Stream #0:2[0x102]: Video: h264 (High 4:2:2) ([27][0][0][0] / 0x001B), yuvj422p(pc, progressive), 1024x576 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
    Stream #0:3[0x103]: Video: h264 (High 4:2:2) ([27][0][0][0] / 0x001B), yuvj422p(pc, progressive), 1280x720 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc

Running ffmpeg with no output shows that the streams have different resolutions and correspond to the original four files (640x480, 960x540, 1024x576, 1280x720).


Play or extract specific substreams

Play the best substream with SDL (uxga):
$ ffmpeg -i s4.mp4 -pix_fmt yuv420p -f sdl "SDL OUT"

Play the first substream (vga):
$ ffmpeg -i s4.mp4 -pix_fmt yuv420p -map v:0 -f sdl "SDL OUT"

Use -map v:0 through -map v:3 to play or extract the different video substreams.

Add audio and data to the mpegts stream:

Make two audio test files:
$ espeak "audio channel one, audio channel one, audio channel one" -w audio1.wav
$ espeak "audio channel two, audio channel two, audio channel two" -w audio2.wav

Convert the files from wav to m4a to be compliant with STANAG 4609:
$ ffmpeg -i audio1.wav -codec:a aac audio1.m4a
$ ffmpeg -i audio2.wav -codec:a aac audio2.m4a

Make two data test files:
$ echo "Data channel one. Data channel one. Data channel one." > data1.txt
$ echo "Data channel two. Data channel two. Data channel two." > data2.txt


Map video, audio and data into the mpegts stream

Map three video camera input files, two audio and one data stream into one mpegts stream:
$ ffmpeg -i c1.mp4 -i c2.mp4 -i c3.mp4 -i audio1.m4a -i audio2.m4a -f data -i data1.txt -map 0:v -map 1:v -map 2:v -map 3:a -map 4:a -map 5:d -c:v copy -c:d copy -f mpegts s6.mp4

(This shows that mapping data into a stream doesn't actually work yet - see below!) 


Verify the stream contents

See whether everything is actually in there:
$ ffmpeg -i s6.mp4
[mpegts @ 0x55f2ba4e3820] start time for stream 5 is not set in estimate_timings_from_pts
Input #0, mpegts, from 's6.mp4':
  Duration: 00:00:16.62, start: 1.458189, bitrate: 2676 kb/s
  Program 1
      service_name    : Service01
      service_provider: FFmpeg
    Stream #0:0[0x100]: Video: h264 (High 4:2:2) ([27][0][0][0] / 0x001B), yuvj422p(pc, progressive), 640x480 [SAR 1:1 DAR 4:3], 25 fps, 25 tbr, 90k tbn, 50 tbc
    Stream #0:1[0x101]: Video: h264 (High 4:2:2) ([27][0][0][0] / 0x001B), yuvj422p(pc, progressive), 960x540 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
    Stream #0:2[0x102]: Video: h264 (High 4:2:2) ([27][0][0][0] / 0x001B), yuvj422p(pc, progressive), 1024x576 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
    Stream #0:3[0x103](und): Audio: mp2 ([4][0][0][0] / 0x0004), 22050 Hz, mono, s16p, 160 kb/s
    Stream #0:4[0x104](und): Audio: mp2 ([4][0][0][0] / 0x0004), 22050 Hz, mono, s16p, 160 kb/s
    Stream #0:5[0x105]: Data: bin_data ([6][0][0][0] / 0x0006)

The ffmpeg analysis of the stream shows three video, two audio and one data substream.


Extract the audio and data from the stream

Extract and play one audio channel:
$ ffmpeg -i s6.mp4 -map a:0 aout1.m4a
$ ffmpeg -i aout1.m4a aout1.wav
$ play aout1.wav

and the other one:
$ ffmpeg -i s6.mp4 -map a:1 aout2.m4a
$ ffmpeg -i aout2.m4a aout2.wav
$ play aout2.wav

Extract the data

Extract the data using the -map d:0 parameter:
$ ffmpeg -i s6.mp4 -map d:0 -f data dout1.txt

...and nothing is copied.  The output file is zero length.

This means the original data was not inserted into the stream in the first place, so there is nothing to extract.

It turns out that while FFMPEG does support data copy, it doesn't support data insertion yet.  For the time being, one should either code it up in C using the API, or use gstreamer to insert the data into the stream.

Extract KLV data from a real UAV video file

You can get a sample UAV observation file with video and metadata here:

$ wget

Get rid of that stupid space in the file name:
$ mv Day[tab] DayFlight.mpg
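Tab completion works for a one-off rename, but if a download dumps several space-infested names on you, a small loop handles them all (a sketch; it swaps spaces for underscores rather than deleting them):

```shell
# Rename every file with a space in its name, in a scratch directory.
# 'mv --' guards against names that start with a dash.
cd "$(mktemp -d)"
touch 'Day Flight.mpg'
for f in *' '*; do
    [ -e "$f" ] && mv -- "$f" "$(printf '%s' "$f" | tr ' ' '_')"
done
ls
```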

The above file is perfect for meta data copy and extraction experiments:
$ ffmpeg -i DayFlight.mpg -map d:0 -f data dayflightklv.dat
 [mpegts @ 0x55cb74d6a900] start time for stream 1 is not set in estimate_timings_from_pts
Input #0, mpegts, from 'DayFlight.mpg':
  Duration: 00:03:14.88, start: 10.000000, bitrate: 4187 kb/s
  Program 1
    Stream #0:0[0x1e1]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(progressive), 1280x720, 60 fps, 60 tbr, 90k tbn, 180k tbc
    Stream #0:1[0x1f1]: Data: klv (KLVA / 0x41564C4B)
Output #0, data, to 'dout2.txt':
    encoder         : Lavf57.83.100
    Stream #0:0: Data: klv (KLVA / 0x41564C4B)
Stream mapping:
  Stream #0:1 -> #0:0 (copy)
Press [q] to stop, [?] for help
size=       1kB time=00:00:00.00 bitrate=N/A speed=   0x   
video:0kB audio:0kB subtitle:0kB other streams:1kB global headers:0kB muxing overhead: 0.000000%

Dump the KLV file in hexadecimal:
$ hexdump dayflightklv.dat
0000000 0e06 342b 0b02 0101 010e 0103 0001 0000
0000010 9181 0802 0400 8e6c 0320 8583 0141 0501
0000020 3d02 063b 1502 0780 0102 0b52 4503 4e4f
0000030 0e0c 6547 646f 7465 6369 5720 5347 3438
0000040 040d c44d bbdc 040e a8b1 fe6c 020f 4a1f
0000050 0210 8500 0211 4b00 0412 c820 7dd2 0413
0000060 ddfc d802 0414 b8fe 61cb 0415 8f00 613e
0000070 0416 0000 c901 0417 dd4d 2a8c 0418 beb1
0000080 f49e 0219 850b 0428 dd4d 2a8c 0429 beb1
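Note that hexdump's default format prints 16-bit little-endian words, so byte pairs appear swapped: the file actually begins with the KLV universal key bytes 06 0e 2b 34, which the dump above shows as "0e06 342b".  To see the bytes in file order, use od -tx1 (or hexdump -C):

```shell
# Reproduce the effect on the first four key bytes (octal escapes keep
# printf portable):
printf '\006\016\053\064' > key.dat
hexdump key.dat        # word-swapped view: 0e06 342b
od -An -tx1 key.dat    # byte-ordered view: 06 0e 2b 34
```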


Sneak a peek at interesting text strings:

$ strings dayflightklv.dat



Cool, it works!

KLV Data Debugging

The KLV data is actually what got me started with this in the first place.   The basic problem is how to ensure that the GPS data is saved with the video, so that one can tell where the plane was and what it was looking at, when a recording is played back later.

The transport of KLV metadata over MPEG-2 transport streams in an asynchronous manner is defined in SMPTE RP 217 and MISB ST0601.8:

Here is a more human friendly description:

You can make a short-form metadata KLV LS test message using the echo \\x command to output binary values to a file.  Working with binary data in Bash is problematic, but one just needs to know what the limitations are (zeroes, line feeds and carriage return characters may disappear, for example): don't store binary data in a shell variable (use a file), and don't do shell arithmetic on it; use the calculator bc or awk instead.
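A quick demonstration of why the variable rule matters: a NUL byte survives in a file, but not in a shell variable.

```shell
# Three bytes on disk, but command substitution drops the NUL,
# so fewer bytes come back out of the variable:
printf 'A\0B' > bin.dat
wc -c < bin.dat            # 3 bytes on disk
v="$(cat bin.dat)"         # the NUL does not survive
printf '%s' "$v" | wc -c
```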

The key, length and date are in this example, but I'm still working on the checksum calculation and the byte orders are probably not correct.  It only gives the general idea of how to do it at this point:

# Universal Key for Local Data Set
echo -en \\x06\\x0E\\x2B\\x34\\x02\\x0B\\x01\\x01 > klvdata.dat
echo -en \\x0E\\x01\\x03\\x01\\x01\\x00\\x00\\x00 >> klvdata.dat
# Length 76 bytes for short packet
echo -en \\x4c >> klvdata.dat
# Value: First ten bytes is the UNIX time stamp, tag 2, length 8, 8 byte time
echo -en \\x02\\x08 >> klvdata.dat
printf "%0d" "$(date +%s)" >> klvdata.dat
echo -en \\x00\\x01\\x02\\x03\\x04\\x05\\x06\\x07\\x08\\x09 >> klvdata.dat
echo -en \\x00\\x01\\x02\\x03\\x04\\x05\\x06\\x07\\x08\\x09 >> klvdata.dat
echo -en \\x00\\x01\\x02\\x03\\x04\\x05\\x06\\x07\\x08\\x09 >> klvdata.dat
echo -en \\x00\\x01\\x02\\x03\\x04\\x05\\x06\\x07\\x08\\x09 >> klvdata.dat
echo -en \\x00\\x01\\x02\\x03\\x04\\x05\\x06\\x07\\x08\\x09 >> klvdata.dat
echo -en \\x00\\x01\\x02\\x03\\x04\\x05\\x06\\x07\\x08\\x09 >> klvdata.dat
echo -en \\x00\\x01 >> klvdata.dat
# Checksum tag 1, length 2
echo -en \\x01\\x02 >> klvdata.dat
# Calculate 2 byte sum with bc
echo -en \\x04\\x05 >> klvdata.dat

The UTC time stamp since Epoch 1 Jan 1970 must be the first data field:
$ printf "%0d" "$(date +%s)" | hexdump
0000000 3531 3632 3237 3838 3030              
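As the hexdump shows, printf "%0d" writes the timestamp as ASCII digits, not as binary.  A possible fix (an assumption on my part: the standard wants the field as an 8-byte big-endian count, in microseconds per MISB ST 0601) is a little function that emits a number as raw big-endian bytes:

```shell
# Emit a 64-bit number as 8 raw big-endian bytes.  Octal escapes keep
# printf portable across shells.
be64() {
    n=$1
    for shift in 56 48 40 32 24 16 8 0; do
        printf "\\$(printf '%03o' $(( (n >> shift) & 255 )))"
    done
}

# Tag 2, length 8, then the time stamp in microseconds (assumed units):
printf '\002\010' > tag2.dat
be64 "$(($(date +%s) * 1000000))" >> tag2.dat
od -An -tx1 tag2.dat
```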

The checksum is a doozy.  It is a 16-bit sum of everything excluding the sum itself, and needs the help of the command line calculator bc.  One has to read two bytes at a time, swap them around (probably), convert the binary to hex text, do the calculation and eventually write the result back to the file in binary.  I would need a very big mug of coffee to get that working.
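Here is one possible shape for it - a sketch only.  It assumes the checksum is a 16-bit sum of big-endian 16-bit words over everything up to, but excluding, the two checksum bytes; the word order still needs to be verified against a known-good packet.  awk stands in for bc, which keeps the 16-bit arithmetic out of the shell:

```shell
# 16-bit sum of big-endian words over the whole file except the final
# two (checksum) bytes -- the word order is an assumption.
klvsum() {
    size=$(wc -c < "$1")
    head -c "$((size - 2))" "$1" | od -An -v -tu1 | awk '
        { for (i = 1; i <= NF; i++) b[n++] = $i }
        END {
            s = 0
            for (i = 0; i < n; i += 2)
                s = (s + b[i] * 256 + (i + 1 < n ? b[i + 1] : 0)) % 65536
            printf "%04x\n", s
        }'
}

# Demo on a tiny sample: bytes 01 02 03 04 plus a two-byte placeholder.
printf '\001\002\003\004\000\000' > sample.dat
klvsum sample.dat    # 0x0102 + 0x0304 = 0406
```

The two output bytes would then overwrite the \x04\x05 placeholder at the end of klvdata.dat, for example with dd conv=notrunc.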



Saturday, April 7, 2018

Raspberry Pi Video Streaming

I would like to send video over a satellite modem, but these things are as slow as 1990s-era dial-up modems.  HD video with the H.264 codec streams at 2 to 3 Mbps, so the amount of data must be reduced by a factor of ten or twenty for low-speed satcom.

Instead of running at 30 fps, one should stream at 1 or 2 fps, but most off-the-shelf video encoder/decoder devices don't know how to do that, so I dug a Raspberry Pi v3 and a v1.2 camera out of my toy box, installed gstreamer and started tinkering on my Mac, to gain some experience with the problem.

Of course one can do the exact same thing on a Linux laptop PC, but what would be the fun in that?

Figure 1 - A Low Rate Video Test Pattern

With the gstreamer videorate plugin, one can change the frame rate to almost any value and cranking it down to 1 or 2 fps is no problem.  One could go down to a few frames per minute, but super slow streaming could cause a playback synchronization issue, because the player error handler may time out before it manages to synchronize.

Also note that satcom systems spoof the TCP ACK packets locally, to speed things up a bit.  This means that TCP and UDP work the same over a satcom link.

Bake a Pi

Get your RPi3 from here:

Download a Raspbian image zip file from here:

Open a terminal and unzip with:
$ ark --batch filename.zip

Write the image to a SD card:
$ dd if=filename.img of=/dev/mmcblk0

Mount the rootfs partition on the SD card, then enable sshd in /etc/rc.local.
Add the following line at the bottom of rc.local, just before the exit statement:
systemctl start ssh

Mount the boot partition on the SD card, then configure the file cmdline.txt to use traditional ethernet device names, so that the ethernet device will be named eth0, the way the UNIX gods intended:
Add "net.ifnames=0 biosdevname=0" to the end of cmdline.txt.
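Since cmdline.txt must remain a single line, appending with sed is safer than a second echo line.  A sketch on a scratch copy (on the real card, point it at the file on the mounted boot partition instead):

```shell
# Make a scratch copy with typical contents (the PARTUUID is made up),
# then append the parameters to the one and only line:
printf 'console=serial0,115200 root=PARTUUID=0x1234abcd-02 rootwait\n' > cmdline.txt
sed -i '1 s/$/ net.ifnames=0 biosdevname=0/' cmdline.txt
cat cmdline.txt
```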

Boot Up

Put the SD card in the Pi,  boot up and configure it:
$ sudo raspi-config
Expand root file system
Set hostname to videopi
Enable camera

Add a static IP address in addition to that assigned by DHCP:
$ sudo ip addr add dev eth0

Install video utilities:
$ sudo apt-get install ffmpeg
$ sudo apt-get install gstreamer1.0-plugins-*


The Rpi Camera

The default kernel includes the v4l2 driver, and the latest Raspbian image includes the v4l2 utilities (e.g. v4l2-ctl).

Camera Test:
$ raspistill -o pic.jpg

Load the V4L2 module
$ sudo modprobe bcm2835-v4l2

Add the above to /etc/rc.local to enable the /dev/video0 device at startup.


Multicast Configuration

Enable multicasting:
$ sudo ifconfig eth0 multicast

Add a multicast route, since without it, multicasting will not work:
$ sudo route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0

$ sudo ip address show
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether b8:27:eb:9b:b3:e0 brd ff:ff:ff:ff:ff:ff
    inet brd scope global enxb827eb9bb3e0
       valid_lft forever preferred_lft forever
    inet scope global enxb827eb9bb3e0
       valid_lft forever preferred_lft forever
    inet scope global secondary enxb827eb9bb3e0
       valid_lft forever preferred_lft forever
    inet6 fe80::e774:95b:c83c:6e32/64 scope link
       valid_lft forever preferred_lft forever

And check the route settings with
$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
                                                U     0      0        0 eth0
                                                U     0      0        0 eth0

# netstat -g
IPv6/IPv4 Group Memberships
Interface       RefCnt Group
--------------- ------ ---------------------
lo              1
enxb827eb9      1
enxb827eb9      1
wlan0           1
lo              1      ip6-allnodes
lo              1      ff01::1
eth0            1      ff02::fb
eth0            1      ff02::1:ff3c:6e32
eth0            1      ip6-allnodes
eth0            1      ff01::1
wlan0           1      ip6-allnodes
wlan0           1      ff01::1


Configuration of rc.local

The bottom of /etc/rc.local should look like this:
# Start the SSH daemon
systemctl start ssh

# Load the V4L2 camera device driver
sudo modprobe bcm2835-v4l2

# Add a static IP address in addition to that assigned by DHCP
ip addr add dev eth0
# Enable multicasting:
ifconfig eth0 multicast

# Add a multicast route
route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0
exit 0

Now, when you restart the Pi, it should be ready to stream video.  You could edit the above on the Pi with the nano editor, or move the SD card back to your desktop computer, mount the rootfs partition and edit the file there.  This is a MAJOR advantage of the Pi architecture: if you mess something up and the Pi won't boot, then you can fix the SD-card-based system on another machine.  Also, to replicate a Pi, just make a backup and copy the SD card.

Streaming Video

The problem with setting up a streaming system is that there are many pads, and each pad has many options.  These options don't necessarily work together, and finding a combination that does approximately what you need can be very time consuming.

However, the defaults usually work.  So the best approach is to make a basic stream, get it to work and only then start to experiment, while keeping careful notes of what works and what doesn't.


If necessary, install gstreamer:
$ sudo apt-get install gstreamer1.0-tools

Install all gstreamer plugins:
$ sudo apt-get install gst*plugin*

The simplest test:
$ gst-launch-1.0 videotestsrc ! autovideosink
$ gst-launch-1.0 v4l2src device=/dev/video0 ! autovideosink

A simple raw stream gives me 'message too long' errors.  The solution is the 'chopmydata' plugin.


MJPEG Streaming Examples

To verify that the Pi is streaming, run tcpdump on a second terminal:
$ sudo tcpdump -nlX port 5000
Stream a test pattern with motion jpeg:
$ gst-launch-1.0 videotestsrc ! jpegenc ! chopmydata max-size=9000 ! udpsink host= port=5000

If you enable the WiFi access point feature with raspi-config, then the Pi can make a fairly decent home security or toy drone camera, which only needs a power cable.  With multicast streaming, you can connect from multiple computers on the LAN simultaneously.

Stream the Pi Camera with motion jpeg:
$ gst-launch-1.0 v4l2src device=/dev/video0 ! jpegenc ! chopmydata max-size=9000 ! udpsink host= port=5000

If tcpdump shows that the Pi is streaming, then start a player on another machine.
Play the video with ffplay on another machine:
$ ffplay -f mjpeg udp://

The VLC player should also work, but stupid players like MS Media Player or Apple QuickTime will not be able to figure out what to do with a simple raw UDP stream.  These players need RTP to tell them what to do.

Also note that RTP needs an ODD port number, while everything else needs EVEN port numbers.  I am not to reason why...

MJPEG Low Frame Rate Examples

The basic framerate pad for 10 fps is:
! video/x-raw,framerate=10/1 !

You can also scale it at the same time:
! video/x-raw,width=800,height=600,framerate=10/1 !

When you slow things down a lot, every frame is different.  Consequently the H.264 codec will not work well, so I selected the Motion JPEG jpegenc codec instead.

This low rate videorate stream works:
gst-launch-1.0 v4l2src device=/dev/video0 ! \
 video/x-raw,width=800,height=600,framerate=10/1 ! jpegenc !\
 chopmydata max-size=9000 ! udpsink host= port=5000

Play it with ffplay on another machine:
$ ffplay -f mjpeg udp://

You may have to install FFMPEG on the player machines:
$ sudo apt-get install ffmpeg

There is a statically compiled version of FFMPEG for Windows.  Search online for "zeranoe ffmpeg" to find it.

If you make the frame rate very slow, then it will take ffplay very long to synchronize, but it should eventually pop up and play.

Example H.264 MPEG-2 TS Pipelines

The H.264 codec should only be used raw, or with Matroska, MP4/QuickTime or MPEG-2 TS encapsulation:

This raw x264 stream works:
$ gst-launch-1.0 videotestsrc num-buffers=1000 ! x264enc ! udpsink host= port=5000

It plays with this:
$ ffplay -f h264 udp://

This x264 MPEG-2 TS encapsulated stream works, but with too much latency:

$ gst-launch-1.0 videotestsrc  ! x264enc ! mpegtsmux ! udpsink host= port=5000

$ gst-launch-1.0 v4l2src device=/dev/video0 ! x264enc ! mpegtsmux ! udpsink host= port=5000

It plays with this:
$ ffplay udp://

There are various ways to optimize:
bitrate=128 - for low rate coding

tune=zerolatency - to prevent look-ahead and get frames out ASAP

An optimized x264 pad could look like this:

! x264enc bitrate=512 speed-preset=superfast tune=zerolatency !

The zerolatency parameter actually helps:
$ gst-launch-1.0 v4l2src device=/dev/video0 ! x264enc speed-preset=superfast tune=zerolatency ! udpsink host= port=5000

With playback like this:
$ ffplay -f h264 -vf "setpts=PTS/4" udp://224.0.1.:5000

However, the moment I add the mpegtsmux, it is too much for the Pi to handle.   One would need to hook up a second Pi to convert the raw stream to an encapsulated MPEG-2 TS stream, or use a faster computer.

Processor Load

I found that the processor load is about 30% with MJPEG, so a little Pi is perfectly fine for streaming video from a single camera, if one uses a simple codec.

The x264 codec is a hungry beast and consumes 360% CPU according to top, which means all four cores are running balls to the wall, indicating that this codec is not really suitable for a little Pi v3 processor.  Nevertheless, it shows that one doesn't have to do H.264 encoding in an FPGA: it can be done on a small embedded processor.


Note that if you don't put a framerate pad in a simple stream, then the presentation time stamps in the stream are wrong or missing, causing the video to play back in slow motion, which can be very befuddling to the uninitiated.

This workaround can make a simple stream play in real-time:
$ ffplay -f mjpeg -vf "setpts=PTS/2" udp://

Sometimes, ffplay is not part of the FFMPEG installation.  If you have this problem and don't want to compile it from source, then you can use ffmpeg with SDL as below, which is what ffplay does also.

Play a stream using FFMPEG and SDL to render it to the default screen:
$ ffmpeg -i udp:// -f sdl -

You could also play the video with Mplayer:
$ mplayer -benchmark udp://

Of course you can use gstreamer to play it, but I prefer using a different tool for playback as a kind of error check.

I also could not get the WiFi device to work as an access point with hostapd.

Some more head scratching is required.

La Voila!


Thursday, April 5, 2018

Open Sourcery

I recently encountered a nice looking video encoder/decoder device by a Canadian company called Haivision, which seemed to be able to do exactly what we needed.  These devices use OpenEmbedded Linux to do video streaming, so that one can take an SDI camera and stream the video over ethernet with MPEG-2 TS and decode it again on the other end.

Multicast Route Bug

Unfortunately, we found a multicast configuration bug:  The unit lacks a multicast route.

The multicast route is explained in the route man page: 

route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0
This is an obscure one documented so people know how to do it. This sets all of the class D (multicast) IP routes to go via "eth0". This is the correct normal configuration line with a multicasting kernel.
It could be fixed easily if one could log in with SSH and add the route setup to /etc/rc.d/rc.local.
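If one did have root access, the whole fix would be a one-liner in the startup script (a sketch; eth0 is an assumption — substitute the unit's actual streaming interface):

```shell
# Sketch of the missing multicast route setup, as it would go
# in /etc/rc.d/rc.local.  eth0 is an assumed interface name.
route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0
```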

Chain Gang Software

However, after some backing and forthing with their customer support, it transpired that Haivision used a Chinese company to build the devices and that Haivision has neither the source code nor the root passwords for them.  Consequently, the devices can only be used for unicast streaming; multicast streaming won't work as intended.

The devices are probably good enough for the majority of users, but they are useless for our specialized application, so they are now in a junk box and we'll make our own solution with general purpose embedded computers and gstreamer.

Free Software

The General Public License was intended to prevent this problem, but if people are uninformed about it, then they cannot exercise their rights.

Oh, well, what the hell!
-- The Bomber, Catch 22, by Joseph Heller

Porcupine Pi

One solution is to buy a $400 SDI to USB adaptor, plug it into a $30 Raspberry Pi computer and install the Raspbian Linux operating system.  A few hours of tinkering with gstreamer and we have vanilla soup.

La voila!


Saturday, January 6, 2018

Beyond the Intel Meltdown Bug

The Intel Page Table Bug has spawned many discussions and a lot of complicated hard work is going into mitigating it, but I am not convinced that the software mitigations are sufficient.  The problem will only really be solved once Intel fixes the bungle in their processor microcode (the current microcode fixes cause processor crashes and have been withdrawn by Dell and HP), or makes new silicon.

I have written more than one assembler, linker and debugger when I was younger and computers were simpler.  I have found and fixed a bug in an Intel C compiler and reported the same bug in GCC.  I have also written a few device drivers for Linux and Windows, so I am experienced with digging deep down in the computer weeds.  However, I am not unique - there are thousands of other people with the same experience and the problem is that some of these people are not very nice...

The issue that I see with software patches in the Compilers, OS and Web Browsers, is that an attacker need not use a new patched Compiler, OS or Web Browser.  

If an attacker is smart enough to understand and use the side effects of faulty processor instructions to read someone else's data, then he is smart enough to write a piece of assembly code by hand and insert it inline in a C code wrapper to exploit it, or he can simply use an older unpatched version of the C compiler/OS/Browser.

The other problem is that there are hundreds of different Intel processors and there is no easy way for ordinary mortals to tell which Intel processor is inside their computer.  The sticker on the outside of a box is purely a marketing sticker with no useful information on it.  Consequently, trying to figure out which of the hundreds of patches you need is a fool's errand.  Therefore the easiest solution is to buy a new AMD based computer.
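The sticker problem at least has a software answer on Linux: the kernel knows exactly which CPU it is running on, and the family/model/stepping triplet is what microcode updates are keyed on. A quick look:

```shell
# Ask the kernel which CPU is really in the box.
# The family/model/stepping numbers are what microcode updates are keyed on.
grep -m1 'model name' /proc/cpuinfo
grep -m1 'cpu family' /proc/cpuinfo
grep -m1 'stepping'   /proc/cpuinfo
```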

One Way Mirror

Intel is trying their level best to throw sand in the eyes of the press, by pointing out minor flaws in the AMD processor line.  Yes, AMD processors have some bugs too, but the Intel Meltdown bug is extremely dangerous.  An attacker can rent a $5 virtual server in a data centre and then read the data of other users, while the victims have no way to tell that they are being spied upon.  

It is like having a one-way mirror in your bedroom - you cannot tell whether there are prying eyes on the other side.

I fear that all the hard work that was done by BSD, Amazon, Google, Microsoft, Apple, Redhat, Linus and others, will only stop script kiddies and will not stop a determined attacker.

Sell, Sell, Sell...

In 1997, there was the Intel Pentium F00F bug.  If one stuck the code "F0 0F C7 C8" into an inline assembly statement, then a faulty Pentium processor would immediately halt.  The problem that we have now is much, much worse, but as with the F00F bug, it will only be completely solved once everyone has thrown their old Intel computers away and bought new AMD computers, or new Intel ones made with new silicon.

If you haven't followed the lead of the Intel CEO and swapped your Intel stock for AMD stock, then you had better do so post-haste, since nobody in their right mind should buy Intel processors until they have properly fixed this mess.


Oh well, at least my little Raspberry Pi is fine and maybe I should go and dig my Beaglebone Black out of the junk box too...

Écrasez l'infâme!


Wednesday, December 27, 2017

Parasitic Quadrifilar Helical Antenna

This article was reprinted in OSCAR News, March 2018:

If you want to receive Satellite Weather Pictures, then you need a decent antenna, otherwise you will receive more noise than picture.

For polar orbit satellites, one needs an antenna with a mushroom shaped radiation pattern.  It needs to have strong gain towards the horizon where the satellites are distant, less gain upwards where they are close and as little as possible downwards, which would be wasted and a source of noise.  Most satellites are spin stabilized and therefore the antenna also needs circular polarization, otherwise the received signal will flutter as the antennas rotate through nulls.

The helical antenna, first proposed by Kraus in 1948, is the natural solution to circular polarized satellite communications.  It is a simple twisted wire - there seems to be nothing to it.  Various papers have been published on helix antennas, so the operation is pretty well understood.

Therefore, it is amazing that after more than half a century, there is still a new twist to the helical antenna...

Backfire Helix

The backfired quadrifilar helix array is especially popular for amateur satellite communications, but the results reported by Chris van Lindt and Julian Moss (G4ILO) regarding the antenna drawing on the right, left me curious and wondering whether we are dealing with an internet myth, or a comedy of errors, or a design that is too sensitive to build easily.   

Chris reported that the QFH exhibits nulls that are useful for tuning out terrestrial interference.

How can an omni-directional antenna have a null?  That comment rang a huge alarm bell in my mind that the commonly used QFH antenna was not designed or built right.  To figure out what is going on, I modeled the QFH in NEC2.

First of all, I don't really like backfire helices, because that is not the way that Kraus intended them to be implemented and because much power is lost in the forward direction which will then hit the ground, while you are trying to talk to the sky.  The Kraus helix design calls for an earth plane / reflector, which will project the back lobe forward.

Without the reflector, the radiation pattern of a helix is very messy, but since that is what lots of people are using, I modelled it this way.

A model is never 100% the same as a real antenna, but the NEC cards presented below allow any true card-carrying radio/computer geek (a.k.a. radio ham) to easily play around with it and get a feeling for the critical antenna constraints, before building one.

The helical antenna work published by Kraus in 1948, shows that a thin helix radiates in normal mode, while a fat helix radiates in axial mode, as shown in his famous angle vs circumference graph. 

Simple Thin Helix in Free Space - No Reflector

The picture above shows what a single turn thin helix radiation pattern looks like if there is no reflector - an upside down mushroom.  The bulb at the bottom is turned skyward when the thing is flipped over in a backfire configuration, while the twirl at the top is then pointed to the ground.  So while in backfire mode it is nicely circular polarized and nicely omni-directional, there is nevertheless significant radiation towards the ground.

I plotted these with CocoaNEC on a Macbook (since it makes the prettiest plots), but it cannot rotate a helix about the x or y axis, so if you want to flip it, you've got to turn your computer around.  CocoaNEC also cannot handle a half turn helix, so I used one full turn.  You could use xnec2c on Linux or BSD for the full set of NEC2 helix options, at the cost of uglier graphics.

Helical Arrays

A monofilar helix is a very long and unwieldy thing.  It is easier to handle a shorter antenna and there are various ways to achieve that.

Every half wavelength, the current in an antenna goes to zero.  When the current goes to zero, it doesn't matter whether the wire is open or closed circuit, so one can cut an antenna every half wavelength and it will still work the same.  Similarly, a long helix can be considered an array of identical little helices in a row.  One could even take these little helices and put them side by side and it will still work the same, or one can rotate and interleave them into a multifilar helix.

The main problem with a multifilar helix is hooking the filaments up with the correct phasing.

Bifilar Helix

In a bifilar design, one helix is rotated through 180 degrees.  It also needs to be driven with a signal that is shifted by 180 degrees.  This is easy to do with a balun.

Connect the centre wire to one helix, the shield to the other and then wind five to ten turns in the coax feed to increase the impedance of the sleeve.   That makes a simple infinite balun.

Quadrifilar Helix

A quad design is the same idea as the bifilar, with four helices rotated by 0, 90, 180 and 270 degrees respectively.  A quad design is nice and compact, but getting the phasing right is much more of a chore.  A 1/4 wavelength of coaxial cable will give a 90 degree phase shift.  This is easy to do for a hobbyist, since all you need is a calculator and a ruler.
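As a worked example, here is the calculator part for cutting that 1/4 wave phasing line at 137 MHz. The free-space quarter wave must be shortened by the cable's velocity factor; the VF of 0.66 for solid-dielectric RG-58 is an assumed typical value, so check your cable's datasheet:

```shell
# Physical length of a 90 degree (quarter wave) coax phasing line at 137 MHz.
# VF = 0.66 is typical for solid-dielectric RG-58 - an assumption, not a spec.
awk 'BEGIN {
    c  = 299792458   # speed of light, m/s
    f  = 137e6       # frequency, Hz
    vf = 0.66        # cable velocity factor
    printf "%.0f mm\n", (c / f) / 4 * vf * 1000
}'
# prints "361 mm"
```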

QFH - 4 Phased Driven Elements

Most of the QFH designs on the wild wild web, however, use one short and one long loop of wire (as in the design for the OSCAR 7 satellite).  The idea is to make two helices that are too long (inductive) and two that are too short (capacitive), then hook them up in parallel.  One loop then leads by 45 degrees while the other lags by 45 degrees electrically, thus giving a 90 degree phase shift.

However, if the wire dimensions are not exactly right, then it will be anything but - especially the capacitance.  Hence that comment about the handy nulls in the omni-directional pattern...

NEC2 model of a QFH with Transmission Line Phasing:
CM Quad Helix Antenna
CM Copyright reserved, Herman Oosthuysen, 2017, GPL v2
CM 2 meter helical dipole array
CM 137 MHz
CM c=299792458 m/s
CM WL = 2188 mm, r=348 mm
CM WL/2 = 1094 mm
CM WL/4 = 547 mm
CM Max Segments is 10,000 / 40 mm = 250
CM Diameter = 378 mm
CM Radius = 189 mm
CM Length = 570 mm
CM Turns = 1
CM Turn spacing = 570 x 2 = 1140 mm
GH 1 50 1.14E+00 1.14E+01 1.89E-01 1.89E-01 1.89E-01 1.89E-01 1.00E-03
GM 1   1        0        0      90        0        0        0        0
GM 1   1        0        0      90        0        0        0        0
GM 1   1        0        0      90        0        0        0        0
TL 1 1 2 1 50 0.547 0 0 0 0
TL 2 1 3 1 50 0.547 0 0 0 0
TL 3 1 4 1 50 0.547 0 0 0 0
FR     0     0     1      0  1.37E+02         0         0         0         0         0
EX     0     0     1      0  1.00E+00  0.00E+00  0.00E+00  0.00E+00  0.00E+00  0.00E+00
RP     0    91   120   1000         0         0         2         3      5000

The NEC model is actually not complicated, but you need to read the manual to understand it.  I defined one helix with a GH card, then replicated and rotated it 3 times with GM cards.  The phasing is done with three transmission line (TL) cards. The first helix is excited with 1 Volt using an EX card and the last thing is the radiation pattern (RP) card.

BTW, the NEC2 manual is here:

Quadrifilar Parasitic Helix

Another way to get the phasing right, is to ignore it altogether!

If you make a quad and only drive one helix and leave the other three floating as parasitic elements (same as on a Yagi-Uda antenna), it will work almost exactly the same as when you actively drive them.  Most importantly, it will work much better than if you would drive them wrongly!
QFH - 1 Driven, 3 Parasitic

The above plot shows a quadrifilar helix in free space with one driven element and three parasitic elements.  This plot doesn't look much different from the one above it, and it eliminates a major headache, so you can set your phasers to stun.

The NEC model is the same, just remove the three transmission lines.


The antenna god (a.k.a. Kraus) intended helices to work with reflectors.  If we expand the model to include a ground plane, the pattern turns right side up and the stem of the mushroom (almost) disappears, leaving only the bulb, so all the energy goes the right way, providing another dB or two of gain.

QFH - 1 Driven, 3 Parasitic, Reflector

It is the same as the one above, but you don't need to crick your neck.

NEC2 Model Including a Reflector:
CM Quad Helix Antenna with Parasitic Elements
CM Copyright reserved, Herman Oosthuysen, 2017, GPL v2
CM 2 meter helical array
CM 137 MHz
CM c=299792458 m/s
CM WL = 2188 mm, r=348 mm
CM WL/2 = 1094 mm
CM WL/4 = 547 mm
CM Max Segments is 10,000 / 40 mm = 250
CM Diameter = 378 mm
CM Radius = 189 mm
CM Length = 570 mm
CM Turns = 1
CM Turn spacing = 570 x 2 = 1140 mm
GH 1 50 1.14E+00 1.14E+01 1.89E-01 1.89E-01 1.89E-01 1.89E-01 1.00E-03
GM 1   1        0        0      90        0        0        0        0
GM 1   1        0        0      90        0        0        0        0
GM 1   1        0        0      90        0        0        0        0
GN 1
FR     0     0     1      0  1.37E+02         0         0         0         0         0
EX     0     0     1      0  1.00E+00  0.00E+00  0.00E+00  0.00E+00  0.00E+00  0.00E+00
RP     0    91   120   1000         0         0         2         3      5000

The reflector is modeled here as a ground plane card (GN 1).   This works on CocoaNEC, but with Xnec2c, you need to shift the helix up by one or two millimeters to avoid a short circuit, or define a multi patch surface with two SM and SC cards slightly below z=0.


It therefore turns out that a 2 meter band, 146 MHz QFH antenna is actually easy to build, even easier than anyone imagined, simply by ignoring the phasing problem altogether:  

Wind four helices, each 1027 mm long, around a former of 300 to 400 mm diameter, connect one up to a 50 Ohm coax and leave the other three floating as parasitic elements.
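That 1027 mm figure is simply a free-space half wavelength at 146 MHz (bare wire, so no coax velocity factor applies here), which you can check with a one-liner:

```shell
# Each filament is an electrical half wave at 146 MHz.
awk 'BEGIN {
    c = 299792458    # speed of light, m/s
    f = 146e6        # 2 meter amateur band, Hz
    printf "%.0f mm\n", (c / f) / 2 * 1000
}'
# prints "1027 mm"
```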

For good measure, add a reflector, connect it to the screen of the co-ax and put the thing right side up as Kraus intended.

Similarly, you could make a helical array with any number of filaments and get any amount of gain (practically up to about 15 dBi), but the quad neatly solves the impedance matching problem, since it has an impedance of about 40 Ohms and can be hooked up with garden variety RG-58 co-ax without bothering with a tuning element.

Circular Polarization

The helix rotation forces the electrical field to rotate clockwise, when looking up at the sky.  To confirm that you have the right hand polarization correct, get a nice big wood screw.  If the helix uses a reflector, then it needs to follow a normal right handed screw.  If the helix is backfired without a reflector, then it needs to be opposite to the right handed screw.

A wrong way polarized antenna will cause a big drop in signal strength.  Opposite polarization is effectively a permanent null pointed at the satellite. 

A right handed bolt will never fit in a left handed nut - unless you use a big hammer...

La Voila!

Herman Oosthuysen