Lesson 7 : Decode, save, visualize, analyze and re-transmit

Download lesson [here]

In this example we do several things simultaneously: save the stream to disk, decode it to bitmap, visualize it in two different X windows, pass the decoded frames to an OpenCV analyzer and re-transmit the stream to a multicast address.

Only a single connection to the IP camera is required and the stream is decoded only once.

The filtergraph looks like this:

main branch
(LiveThread:livethread) --> {ForkFrameFilter3: fork_filter}
                                         |
                            branch 1 <---+
                                         |
                            branch 2 <---+
                                         |
                            branch 3 <---+

branch 1 : recast
-->> (LiveThread:livethread2_1)

branch 2 : save to disk
--> (FileFrameFilter:file_filter_2)

branch 3 : decode
-->> {AVThread:avthread_3} ------------+
                                       |
  {ForkFrameFilter: fork_filter_3} <---+
                 |
      branch 3.1 +--->> (OpenGLThread:glthread_3_1) --> to two x-windows
                 |
      branch 3.2 +----> {TimeIntervalFrameFilter: interval_filter_3_2} --> {SwScaleFrameFilter: sws_filter_3_2} --> {RGBShmemFrameFilter: shmem_filter_3_2}

There is a new naming convention: the names of filters, threads and fifos are tagged with “_branch_sub-branch”.

As usual, we program the filtergraph tree starting from the outermost leaves and moving towards the main branch:

from valkka.core import *

# *** branch 1 ***
livethread2_1    =LiveThread("livethread2_1")
live2_in_filter  =livethread2_1.getFrameFilter()

# *** branch 2 ***
file_filter_2    =FileFrameFilter("file_filter_2")

# *** branch 3.1 ***
glthread_3_1     =OpenGLThread("glthread")
gl_in_filter_3_1 =glthread_3_1.getFrameFilter()

# *** branch 3.2 ***
image_interval=1000  # YUV => RGB interpolation to the reduced size is done every 1000 milliseconds and the result is passed on to the shmem ringbuffer
width  =1920//4      # CPU YUV => RGB interpolation
height =1080//4      # CPU YUV => RGB interpolation
shmem_name    ="lesson_4"      # This identifies posix shared memory - must be unique
shmem_buffers =10              # Size of the shmem ringbuffer

shmem_filter_3_2    =RGBShmemFrameFilter(shmem_name, shmem_buffers, width, height)
sws_filter_3_2      =SwScaleFrameFilter("sws_filter", width, height, shmem_filter_3_2)
interval_filter_3_2 =TimeIntervalFrameFilter("interval_filter", image_interval, sws_filter_3_2)

# *** branch 3 ***
fork_filter_3  =ForkFrameFilter("fork_3",gl_in_filter_3_1,interval_filter_3_2)
avthread_3     =AVThread("avthread_3",fork_filter_3)
av3_in_filter  =avthread_3.getFrameFilter()

# *** main branch ***
livethread  =LiveThread("livethread_1")
fork_filter =ForkFrameFilter3("fork_filter",live2_in_filter,file_filter_2,av3_in_filter)
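
The listing above only constructs the filtergraph; to actually run it, the threads must be started, the camera connection and the multicast re-transmission registered, file writing activated and the X windows created, just as in the earlier lessons. The following is a minimal sketch of that part, assuming the same API calls used in the previous lessons: the RTSP address, multicast address and port, slot number and output filename are placeholders, not part of this lesson's code.

import time

# NOTE: camera address, multicast address/port, slot and filename below
# are placeholders - adapt them to your own setup

# start the threads: decoding/rendering threads first, livethreads last
glthread_3_1.startCall()
avthread_3.startCall()
livethread2_1.startCall()
livethread.startCall()

avthread_3.decodingOnCall()                  # branch 3: tell the decoder to start decoding

file_filter_2.activate("saved_stream.mkv")   # branch 2: start writing to a matroska file

# branch 1: re-transmit everything arriving at slot 1 to a multicast address
out_ctx =LiveOutboundContext(LiveConnectionType_sdp, "224.1.168.91", 1, 50000)
livethread2_1.registerOutboundCall(out_ctx)

# main branch: connect to the IP camera and send its frames (slot 1) to the fork
ctx =LiveConnectionContext(LiveConnectionType_rtsp, "rtsp://admin:12345@192.168.0.157", 1, fork_filter)
livethread.registerStreamCall(ctx)
livethread.playStreamCall(ctx)

# branch 3.1: map slot 1 to two x windows
win_id1 =glthread_3_1.createWindow()
win_id2 =glthread_3_1.createWindow()
glthread_3_1.newRenderGroupCall(win_id1)
glthread_3_1.newRenderGroupCall(win_id2)
ctx_id1 =glthread_3_1.newRenderContextCall(1, win_id1, 0)
ctx_id2 =glthread_3_1.newRenderContextCall(1, win_id2, 0)

time.sleep(30)   # stream for half a minute

# clean up in reverse order
glthread_3_1.delRenderContextCall(ctx_id1)
glthread_3_1.delRenderContextCall(ctx_id2)
glthread_3_1.delRenderGroupCall(win_id1)
glthread_3_1.delRenderGroupCall(win_id2)

livethread.stopCall()
livethread2_1.stopCall()
avthread_3.stopCall()
glthread_3_1.stopCall()

The download link above has the complete, working version of this sequence.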

The full code can be downloaded from [here].

The OpenCV client program for reading the shared memory can be found in [lesson 4].

Testing the re-transmitted multicast stream was explained in [lesson 5].
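
As a quick reminder of the latter: with the placeholder multicast address and port used in the sketch above (224.1.168.91, port 50000, H264 payload), an SDP file along these lines can be saved as, say, multicast.sdp and opened with ffplay or VLC to check the re-transmitted stream:

v=0
o=- 0 0 IN IP4 127.0.0.1
s=Valkka multicast test
c=IN IP4 224.1.168.91
t=0 0
m=video 50000 RTP/AVP 96
a=rtpmap:96 H264/90000

For example: ffplay multicast.sdp (newer ffplay versions may additionally require -protocol_whitelist file,udp,rtp).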