Making emacs theme and terminal theme play along nicely

I have been using Emacs for more than six months now, but a few things still annoyed me because I had never completely configured it. Today I fixed them all.

When editing from the command line, I used to resort to nano or, more recently, vim. I was using nano as the editor for my git logs and patches, and as the editor for mutt too. Why not use Emacs for all of them? Well, if you open a fresh Emacs instance every time, loading the .emacs or init.el file takes a while, and even invoking it as emacs -Q -nw is not really worthwhile. What is needed is to run Emacs in daemon mode. I created a systemd service file to start Emacs in daemon mode at startup, taken from the Arch wiki.

[Unit]
Description=Emacs: the extensible, self documenting text editor

[Service]
Type=forking
ExecStart=/usr/bin/emacs --daemon
ExecStop=/usr/bin/emacsclient --eval "(kill-emacs)"
Restart=always

[Install]
WantedBy=default.target
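
Assuming the unit is saved as a user service (my assumption here; the Arch wiki places it at ~/.config/systemd/user/emacs.service), it can be enabled and started with:

systemctl --user enable emacs.service
systemctl --user start emacs.service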

This results in Emacs automatically being started in daemon/server mode. I then added a few aliases to my .zshrc file as below.

alias e="emacsclient -t"
alias ec="emacsclient -c"

Now I can open a file from the command line as “e filename”, and it opens quickly. So all good? No, one problem still remained. I use the solarized theme for my terminal and mutt, while I use zenburn for Emacs. On opening a file as above, the two colour schemes mix and result in a very bad display of text. Running emacs -Q -nw to open a file is fine, as the -Q option stops Emacs from loading its init.el file. So I needed a way to tell emacsclient not to apply any theme when opened in a terminal. After some searching, adding the snippet below to init.el did exactly what I wanted: if I run GUI Emacs, the zenburn theme is used; if I use Emacs in a terminal, no background theme gets applied.

(defun on-frame-open (frame)
  (if (not (display-graphic-p frame))
      (set-face-background 'default "unspecified-bg" frame)))
(on-frame-open (selected-frame))
(add-hook 'after-make-frame-functions 'on-frame-open)

And voilà……

Configuring Mutt and Emacs with AUCTeX

Lately I have been very busy with my MS studies and finding it difficult to make time for anything else. One of the things I have to do is submit at least one assignment every week. You are expected to turn in those assignments in PDF format, and no, a shoddily converted Word-to-PDF document doesn’t look anywhere near professional. I had to pick up LaTeX quickly for the task and it is fucking awesome. I could never get formatting right whenever I had to use MS Word (though, admittedly, I never took the time to really learn it either), but using command specifiers to get what you want in LaTeX comes easily to me.

Now, having been using Emacs for the past few months, what better tool for churning out documents with LaTeX than the Emacs and AUCTeX combo!

http://mathieu.3maisons.org/wordpress/how-to-configure-emacs-and-auctex-to-work-with-a-pdf-viewer

So, this link helped me get Emacs with AUCTeX running without any problem. In case you are not a fan of Emacs, have a look at Gummi. I did my first two assignments with it and it was damn simple to use. No clutter, and you see the preview on the side as you type. And in case you would like to use external LaTeX packages in Gummi, refer to the link below.

http://www.pamelatoman.net/blog/2013/11/using-additional-packages-with-gummi/
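
Back on the Emacs side, the basic AUCTeX settings in init.el look something like the snippet below. This is only a sketch of the commonly recommended options (TeX-auto-save, TeX-parse-self, TeX-master and TeX-PDF-mode are standard AUCTeX variables); the article linked above covers the PDF viewer integration in detail.

(setq TeX-auto-save t)          ; save style information on save
(setq TeX-parse-self t)         ; parse the file on load
(setq-default TeX-master nil)   ; ask for the master file
(setq TeX-PDF-mode t)           ; compile to PDF by default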

I have also been using Mutt as my mail client for a while. Not having to use the mouse really speeds up work, be it in Emacs or Mutt. These two links should make setting up Mutt easier for anybody.

http://pbrisbin.com/posts/mutt_gmail_offlineimap/

http://pbrisbin.com/posts/two_accounts_in_mutt/

Command line for the win!!!!!!! And thanks to Pat Brisbin and Pamela Toman for their informative articles.

.emacs

I started using Emacs a while back and can’t believe how I ever worked in gedit. One can customise Emacs any way one likes. Here is my .emacs file, which enables gtags mode for C files, indentation for C source files, named session saving/loading, automatic brace pairing, maximising the frame on opening Emacs, and switching windows with shift and the arrow keys. Do note that nothing in the .emacs file below is my own work; it has all been collected from different places on the internet.

Most helpful has been: http://scottfrazersblog.blogspot.in/2009/12/emacs-named-desktop-sessions.html

(custom-set-variables
;; custom-set-variables was added by Custom.
;; If you edit it by hand, you could mess it up, so be careful.
;; Your init file should contain only one such instance.
;; If there is more than one, they won't work right.
'(ansi-color-names-vector
["#212526" "#ff4b4b" "#b4fa70" "#fce94f" "#729fcf" "#e090d7" "#8cc4ff" "#eeeeec"])
'(custom-enabled-themes (quote (deeper-blue)))
'(inhibit-startup-screen t))
(custom-set-faces
;; custom-set-faces was added by Custom.
;; If you edit it by hand, you could mess it up, so be careful.
;; Your init file should contain only one such instance.
;; If there is more than one, they won't work right.
)

(when (>= emacs-major-version 24)
  (require 'package)
  (package-initialize)
  (add-to-list 'package-archives '("melpa" . "http://melpa.milkbox.net/packages/") t)
  )

(add-hook 'c-mode-hook 'ggtags-mode)

(desktop-save-mode 1)

(require 'cl)

(setq-default c-basic-offset 4 c-default-style "linux")
(setq-default tab-width 4 indent-tabs-mode t)

(require 'autopair)
(autopair-global-mode)

(custom-set-variables
 '(initial-frame-alist (quote ((fullscreen . maximized)))))

(global-auto-revert-mode t)

(windmove-default-keybindings)

(setq windmove-wrap-around t)

(add-to-list 'load-path "~/.emacs.d/lisp/")

(require 'sr-speedbar)

(defun c-lineup-arglist-tabs-only (ignored)
 "Line up argument lists by tabs, not spaces"
 (let* ((anchor (c-langelem-pos c-syntactic-element))
   (column (c-langelem-2nd-pos c-syntactic-element))
   (offset (- (1+ column) anchor))
   (steps (floor offset c-basic-offset)))
  (* (max steps 1)
     c-basic-offset)))

(add-hook 'c-mode-common-hook
          (lambda ()
          ;; Add kernel style
          (c-add-style
           "linux-tabs-only"
           '("linux" (c-offsets-alist
                      (arglist-cont-nonempty
                       c-lineup-gcc-asm-reg
                       c-lineup-arglist-tabs-only))))))

(add-hook 'c-mode-hook
          (lambda ()
            (let ((filename (buffer-file-name)))
            ;; Enable kernel mode for the appropriate files
            (when (and filename
                       (string-match (expand-file-name "~/src/linux-trees")
                                      filename))
             (setq indent-tabs-mode t)
             (c-set-style "linux-tabs-only")))))

(require 'desktop)

(defvar my-desktop-session-dir
 (concat (getenv "HOME") "/.emacs.d/desktop-sessions/")
 "*Directory to save desktop sessions in")

(defvar my-desktop-session-name-hist nil
 "Desktop session name history")

(defun my-desktop-save (&optional name)
 "Save desktop by name."
 (interactive)
 (unless name
   (setq name (my-desktop-get-session-name "Save session" t)))
 (when name
   (make-directory (concat my-desktop-session-dir name) t)
   (desktop-save (concat my-desktop-session-dir name) t)))

(defun my-desktop-save-and-clear ()
  "Save and clear desktop."
  (interactive)
  (call-interactively 'my-desktop-save)
  (desktop-clear)
  (setq desktop-dirname nil))

(defun my-desktop-read (&optional name)
  "Read desktop by name."
  (interactive)
  (unless name
    (setq name (my-desktop-get-session-name "Load session")))
  (when name
    (desktop-clear)
    (desktop-read (concat my-desktop-session-dir name))))

(defun my-desktop-change (&optional name)
  "Change desktops by name."
  (interactive)
  (let ((name (my-desktop-get-current-name)))
    (when name
      (my-desktop-save name))
    (call-interactively 'my-desktop-read)))

(defun my-desktop-name ()
  "Return the current desktop name."
  (interactive)
  (let ((name (my-desktop-get-current-name)))
    (if name
        (message (concat "Desktop name: " name))
      (message "No named desktop loaded"))))

(defun my-desktop-get-current-name ()
  "Get the current desktop name."
  (when desktop-dirname
    (let ((dirname (substring desktop-dirname 0 -1)))
      (when (string= (file-name-directory dirname) my-desktop-session-dir)
        (file-name-nondirectory dirname)))))

(defun my-desktop-get-session-name (prompt &optional use-default)
  "Get a session name."
  (let* ((default (and use-default (my-desktop-get-current-name)))
         (full-prompt (concat prompt (if default
                                         (concat " (default " default "): ")
                                       ": "))))
  (completing-read full-prompt (and (file-exists-p my-desktop-session-dir)
                                    (directory-files my-desktop-session-dir))
                   nil nil nil my-desktop-session-name-hist default)))

(defun my-desktop-kill-emacs-hook ()
  "Save desktop before killing emacs."
  (when (file-exists-p (concat my-desktop-session-dir "last-session"))
    (setq desktop-file-modtime
          (nth 5 (file-attributes (desktop-full-file-name (concat my-desktop-session-dir "last-session"))))))
  (my-desktop-save "last-session"))

(add-hook 'kill-emacs-hook 'my-desktop-kill-emacs-hook)
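
Usage is simple: M-x my-desktop-save prompts for a session name and saves it under ~/.emacs.d/desktop-sessions/, M-x my-desktop-read loads a saved session back, and M-x my-desktop-change saves the current session before switching to another. A last-session snapshot is also saved automatically on exit via the kill-emacs-hook above.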

How required information is passed to device drivers in the Linux kernel

First, the basics: how do you provide information such as the memory addresses from which the driver will do its register reads/writes? Depending on the kernel version, this can differ. Older kernels and drivers used the platform architecture to specify and retrieve this information; newer kernels and drivers rely on device trees for it.

So let me show how this is done for both.

static struct resource foo_resource[] = {
[0] = {
      .start  = foo_BASE_TEG,         /* start address */
      .end    = foo_BASE_TEG + 0xff,  /* end address */
      .flags  = IORESOURCE_MEM,
      },
[1] = {
      /* interrupt assigned during initialisation */
      .flags  = IORESOURCE_IRQ | IORESOURCE_IRQ_LOWEDGE,
      }
};

static struct foo_platform_data foo_platdata = {
      .osc_freq = 24000000
};

static struct platform_device foo_device = {
     .name = "foo_platform",
     .id   = 0,
     .num_resources  = ARRAY_SIZE(foo_resource),
     .resource       = foo_resource,
     .dev            = {
               .platform_data = &foo_platdata,
         }
};

Something like the above will be specified in a board file for the hardware. The resource structure specifies the memory range with the .start and .end fields, and is in turn passed in the platform_device structure. When your driver loads at kernel boot, the platform driver register function matches the name specified in the driver structure and uses the data passed from the platform device above. Alternatively, if probe() is called directly by the kernel on boot up, the platform_device pointer passed to the probe call can be used to retrieve the platform data. The oscillator frequency was specified in the platform data above, but any such data can be specified and then accessed in the driver. If pdev is the pointer in the probe call, the platform data is accessible as pdev->dev.platform_data.

The pointer to the resource structure can be had with a call to platform_get_resource. Once the resource structure pointer is available, an ioremap call returns the address to be used from that point onwards, which is assigned to an __iomem variable. Any reads or writes you do from here on are based off the memory address you got in that __iomem variable. The readl and writel functions are used on the ARM architecture to read from or write to registers. You may not notice these functions directly in a driver, as drivers are often built around functionality provided by a subsystem, but ultimately, in the back end, these functions are used.
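
To make this concrete, below is a minimal sketch of such a probe(), reusing the hypothetical foo names from above (the 0x04 register offset is made up purely for illustration; platform_get_resource, ioremap, resource_size and writel are the actual kernel interfaces):

#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/io.h>

/* Normally shared with the board file via a header */
struct foo_platform_data {
	unsigned long osc_freq;
};

static int foo_probe(struct platform_device *pdev)
{
	struct foo_platform_data *pdata = pdev->dev.platform_data;
	struct resource *res;
	void __iomem *base;

	/* Fetch the memory resource specified in the board file */
	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	if (!res)
		return -ENODEV;

	/* Map the register range; all register accesses are based off this pointer */
	base = ioremap(res->start, resource_size(res));
	if (!base)
		return -ENOMEM;

	/* Example register write using the platform data; hypothetical offset */
	writel(pdata->osc_freq, base + 0x04);

	return 0;
}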

Have a look at the CAN-related structures here

http://git.toradex.com/cgit/linux-toradex.git/tree/arch/arm/mach-tegra/board-colibri_t20.c?h=tegra

and then have a look at the driver code below, especially the probe() call

http://git.toradex.com/cgit/linux-toradex.git/tree/drivers/mtd/maps/tegra_nor.c?h=tegra

In recent kernels, device trees are used instead. For example, an ADC peripheral has a device tree node specification as below.

adc0: adc@4003b000 {
    compatible = "fsl,vf610-adc";
    reg = <0x4003b000 0x1000>;
    interrupts = <0 53 0x04>;
    clocks = <&clks VF610_CLK_ADC0>;
    clock-names = "adc";
    status = "disabled";
    #io-channel-cells = <1>;
};

So, 0x4003b000 is the starting address of the peripheral. Have a look here: http://lxr.free-electrons.com/source/drivers/iio/adc/vf610_adc.c. The of_device_id table matches against the compatible string specified in the node, and the driver uses the information from that node. Have a look at the driver, especially the probe() function; it is pretty simple and shows how the memory information is read in from the node and used further on. You can also clearly see the readl() and writel() calls. Do note, however, that not all drivers use readl and writel directly; some have several cascaded pointer calls which ultimately result in a call to readl or writel. The data in a device tree node is retrieved with the device tree node helper functions.
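
As a rough sketch of the device tree flavour (hypothetical foo names again; of_device_id matching, platform_get_resource, devm_ioremap_resource and of_property_read_u32 are the actual kernel interfaces):

#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/of.h>
#include <linux/io.h>
#include <linux/err.h>

/* Matched against the compatible string in the device tree node */
static const struct of_device_id foo_adc_match[] = {
	{ .compatible = "fsl,vf610-adc" },
	{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, foo_adc_match);

static int foo_adc_probe(struct platform_device *pdev)
{
	struct resource *mem;
	void __iomem *regs;
	u32 cells;

	/* The reg property <0x4003b000 0x1000> shows up as a memory resource */
	mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	regs = devm_ioremap_resource(&pdev->dev, mem);
	if (IS_ERR(regs))
		return PTR_ERR(regs);

	/* Other node data is retrieved with the of_* helper functions */
	of_property_read_u32(pdev->dev.of_node, "#io-channel-cells", &cells);

	return 0;
}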

Setting up Yocto/Poky for Beagleboard-xM/Beagleboard/Beaglebone

I have used Buildroot before for setting up an environment for the Beagleboard, but OpenEmbedded/Yocto gives much more power with regard to the number of packages you can build and the customisations you can make. From here on I am assuming that you are doing all of this in a separate beagle directory. A knowledge of git and your smartness 😉 is assumed.

Clone the poky repository with git

git clone git://git.yoctoproject.org/poky

Enter the poky directory and clone the “meta-ti” layer. This layer will be required for Beagle-specific builds.

git clone git://git.yoctoproject.org/meta-ti meta-ti

Clone the meta-openembedded, openembedded-core and meta-qt5 layers, while in the poky directory.

git clone git://git.openembedded.org/openembedded-core openembedded-core

git clone git://git.openembedded.org/meta-openembedded meta-openembedded

git clone git://github.com/meta-qt5/meta-qt5 meta-qt5

In each of the cloned repositories, check out the branch you want to work with. If you do not select a branch, all of them will stay on the default master branch. For example, you can select the dora or daisy branch.
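
For example, to select the daisy branch while inside one of the repositories:

git checkout -b daisy origin/daisy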

While in the beagle directory, run source poky/oe-init-build-env poky-build. The poky-build directory is where all your builds will take place, downloads will happen, and all the packages and images will reside.

So now your beagle directory has two directories inside: poky and poky-build. The poky directory holds the various meta layers.

After running the source command above, do not exit the terminal or switch to a different terminal or directory. The script you just ran set up the environment variables required for the build.

Open the conf/bblayers.conf file with an editor like nano or gedit and add the required entries so that the file looks exactly as below, adjusting the /home/sanchayan/beagle paths to match your own setup.

# LAYER_CONF_VERSION is increased each time build/conf/bblayers.conf
# changes incompatibly
LCONF_VERSION = "6"

BBPATH = "${TOPDIR}"
BBFILES ?= ""

BBLAYERS ?= " \
  /home/sanchayan/beagle/poky/meta \
  /home/sanchayan/beagle/poky/meta-yocto \
  /home/sanchayan/beagle/poky/meta-yocto-bsp \
  /home/sanchayan/beagle/poky/meta-ti \
  /home/sanchayan/beagle/poky/meta-qt5 \
  /home/sanchayan/beagle/poky/openembedded-core \
  /home/sanchayan/beagle/poky/meta-openembedded/meta-ruby \
  /home/sanchayan/beagle/poky/meta-openembedded/meta-oe \
  "
BBLAYERS_NON_REMOVABLE ?= " \
  /home/sanchayan/beagle/poky/meta \
  /home/sanchayan/beagle/poky/meta-yocto \
  "

Open a different terminal and go to the openembedded-core directory inside poky. Make a conf directory and add a layer.conf file as below.

# We have a conf and classes directory, append to BBPATH
BBPATH .= ":${LAYERDIR}"

# We have a recipes directory, add to BBFILES
BBFILES += "${LAYERDIR}/recipes*/*/*.bb ${LAYERDIR}/recipes*/*/*.bbappend"

BBFILE_COLLECTIONS += "openembedded-core"
BBFILE_PATTERN_openembedded-core := "^${LAYERDIR}/"
BBFILE_PRIORITY_openembedded-core = "4"

Now go to the meta-openembedded directory in poky. Make a conf directory and add a layer.conf file as below.

# We have a conf and classes directory, append to BBPATH
BBPATH .= ":${LAYERDIR}"

# We have a recipes directory, add to BBFILES
BBFILES += "${LAYERDIR}/recipes*/*/*.bb ${LAYERDIR}/recipes*/*/*.bbappend"

BBFILE_COLLECTIONS += "meta-openembedded"
BBFILE_PATTERN_meta-openembedded := "^${LAYERDIR}/"
BBFILE_PRIORITY_meta-openembedded = "5"

Go back to the terminal in which you entered the poky-build directory after running source poky/oe-init-build-env. Add the following to conf/local.conf.

BB_NUMBER_THREADS = "4"
PARALLEL_MAKE = "-j 4"
INHERIT += "rm_work"
IMAGE_INSTALL_append = " \
            packagegroup-core-x11 \
            libx11 \            
            qtbase \
            qt3d \
            qtconnectivity \
            qtmultimedia \             
            qtserialport \            
            qtwebsockets \
            qtsvg \
            qtx11extras \
              "

The BB_NUMBER_THREADS and PARALLEL_MAKE values in my local.conf reflect the fact that I have a quad-core machine; set them as per your machine configuration. Also, set the MACHINE variable in the file. I set it to MACHINE ?= "beagleboard"; just add this line below the default one. IMAGE_INSTALL_append adds the specified packages to any image we build, and we are going to do a minimal build. You can add the packages you like.

First, look for a specific package you want at the link below. Do select the relevant branch, matching the branch you selected at the start of this tutorial.

http://layers.openembedded.org/layerindex/branch/master/recipes/

After this, check which layer the package recipe resides in. Clone that layer the same way we added the meta-ti or meta-qt5 layers and add it to the bblayers.conf file. If the layer has a dependency, you need to clone and add the relevant dependency layer too. I wanted to build Qt5, so I added the meta-qt5 layer. If you want to build cherokee, you need to add the meta-webserver layer, in which the cherokee recipe resides.

Some packages fail due to a fetch failure. This is because a couple of mirror URL definitions which OpenEmbedded uses are not present in Yocto.

Add the following to meta/classes/mirrors.bbclass and meta/conf/bitbake.conf in the poky source tree respectively. Make sure you add it at the right place.

${SAVANNAH_GNU_MIRROR} http://download-mirror.savannah.gnu.org/releases \n \
${SAVANNAH_NONGNU_MIRROR} http://download-mirror.savannah.nongnu.org/releases \n \

SAVANNAH_GNU_MIRROR = "http://download-mirror.savannah.gnu.org/releases"
SAVANNAH_NONGNU_MIRROR = "http://download-mirror.savannah.nongnu.org/releases"

A patch for the poky tree to do the above is below, which you can apply with git.

diff --git a/meta/classes/mirrors.bbclass b/meta/classes/mirrors.bbclass
index 1fd7cd8..1dd6cd6 100644
--- a/meta/classes/mirrors.bbclass
+++ b/meta/classes/mirrors.bbclass
@@ -19,8 +19,10 @@ ${DEBIAN_MIRROR}    ftp://ftp.si.debian.org/debian/pool \n \
 ${DEBIAN_MIRROR}    ftp://ftp.es.debian.org/debian/pool \n \
 ${DEBIAN_MIRROR}    ftp://ftp.se.debian.org/debian/pool \n \
 ${DEBIAN_MIRROR}    ftp://ftp.tr.debian.org/debian/pool \n \
-${GNU_MIRROR}    ftp://mirrors.kernel.org/gnu \n \
+${GNU_MIRROR}        ftp://mirrors.kernel.org/gnu \n \
 ${KERNELORG_MIRROR}    http://www.kernel.org/pub \n \
+${SAVANNAH_GNU_MIRROR} http://download-mirror.savannah.gnu.org/releases \n \
+${SAVANNAH_NONGNU_MIRROR} http://download-mirror.savannah.nongnu.org/releases \n \
 ftp://ftp.gnupg.org/gcrypt/     ftp://ftp.franken.de/pub/crypt/mirror/ftp.gnupg.org/gcrypt/ \n \
 ftp://ftp.gnupg.org/gcrypt/     ftp://ftp.surfnet.nl/pub/security/gnupg/ \n \
 ftp://ftp.gnupg.org/gcrypt/     http://gulus.USherbrooke.ca/pub/appl/GnuPG/ \n \
diff --git a/meta/conf/bitbake.conf b/meta/conf/bitbake.conf
index b3786a7..29ed3d3 100644
--- a/meta/conf/bitbake.conf
+++ b/meta/conf/bitbake.conf
@@ -568,6 +568,8 @@ KERNELORG_MIRROR = "http://kernel.org/pub"
 SOURCEFORGE_MIRROR = "http://downloads.sourceforge.net"
 XLIBS_MIRROR = "http://xlibs.freedesktop.org/release"
 XORG_MIRROR = "http://xorg.freedesktop.org/releases"
+SAVANNAH_GNU_MIRROR = "http://download-mirror.savannah.gnu.org/releases"
+SAVANNAH_NONGNU_MIRROR = "http://download-mirror.savannah.nongnu.org/releases"
 
 # You can use the mirror of your country to get faster downloads by putting
 #  export DEBIAN_MIRROR = "ftp://ftp.de.debian.org/debian/pool"
diff --git a/meta/recipes-core/images/core-image-minimal.bb b/meta/recipes-core/images/core-image-minimal.bb
index 9716274..13f9127 100644
--- a/meta/recipes-core/images/core-image-minimal.bb
+++ b/meta/recipes-core/images/core-image-minimal.bb
@@ -8,5 +8,5 @@ LICENSE = "MIT"
 
 inherit core-image
 
-IMAGE_ROOTFS_SIZE ?= "8192"
+#IMAGE_ROOTFS_SIZE ?= "8192"
 

Now you can build an image for your board by running bitbake core-image-minimal. The generated files and images will be in poky-build/tmp/deploy/images/beagleboard.

Follow the link below to transfer the files to the SD card. I don’t know why, but putting the uImage in the boot partition doesn’t work; put the uImage in the /boot directory of the root filesystem instead.

https://www.yoctoproject.org/downloads/bsps/dora15/beagleboard

Now plug in the SD card and boot. It boots very quickly. You are supposed to be connected to the debug serial port. For some reason Ethernet and all the USB ports don’t work; I am trying to figure out why and will update as soon as I do. X also doesn’t seem to work on running startx.

If you would like to set up Qt Creator and use Qt5, build meta-toolchain-qt5 and follow the link below. The link is not exactly for Qt5, but it can be used for a Qt5 setup for the Beagle. There is no need to follow the relocation-related steps in the link.

http://developer.toradex.com/how-to/how-to-set-up-qt-creator-to-cross-compile-for-embedded-linux

Playing .wav/mp3 files using gstreamer in code

You can clone this code with

git clone https://github.com/SanchayanMaity/gstreamer-audio-playback.git

Though I used this on a Toradex Colibri Vybrid module, you can use the same on a Beagleboard or a desktop with the correct setup.

/*
Notes for compilation:
1. For compiling the code along with the Makefile given, a OE setup is mandatory.
2. Before compiling, change the paths as per the setup of your environment.

Please refer to the GStreamer Application Development Manual at the link below before proceeding further
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/index.html

Comprehensive documentation for Gstreamer
http://gstreamer.freedesktop.org/documentation/

The following elements/plugins/packages are expected to be in the module image for this to work
gstreamer
gst-plugins-base
gst-plugins-good-wavparse
gst-plugins-good-alsa
gst-plugins-good-audioconvert
gst-plugins-ugly-mad

Pipeline to play .wav audio file from command line
gst-launch filesrc location="location of file" ! wavparse ! alsasink 

Pipeline to play .mp3 audio file from command line
gst-launch filesrc location="location of file" ! mad ! audioconvert ! alsasink 

It is also assumed that the USB audio device is the only audio device being used on the system. If it is not, the
"device" parameter for alsasink will change; the parameter to be used can be checked with cat /proc/asound/cards
and then needs to be set as follows

In gstreamer pipeline 

Pipeline to play .wav audio file from command line
gst-launch filesrc location="location of file" ! wavparse ! alsasink device=hw:1,0

Pipeline to play .mp3 audio file from command line
gst-launch filesrc location="location of file" ! mad ! audioconvert ! alsasink device=hw:1,0

In code initialisation in init_audio_playback_pipeline
g_object_set (G_OBJECT (data->alsasink), "device", "hw:0,0", NULL);
                            OR
g_object_set (G_OBJECT (data->alsasink), "device", "hw:1,0", NULL);

The pipeline will ideally remain the same for a different audio device; only the device parameter for alsasink will change
*/

#include <gstreamer-0.10/gst/gst.h>
#include <gstreamer-0.10/gst/gstelement.h>
#include <stdio.h>
#include <unistd.h>
#include <string.h>

#define NUMBER_OF_BYTES_FOR_FILE_LOCATION    256

volatile gboolean exit_flag = FALSE;

typedef struct  
{
    GstElement *file_source;
    GstElement *pipeline;
    GstElement *audio_decoder;    
    GstElement *audioconvert;
    GstElement *alsasink;    
    GstElement *bin_playback;    
    GstBus *bus;
    GstMessage *message;        
    gchar filelocation[NUMBER_OF_BYTES_FOR_FILE_LOCATION];
}gstData;

gstData gstreamerData;

// Create the pipeline element
gboolean create_pipeline(gstData *data)
{        
    data->pipeline = gst_pipeline_new("audio_pipeline");    
    if (data->pipeline == NULL)
    {            
        return FALSE;
    }
    gst_element_set_state (data->pipeline, GST_STATE_NULL);
    return TRUE;
}

// Callback function for dynamically linking the "wavparse" element and "alsasink" element
void on_pad_added (GstElement *src_element, GstPad *src_pad, gpointer data)
{
    g_print ("\nLinking dynamic pad between wavparse and alsasink\n");

    GstElement *sink_element = (GstElement *) data;     // Is alsasink
    GstPad *sink_pad = gst_element_get_static_pad (sink_element, "sink");
    gst_pad_link (src_pad, sink_pad);

    gst_object_unref (sink_pad);
    src_element = NULL;     // Prevent "unused" warning here
}

// Setup the pipeline
gboolean init_audio_playback_pipeline(gstData *data)
{
    if (data == NULL)
        return FALSE;
        
    data->file_source = gst_element_factory_make("filesrc", "filesource");    
    
    if (strstr(data->filelocation, ".mp3"))
    {
        g_print ("\nMP3 Audio decoder selected\n");
        data->audio_decoder = gst_element_factory_make("mad", "audiomp3decoder");
    }
    
    if (strstr(data->filelocation, ".wav"))
    {
        g_print ("\nWAV Audio decoder selected\n");
        data->audio_decoder = gst_element_factory_make("wavparse", "audiowavdecoder");
    }
        
    data->audioconvert = gst_element_factory_make("audioconvert", "audioconverter");    
    
    data->alsasink = gst_element_factory_make("alsasink", "audiosink");
    
    if ( !data->file_source || !data->audio_decoder || !data->audioconvert || !data->alsasink )
    {
        g_printerr ("\nNot all elements for audio pipeline were created\n");
        return FALSE;
    }    
    
    // Uncomment this if you want to see some debugging info
    //g_signal_connect( data->pipeline, "deep-notify", G_CALLBACK( gst_object_default_deep_notify ), NULL );    
    
    g_print("\nFile location: %s\n", data->filelocation);
    g_object_set (G_OBJECT (data->file_source), "location", data->filelocation, NULL);            
    
    data->bin_playback = gst_bin_new ("bin_playback");    
    
    if (strstr(data->filelocation, ".mp3"))
    {
        gst_bin_add_many(GST_BIN(data->bin_playback), data->file_source, data->audio_decoder, data->audioconvert, data->alsasink, NULL);
    
        if (gst_element_link_many (data->file_source, data->audio_decoder, NULL) != TRUE)
        {
            g_printerr("\nFile source and audio decoder element could not link\n");
            return FALSE;
        }
    
        if (gst_element_link_many (data->audio_decoder, data->audioconvert, NULL) != TRUE)
        {
            g_printerr("\nAudio decoder and audio converter element could not link\n");
            return FALSE;
        }
    
        if (gst_element_link_many (data->audioconvert, data->alsasink, NULL) != TRUE)
        {
            g_printerr("\nAudio converter and audio sink element could not link\n");
            return FALSE;
        }
    }
    
    if (strstr(data->filelocation, ".wav"))
    {
        gst_bin_add_many(GST_BIN(data->bin_playback), data->file_source, data->audio_decoder, data->alsasink, NULL);
    
        if (gst_element_link_many (data->file_source, data->audio_decoder, NULL) != TRUE)
        {
            g_printerr("\nFile source and audio decoder element could not link\n");
            return FALSE;
        }
    
        // Avoid checking of return value for linking of "wavparse" element and "alsasink" element
        // Refer http://stackoverflow.com/questions/3656051/unable-to-play-wav-file-using-gstreamer-apis
        
        gst_element_link_many (data->audio_decoder, data->alsasink, NULL);
        
        g_signal_connect(data->audio_decoder, "pad-added", G_CALLBACK(on_pad_added), data->alsasink);    
    }    
    
    return TRUE;
}

// Starts the pipeline
gboolean start_playback_pipe(gstData *data)
{
    // http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/GstElement.html#gst-element-set-state
    gst_element_set_state (data->pipeline, GST_STATE_PLAYING);
    while(gst_element_get_state(data->pipeline, NULL, NULL, GST_CLOCK_TIME_NONE) != GST_STATE_CHANGE_SUCCESS);    
    return TRUE;
}

// Add the pipeline to the bin
gboolean add_bin_playback_to_pipe(gstData *data)
{
    if((gst_bin_add(GST_BIN (data->pipeline), data->bin_playback)) != TRUE)
    {
        g_print("\nbin_playback not added to pipeline\n");
        return FALSE;    
    }
    
    if(gst_element_set_state (data->pipeline, GST_STATE_NULL) == GST_STATE_CHANGE_SUCCESS)
    {        
        return TRUE;
    }
    else
    {
        g_print("\nFailed to set pipeline state to NULL\n");
        return FALSE;        
    }
}

// Disconnect the pipeline and the bin
void remove_bin_playback_from_pipe(gstData *data)
{
    gst_element_set_state (data->pipeline, GST_STATE_NULL);
    gst_element_set_state (data->bin_playback, GST_STATE_NULL);
    if((gst_bin_remove(GST_BIN (data->pipeline), data->bin_playback)) != TRUE)
    {
        g_print("\nbin_playback not removed from pipeline\n");
    }    
}

// Cleanup
void delete_pipeline(gstData *data)
{
    if (data->pipeline)
        gst_element_set_state (data->pipeline, GST_STATE_NULL);    
    if (data->bus)
        gst_object_unref (data->bus);
    if (data->pipeline)
        gst_object_unref (data->pipeline);    
}

// Function for checking the specific message on bus
// We look for EOS or Error messages
gboolean check_bus_cb(gstData *data)
{
    GError *err = NULL;                
    gchar *dbg = NULL;   
          
    g_print("\nGot message: %s\n", GST_MESSAGE_TYPE_NAME(data->message));
    switch(GST_MESSAGE_TYPE (data->message))
    {
        case GST_MESSAGE_EOS:       
            g_print ("\nEnd of stream... \n\n");
            exit_flag = TRUE;
            break;

        case GST_MESSAGE_ERROR:
            gst_message_parse_error (data->message, &err, &dbg);
            if (err)
            {
                g_printerr ("\nERROR: %s\n", err->message);
                g_error_free (err);
            }
            if (dbg)
            {
                g_printerr ("\nDebug details: %s\n", dbg);
                g_free (dbg);
            }
            exit_flag = TRUE;
            break;

        default:
            g_printerr ("\nUnexpected message of type %d\n", GST_MESSAGE_TYPE (data->message));
            break;
    }
    return TRUE;
}

int main(int argc, char *argv[])
{    
    if (argc != 2)
    {
        g_print("\nUsage: ./audiovf /home/root/filename.mp3\n");
        g_print("Usage: ./audiovf /home/root/filename.wav\n");
        g_print("Note: Number of bytes for file location: %d\n\n", NUMBER_OF_BYTES_FOR_FILE_LOCATION);
        return FALSE;
    }
    
    if ((!strstr(argv[1], ".mp3")) && (!strstr(argv[1], ".wav")))
    {
        g_print("\nOnly mp3 & wav files can be played\n");
        g_print("Specify the mp3 or wav file to be played\n");
        g_print("Usage: ./audiovf /home/root/filename.mp3\n");
        g_print("Usage: ./audiovf /home/root/filename.wav\n");
        g_print("Note: Number of bytes for file location: %d\n\n", NUMBER_OF_BYTES_FOR_FILE_LOCATION);
        return FALSE;
    }    
    
    // Initialise gstreamer. Mandatory first call before using any other gstreamer functionality
    gst_init (&argc, &argv);
    
    memset(gstreamerData.filelocation, 0, sizeof(gstreamerData.filelocation));
    strcpy(gstreamerData.filelocation, argv[1]);        
    
    if (!create_pipeline(&gstreamerData))
        goto err;        
    
    if(init_audio_playback_pipeline(&gstreamerData))
    {    
        if(!add_bin_playback_to_pipe(&gstreamerData))
            goto err;        
        
        if(start_playback_pipe(&gstreamerData))
        {
            gstreamerData.bus = gst_element_get_bus (gstreamerData.pipeline);
            
            while (TRUE)
            {
                if (gstreamerData.bus)
                {    
                    // Check for End Of Stream or error messages on bus
                    // The global exit_flag will be set in case of EOS or error. Exit if the flag is set
                    gstreamerData.message = gst_bus_poll (gstreamerData.bus, GST_MESSAGE_EOS | GST_MESSAGE_ERROR, -1);
                    if(GST_MESSAGE_TYPE (gstreamerData.message))
                    {
                        check_bus_cb(&gstreamerData);
                    }
                    gst_message_unref (gstreamerData.message);            
                }            
                
                if (exit_flag)
                    break;            
                
                sleep(1);                
            }                    
        }    
        remove_bin_playback_from_pipe(&gstreamerData);                    
    }    

err:    
    delete_pipeline(&gstreamerData);
    
    return TRUE;
}

A simple Makefile for compiling the code; you need to change the paths as per your OE setup.

#Notes for compilation:
#1. For compiling the code with this Makefile, a OE setup is mandatory.
#2. Before compiling, change the paths as per the setup of your environment.

CC = ${HOME}/oe-core/build/out-eglibc/sysroots/x86_64-linux/usr/bin/armv7ahf-vfp-neon-angstrom-linux-gnueabi/arm-angstrom-linux-gnueabi-gcc
INCLUDES = "-I${HOME}/oe-core/build/out-eglibc/sysroots/colibri-vf/usr/include" "-I${HOME}/oe-core/build/out-eglibc/sysroots/colibri-vf/usr/include/glib-2.0" "-I${HOME}/oe-core/build/out-eglibc/sysroots/colibri-vf/usr/lib/glib-2.0/include" "-I${HOME}/oe-core/build/out-eglibc/sysroots/colibri-vf/usr/include/gstreamer-0.10" "-I${HOME}/oe-core/build/out-eglibc/sysroots/colibri-vf/usr/include/libxml2"
LIB_PATH = "-L${HOME}/oe-core/build/out-eglibc/sysroots/colibri-vf/usr/lib"
LDFLAGS = -lpthread -lgobject-2.0 -lglib-2.0 -lgstreamer-0.10 -lgstapp-0.10
CFLAGS = -O3 -g --sysroot=${HOME}/oe-core/build/out-eglibc/sysroots/colibri-vf 

all:
    ${CC} ${CFLAGS} ${INCLUDES} ${LIB_PATH} ${LDFLAGS} -o audiovf audiovf.c

clean:
    rm -rf audiovf
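
With the paths adjusted for your environment, building and running comes down to the following (the Makefile above expects the source file to be named audiovf.c):

make
./audiovf /home/root/filename.wav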

Multithreaded Facial Recognition with OpenCV

It has been quite a while since I started maintaining this blog and giving out information and code to work with. Lately I started noticing that this can become tedious, so from now on I will try to give access to the projects or work I do using git. I have had a GitHub account since September 2013 but never got around to using it.

This project is a modification of the facial recognition project which comes with Mastering OpenCV with Practical Computer Vision Projects. The book is available from Packt and Amazon. The code base is here: https://github.com/MasteringOpenCV, maintained by Shervin Emami.

I was trying to do the same on a Toradex Colibri T30 module, which is based on the NVIDIA Tegra 3 and has four CPU cores. The original code is single threaded and as such doesn’t detect faces while the training process is going on. I made changes so that faces are still detected even while training is in progress; and mind you, the training process can go on for quite a while if there are more than 3-4 faces. So this is basically a two-threaded version of the main code, along with a few more changes as per my personal requirements. You could actually go one step further and utilize three cores, though right now I can’t recall what the job of the third core was supposed to be.

I do apologize for the code not being very clean. At first I tried to use the threading facility available in C++, but since I am no C++ expert I ran into problems which I wasn’t able to fix quickly, so I decided to use pthreads, which I am much more familiar and comfortable with. You will find the C++ threading part I was attempting commented out. Once I get some C++ mastery using Bruce Eckel’s Thinking in C++, I will try to redo it cleanly in C++, or clean it up anyway when I get time.

You can clone the project with:

git clone https://github.com/SanchayanMaity/MultithreadedFaceRecognition.git

You need to modify the Makefile to compile the project and use it on your platform, which can be a PC or an embedded board. Please do note that this project will only be useful if you run it on a platform with two or more cores.

Hope you guys find it useful. Cheers! And git and Linus are awesome.

Extracting frame from a gstreamer pipeline and displaying it with OpenCV

Not much to write or say in this post. I was trying to extract a frame from the gstreamer pipeline and then display it with OpenCV.

There are two approaches in the code below.

1. Register a callback function that runs whenever a new buffer becomes available in appsink, and use a locking mechanism to synchronize frame extraction with the display in the main thread.

2. The second one is to extract the buffer yourself in a while loop in the main thread.

The second one is active in the code below and the first one is commented out. To enable the first mechanism, uncomment the mutex locking and signal connect code and comment out the pull-buffer related code in the while loop.

Learn more about gstreamer from http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/index.html, especially section 19.

For some reason I am experiencing a memory leak with the code below (more so with the first approach) and haven’t been able to track it down yet. Also, the gstreamer pipeline elements will be different for your platform. Another problem: I get x-raw-yuv data from my gstreamer source element and I am only able to display a black-and-white image with OpenCV. Nonetheless, I thought this might be useful, and maybe someone can point out the error to me. I am not a gstreamer expert by any means.
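
A couple of observations on those problems. The cvCreateImage() inside the while loop allocates a fresh IplImage on every iteration and is never released with cvReleaseImage(), which is one likely source of the leak. As for the black-and-white image: the caps below request I420, whose first plane is the luma (Y) plane, so wrapping the buffer in a single-channel 8-bit image shows only the grayscale content. A possible fix, which I have not verified on the board, is to wrap the full I420 buffer and let OpenCV convert it (CV_YUV2BGR_I420 needs OpenCV 2.4 or newer):

// Hedged sketch: interpret the raw I420 buffer as a (height * 3/2) x width
// single-channel Mat and convert it to BGR for colour display
Mat yuv(576 * 3 / 2, 720, CV_8UC1, (char *)GST_BUFFER_DATA(gstImageBuffer));
Mat bgr;
cvtColor(yuv, bgr, CV_YUV2BGR_I420);
imshow("Toradex Face Detection Demo with Gstreamer", bgr);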


#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv/cv.h>
#include <gstreamer-0.10/gst/gst.h>
#include <gstreamer-0.10/gst/gstelement.h>
#include <gstreamer-0.10/gst/app/gstappsink.h>
#include <iostream>
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>

using namespace std;
using namespace cv;

/* Structure to contain all our information, so we can pass it around */
typedef struct _CustomData
{
    GstElement *appsink;
    GstElement *colorSpace;    
    GstElement *pipeline;
    GstElement *vsource_capsfilter, *mixercsp_capsfilter, *cspappsink_capsfilter;
    GstElement *mixer_capsfilter;
    GstElement *bin_capture;
    GstElement *video_source, *deinterlace;     
    GstElement *nv_video_mixer;    
    GstPad *pad;
    GstCaps *srcdeinterlace_caps, *mixercsp_caps, *cspappsink_caps;    
    GstBus *bus;
    GstMessage *msg;        
}gstData;

GstBuffer* buffer;        

pthread_mutex_t threadMutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t waitForGstBuffer = PTHREAD_COND_INITIALIZER; 

/* Global variables */
CascadeClassifier face_cascade;
IplImage *frame = NULL;     
string window_name =         "Toradex Face Detection Demo";
String face_cascade_name =    "/home/root/haarcascade_frontalface_alt2.xml";
const int BORDER =             8;          // Border between GUI elements to the edge of the image.

template <typename T> string toString(T t)
{
    ostringstream out;
    out << t;
    return out.str();
}

// Draw text into an image. Defaults to top-left-justified text, but you can give negative x coords for right-justified text,
// and/or negative y coords for bottom-justified text
// Returns the bounding rect around the drawn text
Rect drawString(Mat img, string text, Point coord, Scalar color, float fontScale = 0.6f, int thickness = 1, int fontFace = FONT_HERSHEY_COMPLEX)
{
    // Get the text size & baseline.
    int baseline = 0;
    Size textSize = getTextSize(text, fontFace, fontScale, thickness, &baseline);
    baseline += thickness;

    // Adjust the coords for left/right-justified or top/bottom-justified.
    if (coord.y >= 0) {
        // Coordinates are for the top-left corner of the text from the top-left of the image, so move down by one row.
        coord.y += textSize.height;
    }
    else {
        // Coordinates are for the bottom-left corner of the text from the bottom-left of the image, so come up from the bottom.
        coord.y += img.rows - baseline + 1;
    }
    // Become right-justified if desired.
    if (coord.x < 0) {
        coord.x += img.cols - textSize.width + 1;
    }

    // Get the bounding box around the text.
    Rect boundingRect = Rect(coord.x, coord.y - textSize.height, textSize.width, baseline + textSize.height);

    // Draw anti-aliased text.
    putText(img, text, coord, fontFace, fontScale, color, thickness, CV_AA);

    // Let the user know how big their text is, in case they want to arrange things.
    return boundingRect;
}

void create_pipeline(gstData *data)
{
    data->pipeline = gst_pipeline_new ("pipeline");
    gst_element_set_state (data->pipeline, GST_STATE_NULL);
}

gboolean CaptureGstBuffer(GstAppSink *sink, gstData *data)
{            
    //g_signal_emit_by_name (sink, "pull-buffer", &buffer);
    pthread_mutex_lock(&threadMutex);
    buffer = gst_app_sink_pull_buffer(sink);
    if (buffer)
    {        
        frame = cvCreateImage(cvSize(720, 576), IPL_DEPTH_16U, 3);
        if (frame == NULL)
        {
            g_printerr("IplImageFrame is null.\n");
        }
        else
        {
            //buffer = gst_app_sink_pull_buffer(sink);
            frame->imageData = (char*)GST_BUFFER_DATA(buffer);        
            if (frame->imageData == NULL)
            {
                g_printerr("IplImage data is null.\n");        
            }
        }        
        pthread_cond_signal(&waitForGstBuffer);            
    }            
    pthread_mutex_unlock(&threadMutex);
    return TRUE;
}

gboolean init_video_capture(gstData *data)
{    
    data->video_source = gst_element_factory_make("v4l2src", "video_source_live");
    data->vsource_capsfilter = gst_element_factory_make ("capsfilter", "vsource_cptr_capsfilter");
    data->deinterlace = gst_element_factory_make("deinterlace", "deinterlace_live");
    data->nv_video_mixer = gst_element_factory_make("nv_omx_videomixer", "nv_video_mixer_capture");    
    data->mixercsp_capsfilter = gst_element_factory_make ("capsfilter", "mixercsp_capsfilter");
    data->colorSpace = gst_element_factory_make("ffmpegcolorspace", "csp");        
    data->cspappsink_capsfilter = gst_element_factory_make ("capsfilter", "cspappsink_capsfilter");
    data->appsink = gst_element_factory_make("appsink", "asink");
        
    if (!data->video_source || !data->vsource_capsfilter || !data->deinterlace || !data->nv_video_mixer || !data->mixercsp_capsfilter || !data->appsink \
        || !data->colorSpace || !data->cspappsink_capsfilter)
    {
        g_printerr ("Not all elements for video were created.\n");
        return FALSE;
    }        
    
    g_signal_connect( data->pipeline, "deep-notify", G_CALLBACK( gst_object_default_deep_notify ), NULL );        
    
    gst_app_sink_set_emit_signals((GstAppSink*)data->appsink, true);
    gst_app_sink_set_drop((GstAppSink*)data->appsink, true);
    gst_app_sink_set_max_buffers((GstAppSink*)data->appsink, 1);    
    
    data->srcdeinterlace_caps = gst_caps_from_string("video/x-raw-yuv, width=(int)720, height=(int)576, format=(fourcc)I420, framerate=(fraction)1/1");        
    if (!data->srcdeinterlace_caps)
        g_printerr("1. Could not create media format string.\n");        
    g_object_set (G_OBJECT (data->vsource_capsfilter), "caps", data->srcdeinterlace_caps, NULL);
    gst_caps_unref(data->srcdeinterlace_caps);        
    
    data->mixercsp_caps = gst_caps_from_string("video/x-raw-yuv, width=(int)720, height=(int)576, format=(fourcc)I420, framerate=(fraction)1/1, pixel-aspect-ratio=(fraction)1/1");    
    if (!data->mixercsp_caps)
        g_printerr("2. Could not create media format string.\n");        
    g_object_set (G_OBJECT (data->mixercsp_capsfilter), "caps", data->mixercsp_caps, NULL);
    gst_caps_unref(data->mixercsp_caps);    
    
    data->cspappsink_caps = gst_caps_from_string("video/x-raw-yuv, width=(int)720, height=(int)576, format=(fourcc)I420, framerate=(fraction)1/1");        
    if (!data->cspappsink_caps)
        g_printerr("3. Could not create media format string.\n");        
    g_object_set (G_OBJECT (data->cspappsink_capsfilter), "caps", data->cspappsink_caps, NULL);    
    gst_caps_unref(data->cspappsink_caps);        
            
    data->bin_capture = gst_bin_new ("bin_capture");        
    
    /*if(g_signal_connect(data->appsink, "new-buffer", G_CALLBACK(CaptureGstBuffer), NULL) <= 0)
    {
        g_printerr("Could not connect signal handler.\n");
        exit(1);
    }*/
    
    gst_bin_add_many (GST_BIN (data->bin_capture), data->video_source, data->vsource_capsfilter, data->deinterlace, data->nv_video_mixer, \
                        data->mixercsp_capsfilter, data->colorSpace, data->cspappsink_capsfilter, data->appsink, NULL);
    
    if (gst_element_link_many(data->video_source, data->vsource_capsfilter, data->deinterlace, NULL) != TRUE)
    {
        g_printerr ("video_src to deinterlace not linked.\n");
        return FALSE;
    }        
    
    if (gst_element_link_many (data->deinterlace, data->nv_video_mixer, NULL) != TRUE)
    {
        g_printerr ("deinterlace to video_mixer not linked.\n");
        return FALSE;
    }        
    
    if (gst_element_link_many (data->nv_video_mixer, data->mixercsp_capsfilter, data->colorSpace, NULL) != TRUE)
    {
        g_printerr ("video_mixer to colorspace not linked.\n");
        return FALSE;    
    }
    
    if (gst_element_link_many (data->colorSpace, data->appsink, NULL) != TRUE)
    {
        g_printerr ("colorspace to appsink not linked.\n");
        return FALSE;    
    }
    
    cout << "Returns from init_video_capture." << endl;
    return TRUE;
}

void delete_pipeline(gstData *data)
{
    gst_element_set_state (data->pipeline, GST_STATE_NULL);
    g_print ("Pipeline set to NULL\n");
    gst_object_unref (data->bus);
    gst_object_unref (data->pipeline);
    g_print ("Pipeline deleted\n");
}

gboolean add_bin_capture_to_pipe(gstData *data)
{
    if((gst_bin_add(GST_BIN (data->pipeline), data->bin_capture)) != TRUE)
    {
        g_print("bin_capture not added to pipeline\n");
    }
    
    if(gst_element_set_state (data->pipeline, GST_STATE_NULL) == GST_STATE_CHANGE_SUCCESS)
    {        
        return TRUE;
    }
    else
    {
        cout << "Failed to set pipeline state to NULL." << endl;
        return FALSE;        
    }
}

gboolean remove_bin_capture_from_pipe(gstData *data)
{
    gst_element_set_state (data->pipeline, GST_STATE_NULL);
    gst_element_set_state (data->bin_capture, GST_STATE_NULL);
    if((gst_bin_remove(GST_BIN (data->pipeline), data->bin_capture)) != TRUE)
    {
        g_print("bin_capture not removed from pipeline\n");
    }    
    return TRUE;
}

gboolean start_capture_pipe(gstData *data)
{
    if(gst_element_set_state (data->pipeline, GST_STATE_PLAYING) == GST_STATE_CHANGE_SUCCESS)
        return TRUE;
    else
    {
        cout << "Failed to set pipeline state to PLAYING." << endl;
        return FALSE;
    }
}

gboolean stop_capture_pipe(gstData *data)
{
    gst_element_set_state (data->bin_capture, GST_STATE_NULL);
    gst_element_set_state (data->pipeline, GST_STATE_NULL);
    return TRUE;
}

gboolean deinit_video_live(gstData *data)
{
    gst_element_set_state (data->pipeline, GST_STATE_NULL);
    gst_element_set_state (data->bin_capture, GST_STATE_NULL);
    gst_object_unref (data->bin_capture);
    return TRUE;
}

gboolean check_bus_cb(gstData *data)
{
    GError *err = NULL;                
    gchar *dbg = NULL;   
          
    g_print("Got message: %s\n", GST_MESSAGE_TYPE_NAME(data->msg));
    switch(GST_MESSAGE_TYPE (data->msg))
    {
        case GST_MESSAGE_EOS:       
            g_print ("END OF STREAM... \n");
            break;

        case GST_MESSAGE_ERROR:
            gst_message_parse_error (data->msg, &err, &dbg);
            if (err)
            {
                g_printerr ("ERROR: %s\n", err->message);
                g_error_free (err);
            }
            if (dbg)
            {
                g_printerr ("[Debug details: %s]\n", dbg);
                g_free (dbg);
            }
            break;

        default:
            g_printerr ("Unexpected message of type %d", GST_MESSAGE_TYPE (data->msg));
            break;
    }
    return TRUE;
}

void get_pipeline_bus(gstData *data)
{
    data->bus = gst_element_get_bus (data->pipeline);
    data->msg = gst_bus_poll (data->bus, GST_MESSAGE_EOS | GST_MESSAGE_ERROR, -1);
    if(GST_MESSAGE_TYPE (data->msg))
    {
        check_bus_cb(data);
    }
    gst_message_unref (data->msg);
}

int main(int argc, char *argv[])
{        
    //Mat frame;
    VideoCapture capture;    
    gstData gstreamerData;
    GstBuffer *gstImageBuffer;
    
    //XInitThreads();
    gst_init (&argc, &argv);
    create_pipeline(&gstreamerData);
    if(init_video_capture(&gstreamerData))
    {        
        add_bin_capture_to_pipe(&gstreamerData);    
        start_capture_pipe(&gstreamerData);
        //get_pipeline_bus(&gstreamerData);    
    
        cout << "Starting while loop..." << endl;
        cvNamedWindow("Toradex Face Detection Demo with Gstreamer", 0);    
    
        while(true)
        {    
            //pthread_mutex_lock(&threadMutex);
            //pthread_cond_wait(&waitForGstBuffer, &threadMutex);
            
            gstImageBuffer = gst_app_sink_pull_buffer((GstAppSink*)gstreamerData.appsink);
        
            if (gstImageBuffer != NULL)
            {        
                frame = cvCreateImage(cvSize(720, 576), IPL_DEPTH_8U, 1);
                    
                if (frame == NULL)
                {
                    g_printerr("IplImageFrame is null.\n");
                }
                else
                {        
                    frame->imageData = (char*)GST_BUFFER_DATA(gstImageBuffer);        
                    if (frame->imageData == NULL)
                    {
                        g_printerr("IplImage data is null.\n");            
                    }                    
                    cvShowImage("Toradex Face Detection Demo with Gstreamer", frame);  
                    cvWaitKey(1);                    
                    gst_buffer_unref(gstImageBuffer);
                }
            }
            else
            {
                cout << "Appsink buffer didn't return buffer." << endl;
            }
            /*
            if (frame)
            {
                cvShowImage("Toradex Face Detection Demo with Gstreamer", frame);
            }
            gst_buffer_unref(buffer);
            buffer = NULL;            
            pthread_mutex_unlock(&threadMutex);    
            cvWaitKey(1);*/                                    
        }
    }
    else
    {
        exit(1);
    }
              
    //Destroy the window
    cvDestroyWindow("Toradex Face Detection Demo with Gstreamer");
    remove_bin_capture_from_pipe(&gstreamerData);
    deinit_video_live(&gstreamerData);
    delete_pipeline(&gstreamerData);

    return 0;
}

Implementing mmap for transferring data from user space to kernel space

I was recently working on an application where streams from four multiplexed analog video channels had to be displayed in four windows. I was trying to do this using OpenCV, with the ADV7180 as the analog video decoder/multiplexer. For switching the channels, I was using an ioctl() call in a while loop with a certain time interval, while capturing the frames and putting them in separate queues as per the channel selected. This was done in the main thread, while separate threads pulled the frames from the queues and rendered them. The capturing and rendering were done with OpenCV. I was not able to achieve a decent frame rate with this, and I had to put delays in certain places to avoid frame glitches in the multiple windows displaying the frames.

Thinking that maybe the ioctl() call and the associated context switching were the reason I had to use delays and things were going slow, I decided to look into ways of transferring data between user space and kernel space faster than an ioctl() call. An mmap() implementation in a driver, that is, a memory mapping between user and kernel space, is the fastest way to transfer data: it incurs neither a context switch nor any copying of memory buffers. Below is sample code showing how an mmap() implementation for a driver is done.

First, the driver code.


#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/fs.h>
#include <linux/debugfs.h>
#include <linux/slab.h>
#include <linux/mm.h>  

#ifndef VM_RESERVED
# define  VM_RESERVED   (VM_DONTEXPAND | VM_DONTDUMP)
#endif

struct dentry  *file;

struct mmap_info
{
    char *data;            
    int reference;      
};

void mmap_open(struct vm_area_struct *vma)
{
    struct mmap_info *info = (struct mmap_info *)vma->vm_private_data;
    info->reference++;
}

void mmap_close(struct vm_area_struct *vma)
{
    struct mmap_info *info = (struct mmap_info *)vma->vm_private_data;
    info->reference--;
}

static int mmap_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
    struct page *page;
    struct mmap_info *info;    
    
    info = (struct mmap_info *)vma->vm_private_data;
    if (!info->data)
    {
        printk("No data\n");
        return 0;    
    }
    
    page = virt_to_page(info->data);    
    
    get_page(page);
    vmf->page = page;            
    
    return 0;
}

struct vm_operations_struct mmap_vm_ops =
{
    .open =     mmap_open,
    .close =    mmap_close,
    .fault =    mmap_fault,    
};

int op_mmap(struct file *filp, struct vm_area_struct *vma)
{
    vma->vm_ops = &mmap_vm_ops;
    vma->vm_flags |= VM_RESERVED;    
    vma->vm_private_data = filp->private_data;
    mmap_open(vma);
    return 0;
}

int mmapfop_close(struct inode *inode, struct file *filp)
{
    struct mmap_info *info = filp->private_data;
    
    free_page((unsigned long)info->data);
    kfree(info);
    filp->private_data = NULL;
    return 0;
}

int mmapfop_open(struct inode *inode, struct file *filp)
{
    struct mmap_info *info = kmalloc(sizeof(struct mmap_info), GFP_KERNEL);    
    info->data = (char *)get_zeroed_page(GFP_KERNEL);
    memcpy(info->data, "hello from kernel this is file: ", 32);
    memcpy(info->data + 32, filp->f_dentry->d_name.name, strlen(filp->f_dentry->d_name.name));
    /* assign this info struct to the file */
    filp->private_data = info;
    return 0;
}

static const struct file_operations mmap_fops = {
    .open = mmapfop_open,
    .release = mmapfop_close,
    .mmap = op_mmap,
};

static int __init mmapexample_module_init(void)
{
    file = debugfs_create_file("mmap_example", 0644, NULL, NULL, &mmap_fops);
    return 0;
}

static void __exit mmapexample_module_exit(void)
{
    debugfs_remove(file);
}

module_init(mmapexample_module_init);
module_exit(mmapexample_module_exit);
MODULE_LICENSE("GPL");


Below is the user-space application showing its use.


#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <sys/mman.h>

#define PAGE_SIZE     4096

int main ( int argc, char **argv )
{
    int configfd;
    char * address = NULL;

    configfd = open("/sys/kernel/debug/mmap_example", O_RDWR);
    if(configfd < 0)
    {
        perror("Open call failed");
        return -1;
    }
    
    address = mmap(NULL, PAGE_SIZE, PROT_READ|PROT_WRITE, MAP_SHARED, configfd, 0);
    if (address == MAP_FAILED)
    {
        perror("mmap operation failed");
        return -1;
    }

    printf("Initial message: %s\n", address);
    memcpy(address + 11 , "*user*", 6);
    printf("Changed message: %s\n", address);
    close(configfd);    
    return 0;
}

The above code should run fine on both desktop and embedded Linux. To learn more, refer to Chapter 15 of the Linux Device Drivers book.
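
Assuming the module is built as mmap_example.ko and the test program as mmap_test (both names are my choice here) and that debugfs is mounted, a test run looks like the following; the two messages follow directly from the strings set up in mmapfop_open() and the memcpy() in main():

mount -t debugfs none /sys/kernel/debug
insmod mmap_example.ko
./mmap_test
Initial message: hello from kernel this is file: mmap_example
Changed message: hello from *user* this is file: mmap_example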

Ultimately though, I didn’t use mmap() but decided to do the channel switching work in the driver itself using kernel timers and workqueues. Not that this improved the frame rate much either; maybe a one frame per second improvement. Maybe someone will give me some feedback on this task some day. Till then, I hope you find this post useful.