Playing a .wav/.mp3 file using GStreamer in code

You can also clone this with

git clone https://github.com/SanchayanMaity/gstreamer-audio-playback.git

Though I used this on a Toradex Colibri Vybrid module, you can use the same code on a BeagleBoard or a desktop with the correct setup.

/*
Notes for compilation:
1. For compiling the code along with the Makefile given, an OE setup is mandatory.
2. Before compiling, change the paths as per the setup of your environment.

Please refer to the GStreamer Application Development Manual at the link below before proceeding further
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/index.html

Comprehensive documentation for GStreamer
http://gstreamer.freedesktop.org/documentation/

The following elements/plugins/packages are expected to be in the module image for this to work
gstreamer
gst-plugins-base
gst-plugins-good-wavparse
gst-plugins-good-alsa
gst-plugins-good-audioconvert
gst-plugins-ugly-mad

Pipeline to play .wav audio file from command line
gst-launch filesrc location="location of file" ! wavparse ! alsasink 

Pipeline to play .mp3 audio file from command line
gst-launch filesrc location="location of file" ! mad ! audioconvert ! alsasink 

It is also assumed that the USB audio device is the only audio device being used on the system. If not, the
"device" parameter for alsasink will change; check which device to use with cat /proc/asound/cards
and then set it as follows

In the GStreamer pipeline:

Pipeline to play .wav audio file from command line
gst-launch filesrc location="location of file" ! wavparse ! alsasink device=hw:1,0

Pipeline to play .mp3 audio file from command line
gst-launch filesrc location="location of file" ! mad ! audioconvert ! alsasink device=hw:1,0

In code, during initialisation in init_audio_playback_pipeline:
g_object_set (G_OBJECT (data->alsasink), "device", "hw:0,0", NULL);
                            OR
g_object_set (G_OBJECT (data->alsasink), "device", "hw:1,0", NULL);

The pipeline will remain the same for a different audio device; only the device parameter for alsasink changes
*/

#include <gstreamer-0.10/gst/gst.h>
#include <gstreamer-0.10/gst/gstelement.h>
#include <stdio.h>
#include <unistd.h>
#include <string.h>

#define NUMBER_OF_BYTES_FOR_FILE_LOCATION    256

volatile gboolean exit_flag = FALSE;

typedef struct  
{
    GstElement *file_source;
    GstElement *pipeline;
    GstElement *audio_decoder;    
    GstElement *audioconvert;
    GstElement *alsasink;    
    GstElement *bin_playback;    
    GstBus *bus;
    GstMessage *message;        
    gchar filelocation[NUMBER_OF_BYTES_FOR_FILE_LOCATION];
}gstData;

gstData gstreamerData;

// Create the pipeline element
gboolean create_pipeline(gstData *data)
{        
    data->pipeline = gst_pipeline_new("audio_pipeline");    
    if (data->pipeline == NULL)
    {            
        return FALSE;
    }
    gst_element_set_state (data->pipeline, GST_STATE_NULL);
    return TRUE;
}

// Callback function for dynamically linking the "wavparse" element and "alsasink" element
void on_pad_added (GstElement *src_element, GstPad *src_pad, gpointer data)
{
    g_print ("\nLinking dynamic pad between wavparse and alsasink\n");

    GstElement *sink_element = (GstElement *) data;     // Is alsasink
    GstPad *sink_pad = gst_element_get_static_pad (sink_element, "sink");
    gst_pad_link (src_pad, sink_pad);

    gst_object_unref (sink_pad);
    src_element = NULL;     // Prevent "unused" warning here
}

// Setup the pipeline
gboolean init_audio_playback_pipeline(gstData *data)
{
    if (data == NULL)
        return FALSE;
        
    data->file_source = gst_element_factory_make("filesrc", "filesource");    
    
    if (strstr(data->filelocation, ".mp3"))
    {
        g_print ("\nMP3 Audio decoder selected\n");
        data->audio_decoder = gst_element_factory_make("mad", "audiomp3decoder");
    }
    
    if (strstr(data->filelocation, ".wav"))
    {
        g_print ("\nWAV Audio decoder selected\n");
        data->audio_decoder = gst_element_factory_make("wavparse", "audiowavdecoder");
    }
        
    data->audioconvert = gst_element_factory_make("audioconvert", "audioconverter");    
    
    data->alsasink = gst_element_factory_make("alsasink", "audiosink");
    
    if ( !data->file_source || !data->audio_decoder || !data->audioconvert || !data->alsasink )
    {
        g_printerr ("\nNot all elements for audio pipeline were created\n");
        return FALSE;
    }    
    
    // Uncomment this if you want to see some debugging info
    //g_signal_connect( data->pipeline, "deep-notify", G_CALLBACK( gst_object_default_deep_notify ), NULL );    
    
    g_print("\nFile location: %s\n", data->filelocation);
    g_object_set (G_OBJECT (data->file_source), "location", data->filelocation, NULL);            
    
    data->bin_playback = gst_bin_new ("bin_playback");    
    
    if (strstr(data->filelocation, ".mp3"))
    {
        gst_bin_add_many(GST_BIN(data->bin_playback), data->file_source, data->audio_decoder, data->audioconvert, data->alsasink, NULL);
    
        if (gst_element_link_many (data->file_source, data->audio_decoder, NULL) != TRUE)
        {
            g_printerr("\nFile source and audio decoder element could not link\n");
            return FALSE;
        }
    
        if (gst_element_link_many (data->audio_decoder, data->audioconvert, NULL) != TRUE)
        {
            g_printerr("\nAudio decoder and audio converter element could not link\n");
            return FALSE;
        }
    
        if (gst_element_link_many (data->audioconvert, data->alsasink, NULL) != TRUE)
        {
            g_printerr("\nAudio converter and audio sink element could not link\n");
            return FALSE;
        }
    }
    
    if (strstr(data->filelocation, ".wav"))
    {
        gst_bin_add_many(GST_BIN(data->bin_playback), data->file_source, data->audio_decoder, data->alsasink, NULL);
    
        if (gst_element_link_many (data->file_source, data->audio_decoder, NULL) != TRUE)
        {
            g_printerr("\nFile source and audio decoder element could not link\n");
            return FALSE;
        }
    
        // Avoid checking of return value for linking of "wavparse" element and "alsasink" element
        // Refer http://stackoverflow.com/questions/3656051/unable-to-play-wav-file-using-gstreamer-apis
        
        gst_element_link_many (data->audio_decoder, data->alsasink, NULL);
        
        g_signal_connect(data->audio_decoder, "pad-added", G_CALLBACK(on_pad_added), data->alsasink);    
    }    
    
    return TRUE;
}

// Starts the pipeline
gboolean start_playback_pipe(gstData *data)
{
    // http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/GstElement.html#gst-element-set-state
    gst_element_set_state (data->pipeline, GST_STATE_PLAYING);
    // With GST_CLOCK_TIME_NONE this call blocks until the state change finishes,
    // so no polling loop is needed; bail out on failure instead of spinning forever
    if (gst_element_get_state (data->pipeline, NULL, NULL, GST_CLOCK_TIME_NONE) == GST_STATE_CHANGE_FAILURE)
    {
        g_printerr ("\nFailed to set pipeline to PLAYING\n");
        return FALSE;
    }
    return TRUE;
}

// Add the pipeline to the bin
gboolean add_bin_playback_to_pipe(gstData *data)
{
    if((gst_bin_add(GST_BIN (data->pipeline), data->bin_playback)) != TRUE)
    {
        g_print("\nbin_playback not added to pipeline\n");
        return FALSE;    
    }
    
    if(gst_element_set_state (data->pipeline, GST_STATE_NULL) == GST_STATE_CHANGE_SUCCESS)
    {        
        return TRUE;
    }
    else
    {
        g_print("\nFailed to set pipeline state to NULL\n");
        return FALSE;        
    }
}

// Disconnect the pipeline and the bin
void remove_bin_playback_from_pipe(gstData *data)
{
    gst_element_set_state (data->pipeline, GST_STATE_NULL);
    gst_element_set_state (data->bin_playback, GST_STATE_NULL);
    if((gst_bin_remove(GST_BIN (data->pipeline), data->bin_playback)) != TRUE)
    {
        g_print("\nbin_playback not removed from pipeline\n");
    }    
}

// Cleanup
void delete_pipeline(gstData *data)
{
    if (data->pipeline)
        gst_element_set_state (data->pipeline, GST_STATE_NULL);    
    if (data->bus)
        gst_object_unref (data->bus);
    if (data->pipeline)
        gst_object_unref (data->pipeline);    
}

// Function for checking the specific message on bus
// We look for EOS or Error messages
gboolean check_bus_cb(gstData *data)
{
    GError *err = NULL;                
    gchar *dbg = NULL;   
          
    g_print("\nGot message: %s\n", GST_MESSAGE_TYPE_NAME(data->message));
    switch(GST_MESSAGE_TYPE (data->message))
    {
        case GST_MESSAGE_EOS:       
            g_print ("\nEnd of stream... \n\n");
            exit_flag = TRUE;
            break;

        case GST_MESSAGE_ERROR:
            gst_message_parse_error (data->message, &err, &dbg);
            if (err)
            {
                g_printerr ("\nERROR: %s\n", err->message);
                g_error_free (err);
            }
            if (dbg)
            {
                g_printerr ("\nDebug details: %s\n", dbg);
                g_free (dbg);
            }
            exit_flag = TRUE;
            break;

        default:
            g_printerr ("\nUnexpected message of type %d\n", GST_MESSAGE_TYPE (data->message));
            break;
    }
    return TRUE;
}

int main(int argc, char *argv[])
{    
    if (argc != 2)
    {
        g_print("\nUsage: ./audiovf /home/root/filename.mp3\n");
        g_print("Usage: ./audiovf /home/root/filename.wav\n");
        g_print("Note: Number of bytes for file location: %d\n\n", NUMBER_OF_BYTES_FOR_FILE_LOCATION);
        return -1;
    }
    
    if ((!strstr(argv[1], ".mp3")) && (!strstr(argv[1], ".wav")))
    {
        g_print("\nOnly mp3 & wav files can be played\n");
        g_print("Specify the mp3 or wav file to be played\n");
        g_print("Usage: ./audiovf /home/root/filename.mp3\n");
        g_print("Usage: ./audiovf /home/root/filename.wav\n");
        g_print("Note: Number of bytes for file location: %d\n\n", NUMBER_OF_BYTES_FOR_FILE_LOCATION);
        return -1;
    }    
    
    // Initialise gstreamer. Mandatory first call before using any other gstreamer functionality
    gst_init (&argc, &argv);
    
    memset(gstreamerData.filelocation, 0, sizeof(gstreamerData.filelocation));
    // Bounded copy so an over-long path cannot overflow filelocation
    strncpy(gstreamerData.filelocation, argv[1], NUMBER_OF_BYTES_FOR_FILE_LOCATION - 1);
    
    if (!create_pipeline(&gstreamerData))
        goto err;        
    
    if(init_audio_playback_pipeline(&gstreamerData))
    {    
        if(!add_bin_playback_to_pipe(&gstreamerData))
            goto err;        
        
        if(start_playback_pipe(&gstreamerData))
        {
            gstreamerData.bus = gst_element_get_bus (gstreamerData.pipeline);
            
            while (TRUE)
            {
                if (gstreamerData.bus)
                {    
                    // Check for End Of Stream or error messages on bus
                    // The global exit_flag will be set in case of EOS or error. Exit if the flag is set
                    gstreamerData.message = gst_bus_poll (gstreamerData.bus, GST_MESSAGE_EOS | GST_MESSAGE_ERROR, -1);
                    if (gstreamerData.message)
                    {
                        check_bus_cb(&gstreamerData);
                        gst_message_unref (gstreamerData.message);
                    }
                }            
                
                if (exit_flag)
                    break;            
                
                sleep(1);                
            }                    
        }    
        remove_bin_playback_from_pipe(&gstreamerData);                    
    }    

err:    
    delete_pipeline(&gstreamerData);
    
    return 0;
}

A simple Makefile for compiling the code. You need to change the path as per your OE setup.

#Notes for compilation:
#1. For compiling the code with this Makefile, an OE setup is mandatory.
#2. Before compiling, change the paths as per the setup of your environment.

CC = ${HOME}/oe-core/build/out-eglibc/sysroots/x86_64-linux/usr/bin/armv7ahf-vfp-neon-angstrom-linux-gnueabi/arm-angstrom-linux-gnueabi-gcc
INCLUDES = "-I${HOME}/oe-core/build/out-eglibc/sysroots/colibri-vf/usr/include" "-I${HOME}/oe-core/build/out-eglibc/sysroots/colibri-vf/usr/include/glib-2.0" "-I${HOME}/oe-core/build/out-eglibc/sysroots/colibri-vf/usr/lib/glib-2.0/include" "-I${HOME}/oe-core/build/out-eglibc/sysroots/colibri-vf/usr/include/gstreamer-0.10" "-I${HOME}/oe-core/build/out-eglibc/sysroots/colibri-vf/usr/include/libxml2"
LIB_PATH = "-L${HOME}/oe-core/build/out-eglibc/sysroots/colibri-vf/usr/lib"
LDFLAGS = -lpthread -lgobject-2.0 -lglib-2.0 -lgstreamer-0.10 -lgstapp-0.10
CFLAGS = -O3 -g --sysroot=${HOME}/oe-core/build/out-eglibc/sysroots/colibri-vf 

all:
    ${CC} ${CFLAGS} ${INCLUDES} ${LIB_PATH} ${LDFLAGS} -o audiovf audiovf.c

clean:
    rm -rf audiovf

Multithreaded Facial Recognition with OpenCV

It has been quite a while now that I have been maintaining this blog, posting information and code to work with. Lately I started noticing that this can become tedious. So from now on I will try to give access to the projects or work I do using git. I have had a git account since September 2013 but never got around to using it.

This project is a modification of the face recognition project that ships with Mastering OpenCV with Practical Computer Vision Projects. The book is available from Packt Publishing and Amazon. The code base is here https://github.com/MasteringOpenCV and is maintained by Shervin Emami.

I was trying to do the same on a Toradex Colibri T30 module, based on the NVIDIA Tegra 3, which has four CPU cores. The original code is single threaded and as such cannot detect faces while the training process is going on. I changed this so that faces can still be detected even while training is in progress. And mind you, the training process can go on for quite a while if there are more than 3-4 faces. So this is basically a two-threaded version of the main code, along with a few more changes as per my personal requirements. You could actually go one step further and utilize three cores, though right now I can't recall what the job of the third core was supposed to be.

I do apologize for the code not being very clean. At first I tried to use the threading facilities available in C++, but since I am no C++ expert I ran into problems which I wasn't able to fix quickly. I decided to use pthreads, which I am much more familiar and comfortable with. You will find the C++ threading code I was attempting commented out. Once I get some C++ mastery using Bruce Eckel's Thinking in C++, I will redo it cleanly in just C++, or clean it up anyway when I get time.

You can clone the project with:

git clone https://github.com/SanchayanMaity/MultithreadedFaceRecognition.git

You need to modify the Makefile to compile the project and use it on your platform, which can be a PC or an embedded board. Please note that this project will only be useful if you are running it on a platform with two or more cores.

Hope you guys find it useful. Cheers! And git and Linus are awesome.

Extracting frame from a gstreamer pipeline and displaying it with OpenCV

Not much to write or say in this post. I was trying to extract a frame from a GStreamer pipeline and then display it with OpenCV.

There are two approaches in the code below.

1. Register a callback function with appsink for whenever a new buffer becomes available, and then use a locking mechanism to synchronize frame extraction with the display in the main thread.

2. Extract the buffer yourself in a while loop in the main thread.

The second approach is active in the code below; the first is commented out. To enable the first mechanism, uncomment the mutex locking and the signal-connect code, and comment out the pull-buffer related code in the while loop.

Learn more about GStreamer from http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/index.html and especially refer to section 19.

For some reason, I am experiencing a memory leak with the code below (more so with the first approach) and haven't got around to fixing it. Also, the GStreamer pipeline elements will be different for your platform. Another problem is that I get x-raw-yuv data from my source element and am only able to display a black and white image with OpenCV. Nonetheless, I thought this might be useful, and maybe someone can point out the error to me. I am not a GStreamer expert by any means.


#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv/cv.h>
#include <gstreamer-0.10/gst/gst.h>
#include <gstreamer-0.10/gst/gstelement.h>
#include <gstreamer-0.10/gst/app/gstappsink.h>
#include <iostream>
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>

using namespace std;
using namespace cv;

/* Structure to contain all our information, so we can pass it around */
typedef struct _CustomData
{
    GstElement *appsink;
    GstElement *colorSpace;    
    GstElement *pipeline;
    GstElement *vsource_capsfilter, *mixercsp_capsfilter, *cspappsink_capsfilter;
    GstElement *mixer_capsfilter;
    GstElement *bin_capture;
    GstElement *video_source, *deinterlace;     
    GstElement *nv_video_mixer;    
    GstPad *pad;
    GstCaps *srcdeinterlace_caps, *mixercsp_caps, *cspappsink_caps;    
    GstBus *bus;
    GstMessage *msg;        
}gstData;

GstBuffer* buffer;        

pthread_mutex_t threadMutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t waitForGstBuffer = PTHREAD_COND_INITIALIZER; 

/* Global variables */
CascadeClassifier face_cascade;
IplImage *frame = NULL;     
string window_name =         "Toradex Face Detection Demo";
String face_cascade_name =    "/home/root/haarcascade_frontalface_alt2.xml";
const int BORDER =             8;          // Border between GUI elements to the edge of the image.

template <typename T> string toString(T t)
{
    ostringstream out;
    out << t;
    return out.str();
}

// Draw text into an image. Defaults to top-left-justified text, but you can give negative x coords for right-justified text,
// and/or negative y coords for bottom-justified text
// Returns the bounding rect around the drawn text
Rect drawString(Mat img, string text, Point coord, Scalar color, float fontScale = 0.6f, int thickness = 1, int fontFace = FONT_HERSHEY_COMPLEX)
{
    // Get the text size & baseline.
    int baseline = 0;
    Size textSize = getTextSize(text, fontFace, fontScale, thickness, &baseline);
    baseline += thickness;

    // Adjust the coords for left/right-justified or top/bottom-justified.
    if (coord.y >= 0) {
        // Coordinates are for the top-left corner of the text from the top-left of the image, so move down by one row.
        coord.y += textSize.height;
    }
    else {
        // Coordinates are for the bottom-left corner of the text from the bottom-left of the image, so come up from the bottom.
        coord.y += img.rows - baseline + 1;
    }
    // Become right-justified if desired.
    if (coord.x < 0) {
        coord.x += img.cols - textSize.width + 1;
    }

    // Get the bounding box around the text.
    Rect boundingRect = Rect(coord.x, coord.y - textSize.height, textSize.width, baseline + textSize.height);

    // Draw anti-aliased text.
    putText(img, text, coord, fontFace, fontScale, color, thickness, CV_AA);

    // Let the user know how big their text is, in case they want to arrange things.
    return boundingRect;
}

void create_pipeline(gstData *data)
{
    data->pipeline = gst_pipeline_new ("pipeline");
    gst_element_set_state (data->pipeline, GST_STATE_NULL);
}

gboolean CaptureGstBuffer(GstAppSink *sink, gstData *data)
{            
    //g_signal_emit_by_name (sink, "pull-buffer", &buffer);
    pthread_mutex_lock(&threadMutex);
    buffer = gst_app_sink_pull_buffer(sink);
    if (buffer)
    {        
        frame = cvCreateImageHeader(cvSize(720, 576), IPL_DEPTH_8U, 1);    // Header only; data comes from the GstBuffer (luma plane of I420)
        if (frame == NULL)
        {
            g_printerr("IplImageFrame is null.\n");
        }
        else
        {
            //buffer = gst_app_sink_pull_buffer(sink);
            frame->imageData = (char*)GST_BUFFER_DATA(buffer);        
            if (frame->imageData == NULL)
            {
                g_printerr("IplImage data is null.\n");        
            }
        }        
        pthread_cond_signal(&waitForGstBuffer);            
    }            
    pthread_mutex_unlock(&threadMutex);
    return TRUE;
}

gboolean init_video_capture(gstData *data)
{    
    data->video_source = gst_element_factory_make("v4l2src", "video_source_live");
    data->vsource_capsfilter = gst_element_factory_make ("capsfilter", "vsource_cptr_capsfilter");
    data->deinterlace = gst_element_factory_make("deinterlace", "deinterlace_live");
    data->nv_video_mixer = gst_element_factory_make("nv_omx_videomixer", "nv_video_mixer_capture");    
    data->mixercsp_capsfilter = gst_element_factory_make ("capsfilter", "mixercsp_capsfilter");
    data->colorSpace = gst_element_factory_make("ffmpegcolorspace", "csp");        
    data->cspappsink_capsfilter = gst_element_factory_make ("capsfilter", "cspappsink_capsfilter");
    data->appsink = gst_element_factory_make("appsink", "asink");
        
    if (!data->video_source || !data->vsource_capsfilter || !data->deinterlace || !data->nv_video_mixer || !data->mixercsp_capsfilter || !data->appsink \
        || !data->colorSpace || !data->cspappsink_capsfilter)
    {
        g_printerr ("Not all elements for video were created.\n");
        return FALSE;
    }        
    
    g_signal_connect( data->pipeline, "deep-notify", G_CALLBACK( gst_object_default_deep_notify ), NULL );        
    
    gst_app_sink_set_emit_signals((GstAppSink*)data->appsink, true);
    gst_app_sink_set_drop((GstAppSink*)data->appsink, true);
    gst_app_sink_set_max_buffers((GstAppSink*)data->appsink, 1);    
    
    data->srcdeinterlace_caps = gst_caps_from_string("video/x-raw-yuv, width=(int)720, height=(int)576, format=(fourcc)I420, framerate=(fraction)1/1");        
    if (!data->srcdeinterlace_caps)
        g_printerr("1. Could not create media format string.\n");        
    g_object_set (G_OBJECT (data->vsource_capsfilter), "caps", data->srcdeinterlace_caps, NULL);
    gst_caps_unref(data->srcdeinterlace_caps);        
    
    data->mixercsp_caps = gst_caps_from_string("video/x-raw-yuv, width=(int)720, height=(int)576, format=(fourcc)I420, framerate=(fraction)1/1, pixel-aspect-ratio=(fraction)1/1");    
    if (!data->mixercsp_caps)
        g_printerr("2. Could not create media format string.\n");        
    g_object_set (G_OBJECT (data->mixercsp_capsfilter), "caps", data->mixercsp_caps, NULL);
    gst_caps_unref(data->mixercsp_caps);    
    
    data->cspappsink_caps = gst_caps_from_string("video/x-raw-yuv, width=(int)720, height=(int)576, format=(fourcc)I420, framerate=(fraction)1/1");        
    if (!data->cspappsink_caps)
        g_printerr("3. Could not create media format string.\n");        
    g_object_set (G_OBJECT (data->cspappsink_capsfilter), "caps", data->cspappsink_caps, NULL);    
    gst_caps_unref(data->cspappsink_caps);        
            
    data->bin_capture = gst_bin_new ("bin_capture");        
    
    /*if(g_signal_connect(data->appsink, "new-buffer", G_CALLBACK(CaptureGstBuffer), NULL) <= 0)
    {
        g_printerr("Could not connect signal handler.\n");
        exit(1);
    }*/
    
    gst_bin_add_many (GST_BIN (data->bin_capture), data->video_source, data->vsource_capsfilter, data->deinterlace, data->nv_video_mixer, \
                        data->mixercsp_capsfilter, data->colorSpace, data->cspappsink_capsfilter, data->appsink, NULL);
    
    if (gst_element_link_many(data->video_source, data->vsource_capsfilter, data->deinterlace, NULL) != TRUE)
    {
        g_printerr ("video_src to deinterlace not linked.\n");
        return FALSE;
    }        
    
    if (gst_element_link_many (data->deinterlace, data->nv_video_mixer, NULL) != TRUE)
    {
        g_printerr ("deinterlace to video_mixer not linked.\n");
        return FALSE;
    }        
    
    if (gst_element_link_many (data->nv_video_mixer, data->mixercsp_capsfilter, data->colorSpace, NULL) != TRUE)
    {
        g_printerr ("video_mixer to colorspace not linked.\n");
        return FALSE;    
    }
    
    if (gst_element_link_many (data->colorSpace, data->appsink, NULL) != TRUE)
    {
        g_printerr ("colorspace to appsink not linked.\n");
        return FALSE;    
    }
    
    cout << "Returns from init_video_capture." << endl;
    return TRUE;
}

void delete_pipeline(gstData *data)
{
    gst_element_set_state (data->pipeline, GST_STATE_NULL);
    g_print ("Pipeline set to NULL\n");
    gst_object_unref (data->bus);
    gst_object_unref (data->pipeline);
    g_print ("Pipeline deleted\n");
}

gboolean add_bin_capture_to_pipe(gstData *data)
{
    if((gst_bin_add(GST_BIN (data->pipeline), data->bin_capture)) != TRUE)
    {
        g_print("bin_capture not added to pipeline\n");
        return FALSE;
    }
    
    if(gst_element_set_state (data->pipeline, GST_STATE_NULL) == GST_STATE_CHANGE_SUCCESS)
    {        
        return TRUE;
    }
    else
    {
        cout << "Failed to set pipeline state to NULL." << endl;
        return FALSE;        
    }
}

gboolean remove_bin_capture_from_pipe(gstData *data)
{
    gst_element_set_state (data->pipeline, GST_STATE_NULL);
    gst_element_set_state (data->bin_capture, GST_STATE_NULL);
    if((gst_bin_remove(GST_BIN (data->pipeline), data->bin_capture)) != TRUE)
    {
        g_print("bin_capture not removed from pipeline\n");
    }    
    return TRUE;
}

gboolean start_capture_pipe(gstData *data)
{
    if(gst_element_set_state (data->pipeline, GST_STATE_PLAYING) == GST_STATE_CHANGE_SUCCESS)
        return TRUE;
    else
    {
        cout << "Failed to set pipeline state to PLAYING." << endl;
        return FALSE;
    }
}

gboolean stop_capture_pipe(gstData *data)
{
    gst_element_set_state (data->bin_capture, GST_STATE_NULL);
    gst_element_set_state (data->pipeline, GST_STATE_NULL);
    return TRUE;
}

gboolean deinit_video_live(gstData *data)
{
    gst_element_set_state (data->pipeline, GST_STATE_NULL);
    gst_element_set_state (data->bin_capture, GST_STATE_NULL);
    gst_object_unref (data->bin_capture);
    return TRUE;
}

gboolean check_bus_cb(gstData *data)
{
    GError *err = NULL;                
    gchar *dbg = NULL;   
          
    g_print("Got message: %s\n", GST_MESSAGE_TYPE_NAME(data->msg));
    switch(GST_MESSAGE_TYPE (data->msg))
    {
        case GST_MESSAGE_EOS:       
            g_print ("END OF STREAM... \n");
            break;

        case GST_MESSAGE_ERROR:
            gst_message_parse_error (data->msg, &err, &dbg);
            if (err)
            {
                g_printerr ("ERROR: %s\n", err->message);
                g_error_free (err);
            }
            if (dbg)
            {
                g_printerr ("[Debug details: %s]\n", dbg);
                g_free (dbg);
            }
            break;

        default:
            g_printerr ("Unexpected message of type %d", GST_MESSAGE_TYPE (data->msg));
            break;
    }
    return TRUE;
}

void get_pipeline_bus(gstData *data)
{
    data->bus = gst_element_get_bus (data->pipeline);
    data->msg = gst_bus_poll (data->bus, GST_MESSAGE_EOS | GST_MESSAGE_ERROR, -1);
    if (data->msg)
    {
        check_bus_cb(data);
        gst_message_unref (data->msg);
    }
}

int main(int argc, char *argv[])
{        
    //Mat frame;
    VideoCapture capture;    
    gstData gstreamerData;
    GstBuffer *gstImageBuffer;
    
    //XInitThreads();
    gst_init (&argc, &argv);
    create_pipeline(&gstreamerData);
    if(init_video_capture(&gstreamerData))
    {        
        add_bin_capture_to_pipe(&gstreamerData);    
        start_capture_pipe(&gstreamerData);
        //get_pipeline_bus(&gstreamerData);    
    
        cout << "Starting while loop..." << endl;
        cvNamedWindow("Toradex Face Detection Demo with Gstreamer", 0);    
    
        while(true)
        {    
            //pthread_mutex_lock(&threadMutex);
            //pthread_cond_wait(&waitForGstBuffer, &threadMutex);
            
            gstImageBuffer = gst_app_sink_pull_buffer((GstAppSink*)gstreamerData.appsink);
        
            if (gstImageBuffer != NULL)
            {        
                frame = cvCreateImageHeader(cvSize(720, 576), IPL_DEPTH_8U, 1);

                if (frame == NULL)
                {
                    g_printerr("IplImageFrame is null.\n");
                }
                else
                {
                    // Wrap the buffer's luma plane instead of allocating pixel memory,
                    // and release the header (not the data) each iteration so images
                    // are not leaked every frame
                    cvSetData(frame, (char*)GST_BUFFER_DATA(gstImageBuffer), 720);
                    cvShowImage("Toradex Face Detection Demo with Gstreamer", frame);
                    cvWaitKey(1);
                    cvReleaseImageHeader(&frame);
                    gst_buffer_unref(gstImageBuffer);
                }
            }
            else
            {
                cout << "Appsink buffer didn't return buffer." << endl;
            }
            /*
            if (frame)
            {
                cvShowImage("Toradex Face Detection Demo with Gstreamer", frame);
            }
            gst_buffer_unref(buffer);
            buffer = NULL;            
            pthread_mutex_unlock(&threadMutex);    
            cvWaitKey(1);*/                                    
        }
    }
    else
    {
        exit(1);
    }
              
    //Destroy the window
    cvDestroyWindow("Toradex Face Detection Demo with Gstreamer");
    remove_bin_capture_from_pipe(&gstreamerData);
    deinit_video_live(&gstreamerData);
    delete_pipeline(&gstreamerData);

    return 0;
}

Implementing mmap for transferring data from user space to kernel space

I was recently working on an application where streams from four multiplexed analog video channels had to be displayed in four windows. I was trying to do this using OpenCV, while the analog video decoder/multiplexer in question was the ADV7180. For switching the channels, I used an ioctl() call in a while loop with a certain time interval, capturing the frames and putting them in separate queues as per the channel selected. This was done in the main thread, while separate threads pulled the frames from the queues and rendered them. The capturing and rendering were done with OpenCV. I was not able to achieve a decent frame rate with this, and had to put delays in certain places to avoid frame glitches in the multiple windows displaying the frames.

Thinking that maybe the ioctl() calls and the associated context switches were the reason I had to use delays and things were going slow, I decided to look into faster ways of transferring data between user and kernel space. An mmap() implementation in a driver, i.e. a memory mapping shared between user and kernel space, is one of the fastest ways to transfer data: once mapped, accesses incur neither a context switch nor a buffer copy. Below is a sample showing how an mmap() implementation for a driver is done.

Below is the driver code.


#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/fs.h>
#include <linux/debugfs.h>
#include <linux/slab.h>
#include <linux/mm.h>  

#ifndef VM_RESERVED
# define  VM_RESERVED   (VM_DONTEXPAND | VM_DONTDUMP)
#endif

struct dentry  *file;

struct mmap_info
{
    char *data;            
    int reference;      
};

void mmap_open(struct vm_area_struct *vma)
{
    struct mmap_info *info = (struct mmap_info *)vma->vm_private_data;
    info->reference++;
}

void mmap_close(struct vm_area_struct *vma)
{
    struct mmap_info *info = (struct mmap_info *)vma->vm_private_data;
    info->reference--;
}

static int mmap_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
    struct page *page;
    struct mmap_info *info;    
    
    info = (struct mmap_info *)vma->vm_private_data;
    if (!info->data)
    {
        printk("No data\n");
        return VM_FAULT_SIGBUS;    /* no backing page to hand to the faulting process */
    }
    
    page = virt_to_page(info->data);    
    
    get_page(page);
    vmf->page = page;            
    
    return 0;
}

struct vm_operations_struct mmap_vm_ops =
{
    .open =     mmap_open,
    .close =    mmap_close,
    .fault =    mmap_fault,    
};

int op_mmap(struct file *filp, struct vm_area_struct *vma)
{
    vma->vm_ops = &mmap_vm_ops;
    vma->vm_flags |= VM_RESERVED;    
    vma->vm_private_data = filp->private_data;
    mmap_open(vma);
    return 0;
}

int mmapfop_close(struct inode *inode, struct file *filp)
{
    struct mmap_info *info = filp->private_data;
    
    free_page((unsigned long)info->data);
    kfree(info);
    filp->private_data = NULL;
    return 0;
}

int mmapfop_open(struct inode *inode, struct file *filp)
{
    struct mmap_info *info = kmalloc(sizeof(struct mmap_info), GFP_KERNEL);
    if (!info)
        return -ENOMEM;
    info->data = (char *)get_zeroed_page(GFP_KERNEL);
    if (!info->data) {
        kfree(info);
        return -ENOMEM;
    }
    memcpy(info->data, "hello from kernel this is file: ", 32);
    memcpy(info->data + 32, filp->f_dentry->d_name.name, strlen(filp->f_dentry->d_name.name));
    /* assign this info struct to the file */
    filp->private_data = info;
    return 0;
}

static const struct file_operations mmap_fops = {
    .open = mmapfop_open,
    .release = mmapfop_close,
    .mmap = op_mmap,
};

static int __init mmapexample_module_init(void)
{
    file = debugfs_create_file("mmap_example", 0644, NULL, NULL, &mmap_fops);
    return 0;
}

static void __exit mmapexample_module_exit(void)
{
    debugfs_remove(file);
}

module_init(mmapexample_module_init);
module_exit(mmapexample_module_exit);
MODULE_LICENSE("GPL");

 

Below is a user space application showing how to use it.


#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define PAGE_SIZE     4096

int main ( int argc, char **argv )
{
    int configfd;
    char * address = NULL;

    configfd = open("/sys/kernel/debug/mmap_example", O_RDWR);
    if(configfd < 0)
    {
        perror("Open call failed");
        return -1;
    }
    
    address = mmap(NULL, PAGE_SIZE, PROT_READ|PROT_WRITE, MAP_SHARED, configfd, 0);
    if (address == MAP_FAILED)
    {
        perror("mmap operation failed");
        return -1;
    }

    printf("Initial message: %s\n", address);
    memcpy(address + 11 , "*user*", 6);
    printf("Changed message: %s\n", address);
    munmap(address, PAGE_SIZE);
    close(configfd);
    return 0;
}

The above code should run fine on both desktop and embedded Linux. To learn more, refer to Chapter 15 of the Linux Device Drivers book.

Ultimately though, I didn't use mmap() but decided to do the channel switching work in the driver itself using kernel timers and workqueues. Not that this improved the frame rate much either, maybe by a frame per second. Perhaps someone will give me feedback on this task some day. Till then, I hope you find this post useful.

Adding sysfs support to a driver

A month or so back I gave an example of how to add support for a device using the platform bus framework, which mainly showed how the platform framework works and what it is used for. In that driver, I provided access to the required values through ioctl() calls. ioctl() is not the recommended way of doing this; these days drivers expose data and control to user space through the sysfs interface. As the Linux Kernel Development book puts it, "The sysfs file system is currently the place for implementing functionality previously reserved for ioctl() calls on device nodes or the procfs filesystem". This post shows how to achieve what we attempted in the https://coherentmusings.wordpress.com/2013/12/13/how-to-write-a-platform-devicedriver-adc-driver-using-wm97xx-codec/ article in a much easier way using the sysfs interface. Please read that article before reading further.

To learn about the sysfs interface, refer to The Linux Device Model chapter of the Linux Device Drivers book or the Devices and Modules chapter of Linux Kernel Development by Robert Love.

Some other documentation can be found at the below links:

1. http://lxr.free-electrons.com/source/Documentation/driver-model/device.txt?v=3.12;a=arm

2. http://lxr.free-electrons.com/source/Documentation/filesystems/sysfs.txt

The core driver file to which we will make the changes is on the below link:

http://lxr.free-electrons.com/source/drivers/input/touchscreen/wm97xx-core.c

The below change was made to the probe function of the core driver file


if ( device_create_file(wm->dev, &dev_attr_adc_channel1) != 0 )
{
    printk(KERN_ALERT "Sysfs Attribute Creation failed for ADC Channel1\n");
}

if ( device_create_file(wm->dev, &dev_attr_adc_channel2) != 0 )
{
    printk(KERN_ALERT "Sysfs Attribute Creation failed for ADC Channel2\n");
}

if ( device_create_file(wm->dev, &dev_attr_adc_channel3) != 0 )
{
    printk(KERN_ALERT "Sysfs Attribute Creation failed for ADC Channel3\n");
}

if ( device_create_file(wm->dev, &dev_attr_adc_channel4) != 0 )
{
    printk(KERN_ALERT "Sysfs Attribute Creation failed for ADC Channel4\n");
}

The following was added to the core driver file to access the ADC data via sysfs attributes.


static ssize_t adc_channel1_show(struct device *child, struct device_attribute *attr, char *buf)
{
    struct wm97xx *wm = dev_get_drvdata(child);

    return sprintf(buf, "%d\n", wm97xx_read_aux_adc(wm, WM97XX_AUX_ID1));
}

static ssize_t adc_channel2_show(struct device *child, struct device_attribute *attr, char *buf)
{
    struct wm97xx *wm = dev_get_drvdata(child);

    return sprintf(buf, "%d\n", wm97xx_read_aux_adc(wm, WM97XX_AUX_ID2));
}

static ssize_t adc_channel3_show(struct device *child, struct device_attribute *attr, char *buf)
{
    struct wm97xx *wm = dev_get_drvdata(child);

    return sprintf(buf, "%d\n", wm97xx_read_aux_adc(wm, WM97XX_AUX_ID3));
}

static ssize_t adc_channel4_show(struct device *child, struct device_attribute *attr, char *buf)
{
    struct wm97xx *wm = dev_get_drvdata(child);

    return sprintf(buf, "%d\n", wm97xx_read_aux_adc(wm, WM97XX_AUX_ID4));
}

/* read-only attributes: there is no store function, so the mode must not grant write permission */
static DEVICE_ATTR(adc_channel1, 0444, adc_channel1_show, NULL);
static DEVICE_ATTR(adc_channel2, 0444, adc_channel2_show, NULL);
static DEVICE_ATTR(adc_channel3, 0444, adc_channel3_show, NULL);
static DEVICE_ATTR(adc_channel4, 0444, adc_channel4_show, NULL);

For cleanup, the attribute removal was done in the remove function as below.


device_remove_file(wm->dev, &dev_attr_adc_channel1);
device_remove_file(wm->dev, &dev_attr_adc_channel2);
device_remove_file(wm->dev, &dev_attr_adc_channel3);
device_remove_file(wm->dev, &dev_attr_adc_channel4);

After these changes, the device attributes we added show up in the sysfs tree and can be used to read the ADC values, as shown in the picture below.

If you compare this to what we tried previously, you can see how easy it is to get the ADC values: a simple command like cat is enough. Of course, you can also read the value from C code. Things will not always be this simple; here we already had a core driver, so we only had to add the necessary changes to get what we wanted.

Check out the file below, which shows how sysfs support was added for PWM. This support landed in the 3.11 kernel; see http://lwn.net/Articles/553755/.

http://lxr.free-electrons.com/source/drivers/pwm/sysfs.c?v=3.12;a=arm

In general, sysfs support is provided through the framework being used. For example, for devices like ADCs the Industrial IO (IIO) framework is used, and the sysfs support comes through that framework.

Hopefully my earlier post and this one will give you an idea on how to add and use the sysfs interface.

Something on Linux Graphics

Some good sites for learning OpenGL; the first one is particularly good with its explanations.

1. http://open.gl/

2. http://www.opengl-tutorial.org/

3. http://nopper.tv/norbert/opengl.html

The sites below give a good explanation for anyone looking to understand the Linux graphics stack.

1. http://blog.mecheye.net/2012/06/the-linux-graphics-stack/

2. http://magcius.github.io/xplain/article/

If you keep up with what is going on in the Linux world and have been wondering what all the hullabaloo over X and Wayland is about, read the article below on Wayland after reading the above two.

http://wayland.freedesktop.org/architecture.html

How to write a Platform Device/Driver – PWM Driver for Tegra2

I will get straight to the point in this tutorial and give the code here. The explanation given in the earlier post for the ADC driver should suffice.

Do note that this is only an example, and what you need to do will depend on what is and is not already available. The code is not exactly up to the mark as per coding standards, but I was too excited while working on this and rolling it out. I hope it clarifies how to use a platform device/driver. Also, in case it is not clear to those starting out: the core driver and header files are being changed, so a kernel recompilation and an update of the uImage on the module are required for our driver to work.

1. Header File: http://git.toradex.com/cgit/linux-toradex.git/tree/include/linux/pwm.h?h=tegra

2. Core Driver File: http://git.toradex.com/cgit/linux-toradex.git/tree/arch/arm/mach-tegra/pwm.c?h=tegra

3. Board File: http://git.toradex.com/cgit/linux-toradex.git/tree/arch/arm/mach-tegra/board-colibri_t20.c?h=tegra

Below is the change I made to the core driver file in the tegra_pwm_probe function.


platform_set_drvdata(pdev, pwm);

/*-------------------Register our PWM device---------------------*/
// Added by Sanchayan
pwm->colibripwm_dev = platform_device_alloc("colibri_pwm", -1);
if (!pwm->colibripwm_dev){
    printk("PWM Device creation failed\n");
}
platform_set_drvdata(pwm->colibripwm_dev, pwm);
ret = platform_device_add(pwm->colibripwm_dev);
if (ret < 0) {
    printk("PWM Device addition failed\n");
    platform_device_put(pwm->colibripwm_dev);
}
/*---------------------------------------------------------------*/

mutex_lock(&pwm_lock);

In the tegra_pwm_remove function, I added the below line just before the return statement.


platform_device_unregister(pwm->colibripwm_dev);

In the board file I commented out lines 737 to 830, which exported the PWMs to the LED driver framework. The LED framework only allows controlling the PWM duty cycle, with the period fixed at 19600, as you can see in those lines. Hence this driver, which allows the period (i.e. frequency) to be controlled as well. Line 1479 was also commented out to prevent these PWMs from being registered with the LED framework.

The driver code is as follows:


#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/platform_device.h>
#include <linux/fs.h>
#include <linux/errno.h>
#include <asm/uaccess.h>
#include <linux/kdev_t.h>
#include <linux/device.h>
#include <linux/cdev.h>
#include <linux/slab.h>
#include <linux/ioctl.h>
#include <linux/pwm.h>

#define ENABLE_PWM            _IOW('q', 1, pwmData *)
#define DISABLE_PWM            _IOW('q', 2, pwmData *)
#define PWM_CHANNELS        4

typedef struct
{
    unsigned int pwmChannel;
    unsigned int pwmDutyCycle;
    unsigned int pwmPeriod;
}pwmData;

struct pwm_device {
    struct list_head    node;
    struct platform_device    *pdev;
    struct platform_device *colibripwm_dev;
    const char        *label;
    struct clk        *clk;

    int            clk_enb;
    void __iomem        *mmio_base;

    unsigned int        in_use;
    unsigned int        id;
};

struct pwm_driver
{
    unsigned int period[PWM_CHANNELS];
    unsigned int pwmId[PWM_CHANNELS];
    struct pwm_device *pwmdev[PWM_CHANNELS];
};

static dev_t first;         // Global variable for the first device number
static struct cdev c_dev;   // Global variable for the character device structure
static struct class *cl;    // Global variable for the device class
static int init_result;
static struct pwm_driver *pwmDriver;

static ssize_t pwm_read(struct file* F, char *buf, size_t count, loff_t *f_pos)
{
    return -EPERM;
}

static ssize_t pwm_write(struct file* F, const char *buf, size_t count, loff_t *f_pos)
{
    return -EPERM;
}

static int pwm_open(struct inode *inode, struct file *file)
{
    return 0;
}

static int pwm_close(struct inode *inode, struct file *file)
{
    return 0;
}

static long pwm_device_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
{
    int retval;
    pwmData data;

    switch (cmd)
    {
        case ENABLE_PWM:
        if (copy_from_user(&data, (pwmData*)arg, sizeof(pwmData)))
        {
            return -EFAULT;
        }
        /* validate the channel before using it as an array index */
        if (data.pwmChannel < 1 || data.pwmChannel > PWM_CHANNELS)
        {
            return -EINVAL;
        }
        retval = pwm_config(pwmDriver->pwmdev[data.pwmChannel - 1], data.pwmDutyCycle, data.pwmPeriod);
        if (retval == 0)
        {
            retval = pwm_enable(pwmDriver->pwmdev[data.pwmChannel - 1]);
            return retval;
        }
        else
        {
            return retval;
        }
        break;

        case DISABLE_PWM:
        if (copy_from_user(&data, (pwmData*)arg, sizeof(pwmData)))
        {
            return -EFAULT;
        }
        /* validate the channel before using it as an array index */
        if (data.pwmChannel < 1 || data.pwmChannel > PWM_CHANNELS)
        {
            return -EINVAL;
        }
        retval = pwm_config(pwmDriver->pwmdev[data.pwmChannel - 1], 0, data.pwmPeriod);
        if (retval == 0)
        {
            pwm_disable(pwmDriver->pwmdev[data.pwmChannel - 1]);
        }
        else
        {
            return retval;
        }
        break;

        default:
            break;
    }

     return 0;
}

static int pwm_device_probe(struct platform_device *pdev)
{
    int i, ret = 0;

    pwmDriver = kzalloc(sizeof(struct pwm_driver), GFP_KERNEL);

    if (!pwmDriver)
    {
        printk(KERN_ALERT "Memory allocation for pwm_driver structure failed\n");
        return -ENOMEM;
    }

    for (i = 0; i < PWM_CHANNELS; i++)    /* request all four channels, 0 through 3 */
    {
        switch (i)
        {
            case 0:
                pwmDriver->pwmdev[0] = pwm_request(0, "PWM_1");
            break;

            case 1:
                pwmDriver->pwmdev[1] = pwm_request(1, "PWM_2");
            break;

            case 2:
                pwmDriver->pwmdev[2] = pwm_request(2, "PWM_3");
            break;

            case 3:
                pwmDriver->pwmdev[3] = pwm_request(3, "PWM_4");
            break;

            default:
                break;
        }
        if (IS_ERR(pwmDriver->pwmdev[i])) {
            ret = PTR_ERR(pwmDriver->pwmdev[i]);
            dev_err(&pdev->dev, "unable to request PWM %d\n", i);
            goto err;
        }
    }

    platform_set_drvdata(pdev, pwmDriver);

    return 0;

err:
    /* free any channels that were successfully requested before the failure */
    while (--i >= 0)
        pwm_free(pwmDriver->pwmdev[i]);

    kfree(pwmDriver);

    return ret;
}

static int pwm_device_remove(struct platform_device *pdev)
{
    int i;
    for (i = 0; i < PWM_CHANNELS; i++) {
        pwm_free(pwmDriver->pwmdev[i]);
    }
    printk("PWM Platform Device removed\n");
    return 0;
}

static struct platform_driver pwm_driver = {
    .probe = pwm_device_probe,
    .remove = pwm_device_remove,
    .driver = {
        .name = "colibri_pwm",
        .owner = THIS_MODULE,
    },
};

static struct file_operations FileOps =
{
    .owner                = THIS_MODULE,
    .open                 = pwm_open,
    .read                 = pwm_read,
    .write                = pwm_write,
    .release              = pwm_close,
    .unlocked_ioctl       = pwm_device_ioctl,
};

static int pwm_init(void)
{
    init_result = platform_driver_probe(&pwm_driver, &pwm_device_probe);

    if (init_result < 0)
    {
        printk(KERN_ALERT "PWM Platform Driver probe failed with :%d\n", init_result);
        return -1;
    }
    else
    {
        init_result = alloc_chrdev_region( &first, 0, 1, "pwm_drv" );
        if( 0 > init_result )
        {
            platform_driver_unregister(&pwm_driver);
            printk( KERN_ALERT "PWM Device Registration failed\n" );
            return -1;
        }
        if ( (cl = class_create( THIS_MODULE, "chardev" ) ) == NULL )
        {
            platform_driver_unregister(&pwm_driver);
            printk( KERN_ALERT "PWM Class creation failed\n" );
            unregister_chrdev_region( first, 1 );
            return -1;
        }

        if( device_create( cl, NULL, first, NULL, "pwm_drv" ) == NULL )
        {
            platform_driver_unregister(&pwm_driver);
            printk( KERN_ALERT "PWM Device creation failed\n" );
            class_destroy(cl);
            unregister_chrdev_region( first, 1 );
            return -1;
        }

        cdev_init( &c_dev, &FileOps );

        if( cdev_add( &c_dev, first, 1 ) == -1)
        {
            platform_driver_unregister(&pwm_driver);
            printk( KERN_ALERT "PWM Device addition failed\n" );
            device_destroy( cl, first );
            class_destroy( cl );
            unregister_chrdev_region( first, 1 );
            return -1;
        }
    }

    return 0;
}

static void pwm_exit(void)
{
    platform_driver_unregister(&pwm_driver);
    kfree(pwmDriver);
    cdev_del( &c_dev );
    device_destroy( cl, first );
    class_destroy( cl );
    unregister_chrdev_region( first, 1 );

    printk(KERN_ALERT "PWM Driver unregistered\n");
}

module_init(pwm_init);
module_exit(pwm_exit);

MODULE_AUTHOR("Sanchayan Maity");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Colibri T20 PWM Driver");

The Makefile for the driver:


CROSS_COMPILE ?= /home/sanchayan/Toradex/gcc-linaro/bin/arm-linux-gnueabihf-
ARCH          ?= arm
SOURCE_DIR    ?= /home/sanchayan/Toradex/T20V2.0/linux-toradex

AS          = $(CROSS_COMPILE)as
LD          = $(CROSS_COMPILE)ld
CC          = $(CROSS_COMPILE)gcc
CPP         = $(CC) -E
AR          = $(CROSS_COMPILE)ar
NM          = $(CROSS_COMPILE)nm
STRIP       = $(CROSS_COMPILE)strip
OBJCOPY     = $(CROSS_COMPILE)objcopy
OBJDUMP     = $(CROSS_COMPILE)objdump

obj-m += pwm_driver.o
ccflags-y += -I$(SOURCE_DIR)/arch/arm

all:
	make ARCH=$(ARCH) CROSS_COMPILE=$(CROSS_COMPILE) -C $(SOURCE_DIR) M=$(PWD) modules

clean:
	rm *.o *.ko *.symvers *.order

The user space application code:


#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>

#define ENABLE_PWM            _IOW('q', 1, pwmData *)
#define DISABLE_PWM            _IOW('q', 2, pwmData *)
#define PWM_CHANNELS        4

typedef struct
{
    unsigned int pwmChannel;
    unsigned int pwmDutyCycle;
    unsigned int pwmPeriod;
}pwmData;

int main(void)
{
    int fd;
    int choice;
    int pwm_channel;
    int pwm_period;
    int pwm_dutycycle;
    int retVal;
    int loop = 1;
    pwmData data;

    fd = open( "/dev/pwm_drv", O_RDWR );

    if( fd < 0 )
    {
        printf("Cannot open device \t");
        printf(" fd = %d \n",fd);
        return 0;
    }

    while (loop)
    {

        printf("1: Configure PWM 2: Disable PWM 3: Exit\n");
        printf("Enter choice: \t");
        scanf("%d", &choice);

        switch(choice)
        {
            case 1:
                printf("\nEnter PWM Channel: ");
                scanf("%d", &pwm_channel);
                printf("\nEnter PWM Duty Cycle: ");
                scanf("%d", &pwm_dutycycle);
                printf("\nEnter PWM Period: ");
                scanf("%d", &pwm_period);
                data.pwmChannel = pwm_channel;
                data.pwmDutyCycle = pwm_dutycycle;
                data.pwmPeriod = pwm_period;
                retVal = ioctl(fd, ENABLE_PWM, &data);
                if (retVal < 0)
                {
                    printf("Error: %d\n", retVal);
                }
                else
                {
                    printf("Return Value: %d\n", retVal);
                }
                break;

            case 2:
                printf("\nEnter PWM Channel: ");
                scanf("%d", &pwm_channel);
                printf("\nEnter PWM Duty Cycle: ");
                scanf("%d", &pwm_dutycycle);
                printf("\nEnter PWM Period: ");
                scanf("%d", &pwm_period);
                data.pwmChannel = pwm_channel;
                data.pwmDutyCycle = pwm_dutycycle;
                data.pwmPeriod = pwm_period;
                retVal = ioctl(fd, DISABLE_PWM, &data);
                if (retVal < 0)
                {
                    printf("Error: %d\n", retVal);
                }
                else
                {
                    printf("Return Value: %d\n", retVal);
                }
                break;

            case 3:
                loop = 0;
                break;

            default:
                break;
        }
    }

    if( 0 != close(fd) )
    {
        printf("Could not close device\n");
    }

    return 0;
}

How to write a Platform Device/Driver – ADC Driver using wm97xx codec

Somewhat more than two years back, while I was working at Godrej, a senior colleague from my development team gave us a lecture on how to write character drivers for Linux. I didn't understand a single thing, but since that time I have been trying to learn Linux kernel related stuff, with device drivers as the main area of focus. Of course, in between there have been lots of lulls where I hit a road block, went into depression mode, and then started again from where I left off with renewed zeal and vigor.

One of the road blocks to date has been the inability to understand or picture the driver framework. Recently, after quite a lot of effort, the platform device/driver framework (maybe framework isn't the correct technical term, but anyway) became clear to me.

Let me give you some background on what I was trying to write and achieve. My company Toradex manufactures and sells embedded computer on modules. One of them is the Colibri T20, which has the NVIDIA Tegra 2 processor on it. It carries the WM9715 codec from Wolfson Microelectronics, an audio codec with a touch panel controller. This codec has four auxiliary ADCs, and the device is connected on an AC97 bus. The four ADCs can be used as general purpose ADCs while also serving the audio and touchscreen functionality.

Now, my company provides WinCE and Linux for the modules, but Linux is hardly as well supported as WinCE. With the default Linux image, there is no support for using all four ADCs; only two can be used, exposed through the power driver framework. What exactly I mean by the power driver framework is difficult to explain without digressing from the main topic, and it does not affect the purpose of this tutorial. For the WM9715 there is already a driver that provides the touch screen functionality along with access to the auxiliary ADCs, but you need to expose that interface through some other driver or framework for it to be usable as a general purpose ADC.

I am giving three links below and will refer to them by name, instead of by URL, throughout the rest of the tutorial.

1. Header file: http://git.toradex.com/cgit/linux-toradex.git/tree/include/linux/wm97xx.h?h=tegra

2. Core Driver file: http://git.toradex.com/cgit/linux-toradex.git/tree/drivers/input/touchscreen/wm97xx-core.c?h=tegra

3. Board File: http://git.toradex.com/cgit/linux-toradex.git/tree/arch/arm/mach-tegra/board-colibri_t20.c?h=tegra

Of these, the first two files you can also find in the mainline Linux kernel.

Learn about the driver model here: http://lxr.free-electrons.com/source/Documentation/driver-model/

N.B. It's assumed that you have a decent knowledge of C, Linux, character drivers, cross compilation and Makefiles.

The first question most people will ask is: what is the use of the platform device/driver framework? On x86 PCs, it is possible for the OS to discover which devices are present, since buses like PCI and USB (and mechanisms like ACPI) let the OS query a device and find out what it does and what its capabilities are, for example whether it is a graphics card or a removable USB mass storage device.

Embedded systems have buses like I2C and SPI. Devices attached to these buses are not discoverable in the above sense. The OS has to be explicitly told that, for example, an RTC is connected on the I2C bus at address 0x68. This is where the platform device/driver framework comes into the picture.
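As an illustration of how the OS is "told", registering that hypothetical RTC takes only a few lines in a board file. This is a sketch, not code from the Colibri board file; "ds1307" is a common RTC driver name, and the bus number and address depend on the board:

/* board file fragment (kernel code, not a standalone program):
 * declare an RTC at address 0x68 on I2C bus 0 */
static struct i2c_board_info colibri_i2c_rtc __initdata = {
    I2C_BOARD_INFO("ds1307", 0x68),
};

/* ...and in the board init function: */
i2c_register_board_info(0, &colibri_i2c_rtc, 1);

When the matching I2C driver registers, the I2C core binds it to this declared device by name, which is the same binding-by-name idea used for platform devices below.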

A board file is created for each board, specifying the devices present on buses such as SPI and I2C. Have a look at the board file: you will find various platform device and platform data structures used to register devices and their associated data with the OS. The platform data is used later, while the OS boots, to learn specific details about a device. I will explain how a platform device and its driver are bound to each other in a while.

Have a look at the core driver file. The first two functions to look at are wm97xx_probe and wm97xx_remove; probe and remove are the two operations common to any platform device/driver. You can see there is a device_driver structure at the bottom of the file. Whenever a module is loaded, the first function to be called is its _init() function. Here, the driver is registered with a call to driver_register(), passing it the device_driver structure. Once a matching device is registered, the probe function is called. It does all the necessary set up: in this case allocating memory, setting up handlers, and allocating the devices that will use this core driver and registering them with the input subsystem, which provides the touch screen functionality. This pattern basically stays the same but can vary slightly; for example, a true ADC driver would register itself with the Industrial IO (IIO) framework.

Now comes the main part. To use the ADCs through a platform driver, I also need a platform device. So I added a few function calls inside the probe() function to allocate a platform device. How this device is used, we will come to in a moment.

wm->colibriadc_dev = platform_device_alloc("colibri_adc", -1);
if (!wm->colibriadc_dev) {
    ret = -ENOMEM;
    goto adc_err;
}
platform_set_drvdata(wm->colibriadc_dev, wm);
wm->colibriadc_dev->dev.parent = dev;
ret = platform_device_add(wm->colibriadc_dev);
if (ret < 0)
    goto adc_reg_err;

For error handling,

adc_reg_err:
    /* platform_device_add failed: drop the reference taken by platform_device_alloc */
    platform_device_put(wm->colibriadc_dev);
adc_err:
    ;    /* allocation itself failed: nothing to release here */

Now, when the OS boots and sets up this core driver, it will also allocate and register the platform device I created, which uses this core driver.

I added a platform_device pointer to the struct wm97xx which is in the header file.

struct platform_device *colibriadc_dev;

This pointer holds the return value of platform_device_alloc(). With the above changes in place, we are ready for the platform driver, which we will use in conjunction with a character driver to access the ADCs.

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/platform_device.h>
#include <linux/gpio.h>
#include <linux/fs.h>
#include <linux/errno.h>
#include <asm/uaccess.h>
#include <linux/wm97xx.h>
#include <linux/kdev_t.h>
#include <linux/device.h>
#include <linux/cdev.h>
#include <linux/slab.h>
#include <linux/ioctl.h>

typedef struct
{
    unsigned int channelNumber;
    unsigned int adcValue;
}adcData;

#define SET_ADC_CHANNEL        _IOW('q', 1, adcData *)
#define GET_ADC_DATA           _IOR('q', 2, adcData *)

static dev_t first;         // Global variable for the first device number
static struct cdev c_dev;   // Global variable for the character device structure
static struct class *cl;    // Global variable for the device class
static struct wm97xx *wm;
static int init_result;
static int adcChannel;
static int adcValue;

static ssize_t adc_read(struct file* F, char *buf, size_t count, loff_t *f_pos)
{
    return -EPERM;
}

static ssize_t adc_write(struct file* F, const char *buf, size_t count, loff_t *f_pos)
{
    return -EPERM;
}

static int adc_open(struct inode *inode, struct file *file)
{
    return 0;
}

static int adc_close(struct inode *inode, struct file *file)
{
    return 0;
}

static long adc_device_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
{
    adcData adc;

    switch (cmd)
    {
        case SET_ADC_CHANNEL:
            if (copy_from_user(&adc, (adcData*)arg, sizeof(adcData)))
            {
                return -EFAULT;
            }
            adcChannel = adc.channelNumber;
            break;

        case GET_ADC_DATA:
            switch (adcChannel)
            {
                case 1:
                    adcValue = wm97xx_read_aux_adc(wm, WM97XX_AUX_ID1);
                    break;

                case 2:
                    adcValue = wm97xx_read_aux_adc(wm, WM97XX_AUX_ID2);
                    break;

                case 3:
                    adcValue = wm97xx_read_aux_adc(wm, WM97XX_AUX_ID3);
                    break;

                case 4:
                    adcValue = wm97xx_read_aux_adc(wm, WM97XX_AUX_ID4);
                    break;

                default:
                    return -EINVAL;
            }

            adc.channelNumber = adcChannel;
            adc.adcValue = adcValue;
            if (copy_to_user((adcData*)arg, &adc, sizeof(adcData)))
            {
                return -EFAULT;
            }
            printk(KERN_ALERT "AUX ADC%d reading: %d\n", adcChannel, adcValue);
            break;

        default:
            break;
    }

    return 0;
}

static int sample_wm97xx_probe(struct platform_device *pdev)
{
    wm = platform_get_drvdata(pdev);

    if (wm == NULL)
    {
        printk(KERN_ALERT "Platform get drvdata returned NULL\n");
        return -ENODEV;
    }

    return 0;
}

static int sample_wm97xx_remove(struct platform_device *pdev)
{
    /* http://opensource.wolfsonmicro.com/content/using-auxadc-wm97xx-touchscreen-drivers */

    return 0;
}

static struct platform_driver sample_wm97xx_driver = {
    .probe  = sample_wm97xx_probe,
    .remove = sample_wm97xx_remove,
    .driver = {
        .name = "colibri_adc",
        .owner = THIS_MODULE,
    },
};

static struct file_operations FileOps =
{
    .owner                = THIS_MODULE,
    .open                 = adc_open,
    .read                 = adc_read,
    .write                = adc_write,
    .release              = adc_close,
    .unlocked_ioctl        = adc_device_ioctl,
};

static int sample_wm97xx_init(void)
{
    init_result = platform_driver_probe(&sample_wm97xx_driver, &sample_wm97xx_probe);

    if (init_result < 0)
    {
        printk(KERN_ALERT "ADC Platform Driver probe failed with :%d\n", init_result);
        return -1;
    }
    else
    {
        init_result = alloc_chrdev_region( &first, 0, 1, "adc_drv" );
        if( 0 > init_result )
        {
            platform_driver_unregister(&sample_wm97xx_driver);
            printk( KERN_ALERT "ADC Device Registration failed\n" );
            return -1;
        }

        if ( (cl = class_create( THIS_MODULE, "chardev" ) ) == NULL )
        {
            platform_driver_unregister(&sample_wm97xx_driver);
            printk( KERN_ALERT "ADC Class creation failed\n" );
            unregister_chrdev_region( first, 1 );
            return -1;
        }

        if( device_create( cl, NULL, first, NULL, "adc_drv" ) == NULL )
        {
            platform_driver_unregister(&sample_wm97xx_driver);
            printk( KERN_ALERT "ADC Device creation failed\n" );
            class_destroy(cl);
            unregister_chrdev_region( first, 1 );
            return -1;
        }

        cdev_init( &c_dev, &FileOps );

        if( cdev_add( &c_dev, first, 1 ) == -1)
        {
            platform_driver_unregister(&sample_wm97xx_driver);
            printk( KERN_ALERT "ADC Device addition failed\n" );
            device_destroy( cl, first );
            class_destroy( cl );
            unregister_chrdev_region( first, 1 );
            return -1;
        }
    }

    return 0;
}

static void sample_wm97xx_exit(void)
{
    platform_driver_unregister(&sample_wm97xx_driver);
    cdev_del( &c_dev );
    device_destroy( cl, first );
    class_destroy( cl );
    unregister_chrdev_region( first, 1 );

    printk(KERN_ALERT "ADC Driver unregistered\n");
}

module_init(sample_wm97xx_init);
module_exit(sample_wm97xx_exit);

MODULE_AUTHOR("Sanchayan Maity");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Colibri T20 ADC Driver");

 

CROSS_COMPILE ?= /home/sanchayan/Toradex/gcc-linaro/bin/arm-linux-gnueabihf-
ARCH          ?= arm
SOURCE_DIR    ?= /home/sanchayan/Toradex/T20V2.0/linux-toradex

AS          = $(CROSS_COMPILE)as
LD          = $(CROSS_COMPILE)ld
CC          = $(CROSS_COMPILE)gcc
CPP         = $(CC) -E
AR          = $(CROSS_COMPILE)ar
NM          = $(CROSS_COMPILE)nm
STRIP       = $(CROSS_COMPILE)strip
OBJCOPY     = $(CROSS_COMPILE)objcopy
OBJDUMP     = $(CROSS_COMPILE)objdump

obj-m     += adc_test.o
ccflags-y += -I$(SOURCE_DIR)/arch/arm

all:
    make ARCH=$(ARCH) CROSS_COMPILE=$(CROSS_COMPILE) -C $(SOURCE_DIR) M=$(PWD) modules

clean:
    rm -f *.o *.ko *.mod.c *.symvers *.order

The driver file and Makefile source are shown above. On loading the module, _init() is called, which in turn calls platform_driver_probe(), and that results in our probe() function being invoked. If you look carefully, you can see that the driver has the same name, "colibri_adc", that I passed for device allocation inside the probe() call of the core driver. This is how a platform device binds to a platform driver: they must have the same name, and if you register a platform driver for which no matching platform device was allocated, you will get a "no device exists" error.

One of the important things to note is the call to platform_set_drvdata() in the probe call of the core driver. I have not traced every function, but, from what I understand, this establishes a link between the device allocated by the core driver and our driver. It later gives us access to the device pointer, which we need to pass to the functions in the core driver in order to use them.

You can see that the functions in the core driver take a pointer to struct wm97xx. Where are you supposed to get this pointer? You get it in the probe call of the driver we wrote, by calling platform_get_drvdata(); in the driver code it is assigned to a static global pointer. The reason for using a static global is that I want the pointer to be accessible in the ioctl() calls, which form the final interface for the user and which ultimately invoke the core driver functions by passing the required wm97xx pointer. On module unloading, _exit() is called, which in turn calls platform_driver_unregister(), resulting in a call to the driver's remove() function. There is not really anything to do in remove() here, as no memory deallocation or clean-up work is required for the platform part of the driver. Have a good look at the probe and remove functions of the core driver to get an idea of what is done in a real-world driver.

So, now we finally have access to the auxiliary ADCs! Hurray! And we also have our first (at least my first) platform device/driver. The user space application is below and is pretty simple.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>      /* close() */
#include <sys/ioctl.h>   /* ioctl() */
#include <linux/ioctl.h> /* _IOR/_IOW macros */

typedef struct
{
    unsigned int channelNumber;
    unsigned int adcValue;
}adcData;

#define SET_ADC_CHANNEL        _IOW('q', 1, adcData *)
#define GET_ADC_DATA         _IOR('q', 2, adcData *)

int main(void)
{
    int fd;
    int choice;
    int adc_channel;
    int retVal;
    int loop = 1;
    adcData adc;

    fd = open( "/dev/adc_drv", O_RDWR );

    if( fd < 0 )
    {
        printf("Cannot open device \t");
        printf(" fd = %d \n", fd);
        return 0;
    }

    while (loop)
    {
        printf("1: Read 2: Exit\n");
        printf("Enter choice:\t");
        scanf("%d", &choice);

        switch (choice)
        {
            case 1:
                printf("Enter ADC Channel Number:\t");
                scanf("%d", &adc_channel);
                adc.channelNumber = adc_channel;
                adc.adcValue = 0;
                retVal = ioctl(fd, SET_ADC_CHANNEL, &adc);
                if (retVal < 0)
                {
                    printf("Error: %d\n", retVal);
                }
                else
                {
                    retVal = ioctl(fd, GET_ADC_DATA, &adc);
                    if (retVal < 0)
                    {
                        printf("Error: %d\n", retVal);
                    }
                    else
                    {
                        printf("ADC Channel: %d Value: %d\n", adc.channelNumber, adc.adcValue);
                    }
                }
                break;

            case 2:
                loop = 0;
                break;

            default:
                break;
        }
    }

    if( 0 != close(fd) )
    {
        printf("Could not close device\n");
    }

    return 0;
}

Do note that this is only an example; what you need to do will depend on what is available and what is not. Also, the code is not exactly up to the mark as per coding standards, but I was too excited while working on and rolling this out. I hope this clears up the idea of how to use a platform device/driver. Also, just in case it is not clear to people who are starting out: the core driver and header files are being changed, which requires a kernel recompilation and updating the uImage on the module for our driver to work.

I will be putting up another example, a PWM driver, which will be slightly more involved and should serve as a further example.

Face detection with OpenCV


#include <opencv/cv.h>
#include <opencv/cxcore.h>
#include <opencv/highgui.h>
#include <stdio.h>

int main()
{
    IplImage* frame = NULL;
    const char* cascade_path = "C:\\OpenCV\\opencv\\data\\haarcascades\\haarcascade_frontalface_alt2.xml";
    CvHaarClassifierCascade* hc = NULL;
    CvMemStorage* storage = cvCreateMemStorage(0);
    CvSize minSize = cvSize(100, 100);
    CvSize maxSize = cvSize(640, 480);
    CvSeq* faces = NULL;
    CvCapture* input_camera = NULL;

    int key = 0;
    int loopCounter = 0;

    hc = (CvHaarClassifierCascade*)cvLoad(cascade_path, NULL, NULL, NULL);
    if (hc == NULL)
    {
        printf("\nLoading of classifier failed\n");
        return -1;
    }

    input_camera = cvCaptureFromCAM(-1);
    if (input_camera == NULL)
    {
        printf("\nCould not open camera\n");
        return -1;
    }

    // Grabs and returns a frame from the camera
    frame = cvQueryFrame(input_camera);
    if (frame == NULL)
    {
        printf("\nCould not capture frame\n");
        return -1;
    }

    cvNamedWindow("Capturing Image ...", 0);

    cvResizeWindow("Capturing Image ...",
        (int) cvGetCaptureProperty(input_camera, CV_CAP_PROP_FRAME_HEIGHT),
        (int) cvGetCaptureProperty(input_camera, CV_CAP_PROP_FRAME_WIDTH));

    while (frame != NULL)
    {
        faces = cvHaarDetectObjects(frame, hc, storage, 1.2, 3, CV_HAAR_DO_CANNY_PRUNING, minSize, maxSize);

        for (loopCounter = 0; loopCounter < (faces ? faces->total : 0); loopCounter++)
        {
            CvRect *r = (CvRect*)cvGetSeqElem(faces, loopCounter);
            CvPoint pt1 = { r->x, r->y };
            CvPoint pt2 = { r->x + r->width, r->y + r->height };
            cvRectangle(frame, pt1, pt2, CV_RGB(0, 255, 0), 3, 4, 0);
        }

        // Shows a frame
        cvShowImage("Capturing Image ...", frame);

        // Checks if ESC is pressed and gives a delay
        // so that the frame can be displayed properly
        key = cvWaitKey(1);
        if (key == 27)        // ESC key
        {
            break;
        }

        // Grabs and returns the next frame
        frame = cvQueryFrame(input_camera);
    }

    // cvReleaseImage(&frame);  // crashes if enabled: the frame is owned
    //                          // by the capture structure, not by us
    cvReleaseCapture(&input_camera);
    cvReleaseMemStorage(&storage);
    cvReleaseHaarClassifierCascade(&hc);

    // Destroy the window
    cvDestroyWindow("Capturing Image ...");

    return 0;
}

I have tested the above code on an NVIDIA Tegra 2 based Toradex Colibri T20 module and also on my laptop. It works, but the program crashes if I enable the cvReleaseImage() line. The reason is that cvQueryFrame() returns a pointer to an internal buffer owned by the capture structure, so the caller must not release it; cvReleaseCapture() frees it. If you need to keep a frame beyond the next cvQueryFrame() call, make your own copy with cvCloneImage() and release the copy instead.

I have not explained the OpenCV installation procedure for Windows, as it is already documented on the OpenCV website. For use with a Beagleboard, you can refer to a previous article of mine: https://coherentmusings.wordpress.com/2012/06/24/getting-started-with-opencv-on-beagleboard-xm/ . You can also use OpenCV on embedded Linux if you use Buildroot or OpenEmbedded to build your image.

Implementing a MySQL Client

Let's say a situation arises where you have to access a MySQL server running in some part of the world, and for some reason the MySQL client/connector is not available for your platform (I am talking about embedded, just in case you are wondering), or, though the source is available, getting it to compile cleanly for your platform is just too much of a f****** headache. Well, a much simpler way exists: just implement the MySQL client/server protocol using the article below.

http://dev.mysql.com/doc/internals/en/client-server-protocol.html

Had I known that such a protocol document was available, giving details of how the MySQL client and server communicate with each other, I could have saved so much time and effort. If anyone else out there needs a MySQL client/connector to communicate with a MySQL server and support is not available for their platform, I hope this will be of help.

N.B. I hope it's implicit that you need a TCP/IP connection and will need to use socket programming.