Michał Kazimierz Kowalczyk

weblog

Necessitas: a solution for the lack of Camera support


In this article I will present my solution for using the Camera in Necessitas (Qt on Android) projects. In the current version of Necessitas (0.3.4) the Camera is not supported. I looked for existing solutions, but the only one I found, a patch for using MultimediaKit (proposed by Dr. Juan Manuel Sáez), was related to old versions of Necessitas. I tried to use it, but it seems that those old versions are now unstable (NecessitasQtCreator crashes while building the project).

I decided to use JNI to get access to the Camera (with a little help from Java (-: ) in a Qt application.

I created two CameraSupport classes:

  1. one in Java, to grab every frame using the Android API,
  2. one in C++, to get the frames from Java and retrieve their data.

The first class is really simple and doesn't contain anything special (maybe the use of additional buffers is the only non-typical thing). To build it we need Android SDK level 11 (or higher). If you prefer to use a lower SDK level, you can simply delete the lines containing preferredPreviewSize.

The second class is a little bit more complicated, because:

  1. it uses JNI to communicate with the Java class,
  2. the Android SDK delivers frames in YUV format, so I need to decode them to get RGB values,
  3. I'm using a Samsung Galaxy Tab 10.1 with a dual-core processor, so I wanted to take advantage of it by splitting the YUV to RGB conversion across two threads.

I explained a little how to make C++ and Java classes communicate in my previous article. In this article I will use that solution.
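
For reference, below is a minimal sketch of that call pattern. The Java-side method name updateFrame and its ()Z signature are made up for illustration only; the real names are in the sources linked below. The cameraSupportClassPointer global reference is created in JavaClassesLoader.cpp, shown near the end of this article:

#include <jni.h>

extern jobject cameraSupportClassPointer;

// Asks the Java CameraSupport object whether a new frame is ready.
// (Obtaining the JNIEnv pointer for the current thread is omitted here.
// Looking the method up on every call is done for clarity; caching the
// jmethodID after the first lookup would be faster.)
bool callJavaUpdateFrame (JNIEnv *env)
{
    jclass clazz = env -> GetObjectClass (cameraSupportClassPointer);
    jmethodID method = env -> GetMethodID (clazz, "updateFrame", "()Z");
    if (!method)
        return false;
    return env -> CallBooleanMethod (cameraSupportClassPointer, method) == JNI_TRUE;
}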

For a long time I was looking for an optimal way of decoding a YUV image to RGB. I was using formulas I found on Wikipedia (I chose the bitwise version). To make it faster I did all the calculations up front and put the results into arrays, but I was still using a clamping function. I found a nice idea for improving this in the article Optimizing YUV-RGB Color Space Conversion Using Intel’s SIMD Technology by Étienne Dupuis. He suggested removing the clamping function and using instead a predefined array with the proper number of zeros at the beginning and 255s at the end. That was the end of my optimization. If anyone has an idea how to make it better, please share it.
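
To illustrate the idea, here is a minimal sketch of such a clamp table combined with the bitwise integer formulas from Wikipedia. This shows the technique, not the exact code from my class; the table is sized generously so that every intermediate value of the formulas stays inside it:

#include <cstring>

// clampTable[512 + x] equals x clamped to [0, 255], for any x in [-512, 1023]:
// 512 zeros, the identity ramp 0..255, then 768 copies of 255.
static unsigned char clampTable[1536];

static void initClampTable ()
{
    memset (clampTable, 0, 512);
    for (int i = 0; i < 256; i++)
        clampTable[512 + i] = (unsigned char) i;
    memset (clampTable + 768, 255, 768);
}

// Integer YUV -> packed ARGB using the bitwise formulas; every min/max
// call is replaced by a single table lookup.
static inline unsigned int yuvToArgb (int y, int u, int v)
{
    int c = 298 * (y - 16) + 128;
    int d = u - 128;
    int e = v - 128;
    unsigned int r = clampTable[512 + ((c + 409 * e) >> 8)];
    unsigned int g = clampTable[512 + ((c - 100 * d - 208 * e) >> 8)];
    unsigned int b = clampTable[512 + ((c + 516 * d) >> 8)];
    return 0xFF000000u | (r << 16) | (g << 8) | b;
}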

I guess this is the first time in my life I have felt the need to thread something (yes, I'm quite young (-: ). You can find the result of my work in the C++ class. I designed it to work with the optimal number of threads. I discovered that, because of pre-emption, one thread can take much longer over its job than the other. My solution doesn't divide the data into two equally sized portions (in the case of a dual-core processor), but gives each thread smaller portions; when a thread finishes its portion and there is still data left to process, it takes the next portion. Of course, if you have a better solution or you have found an error, please contact me. (-:
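
The scheme can be sketched as follows. This is an illustration using std::thread and std::atomic, not the code from my class: a shared counter hands out small portions of rows, so a thread slowed down by pre-emption simply ends up claiming fewer portions:

#include <algorithm>
#include <atomic>
#include <thread>
#include <vector>

// convertRows is assumed to convert rows [firstRow, lastRow) of the frame.
void convertInPortions (int height, int rowsPerPortion, int numThreads,
                        void (*convertRows) (int firstRow, int lastRow))
{
    std::atomic<int> nextRow (0);
    auto worker = [&] () {
        for (;;){
            // Atomically claim the next small portion of rows.
            int first = nextRow.fetch_add (rowsPerPortion);
            if (first >= height)
                break;
            convertRows (first, std::min (first + rowsPerPortion, height));
        }
    };

    std::vector<std::thread> pool;
    for (int i = 0; i < numThreads; i++)
        pool.emplace_back (worker);
    for (size_t i = 0; i < pool.size (); i++)
        pool[i].join ();
}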

I won't analyse all my code here; I'll just show you how to use it.

First, copy these files (you can find the sources here) to your project (at the respective paths):

  1. PROJECT_NAME/android/src/pl/ekk/mkk/necessitas/CameraSupport.java
  2. PROJECT_NAME/camerasupport.cpp
  3. PROJECT_NAME/camerasupport.h
  4. PROJECT_NAME/JavaClassesLoader.cpp (if you don't have it already)

Second, add the existing files to your Necessitas project.

Third, place this code into the header file of your MainWindow class (or another class):

#include <QTimer>

class CameraSupport;

#define MILISECONDS_FOR_REFRESH 1
#define WIDTH 640
#define HEIGHT 480

In my case, I use frames with a resolution of 640 x 480. On my tablet this works at 30 FPS. If you want another resolution, just change the values above. At the moment of Camera initialization you can find in the Application Output a line like this:

V/CAMERA SUPPORT(30977): Preferred preview size - 1024x768

which tells you the preferred frame size (remember that you need Android SDK 11 or higher to get this information).

In the private section add this code:

    QTimer *timer;
    QImage *frame;

    CameraSupport *cameraSupport;
    bool repaint;

    unsigned int totalFrames;
    unsigned long long *frameTime;
    unsigned int currentFrame;
    unsigned int fps;

    unsigned long long frameCounter;
    clock_t time1, time2;

First, we need a QTimer to poll for new frames at a proper frequency. Of course, in the case of big frames, retrieving takes so long that the time between fetching new ones can be really small. Still, in the case of small images it's good to set this interval longer.

Second, we need a QImage to use the frame data in the Qt application. If you prefer other image containers, you can use them as well, provided they support loading data from an RGBA array.

Third, we need a CameraSupport object to ask about new frames and, if there is one, to fetch it.

To avoid painting the same frame twice I use the repaint flag.

Below the repaint variable you can find some diagnostic variables used to calculate FPS, measure the execution time of parts of the code, etc.

Now, let's add this into the protected section:

    void paintEvent (QPaintEvent *event);

We will use this method for displaying frames.

The last thing you need to add to your header file is a slot for the QTimer:

private slots:
    void updateFrame();

Now, let's modify the cpp file. In the place where you want to start using your camera (in my case, the constructor) put this code:

    timer = new QTimer (this);
    connect (timer, SIGNAL (timeout ()), this, SLOT (updateFrame ()));
    timer -> start (MILISECONDS_FOR_REFRESH);

    frame = 0;

    cameraSupport = new CameraSupport (WIDTH, HEIGHT);

    repaint = false;

    totalFrames = 1000 / MILISECONDS_FOR_REFRESH;
    frameTime = new unsigned long long[totalFrames];
    for (unsigned int i = 0; i < totalFrames; i++)
        frameTime[i] = 0;
    currentFrame = 0;
    fps = 0;

    frameCounter = 0;
    time1 = time2 = 0;

I think this code is quite clear; there is nothing special to explain. So now, let's define the updateFrame () slot:

void MainWindow::updateFrame (){
    clock_t start = clock ();
    bool result = cameraSupport -> UpdateFrame ();
    clock_t stop = clock ();

    if (result){
        if (frame != 0){
            delete frame;
            frame = 0;
        }
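        // Note: this QImage constructor does not copy the buffer, so the data
        // returned by GetRGBA () must stay valid for as long as the image is used.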
        frame = new QImage ((unsigned char *)cameraSupport -> GetRGBA (), WIDTH, HEIGHT, QImage::Format_ARGB32_Premultiplied);
        repaint = true;

        time1 += stop - start;
        update (0, 0, WIDTH, HEIGHT);

        frameTime[currentFrame] = clock();

        fps++;
        // add totalFrames before subtracting so the unsigned index doesn't wrap
        while (frameTime[currentFrame] - frameTime[(currentFrame + totalFrames - fps) % totalFrames] > CLOCKS_PER_SEC)
            fps--;

        if (currentFrame < totalFrames - 1)
            currentFrame++;
        else
            currentFrame = 0;
    }
}

The UpdateFrame () method of the CameraSupport class returns true if a new frame was loaded and false otherwise. To create a QImage with the new frame we just use the GetRGBA () method of the CameraSupport class. Notice that I use the QImage::Format_ARGB32_Premultiplied format for my QImage. In my case, I display the frames on the screen using QPainter, and the format I chose makes some QPainter operations faster than QImage::Format_ARGB32.

Now, let's define paintEvent ():

void MainWindow::paintEvent (QPaintEvent *event){
    if (!repaint)
        return;
    repaint = false;

    if (frame){
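        // time2 accumulates the total drawing time across frames:
        // subtract the tick count here, add it back after drawing.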
        time2 -= clock ();
        QPoint topLeft = event -> rect ().topLeft ();

        QPainter displayPainter (this);
        displayPainter.drawImage (topLeft, *frame);

        time2 += clock ();

        frameCounter++;
        qDebug () << frameCounter << ": " << time1 / frameCounter << ", " << time2 / frameCounter << ". FPS: " << fps ;
    }
}

We must remember to clean up the memory we allocated! In my project, you can find these lines in the destructor:

    delete cameraSupport;

    if (frame != 0)
        delete frame;
    frame = 0;

    delete []frameTime;

If you are using the JavaClassLoader function from my previous article, add these lines:

    {
        const char* className = "pl/ekk/mkk/necessitas/CameraSupport";
        jclass clazz = env -> FindClass (className);
        if (!clazz){
            __android_log_print (ANDROID_LOG_FATAL,"Qt", "Unable to find class '%s'", className);
            return JNI_FALSE;
        }
        jmethodID constr = env -> GetMethodID(clazz, "<init>", "()V");
        if (!constr){
            __android_log_print (ANDROID_LOG_FATAL,"Qt", "Unable to find constructor for class '%s'", className);
            return JNI_FALSE;
        }
        jobject obj = env -> NewObject (clazz, constr);
        cameraSupportClassPointer = env->NewGlobalRef(obj);
    }

Also, add this line outside the function:

jobject cameraSupportClassPointer;

The variable's name is important because it's used in the CameraSupport class. If you are using another solution to load Java classes, remember to create this object.

Now, your project should work with the Camera on Android! But there is still something to do. You may notice that painting the QImage takes a lot of time. To make it faster we can take advantage of hardware acceleration by using OpenGL. Simply add this line to your *.pro file:

QT_GRAPHICSSYSTEM = opengl
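
If that variable is not picked up in your setup, Qt 4 also allows selecting the graphics system in code. A minimal sketch, assuming your window class lives in mainwindow.h; note that setGraphicsSystem () must be called before the QApplication object is constructed:

#include <QApplication>
#include "mainwindow.h"

int main (int argc, char *argv[])
{
    // Must run before the QApplication constructor.
    QApplication::setGraphicsSystem ("opengl");

    QApplication app (argc, argv);
    MainWindow window;
    window.show ();
    return app.exec ();
}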

And that's all!

Now, the question is: is it possible to make it better? I guess it is!

The first thing I would change is the way of displaying frames. This inconspicuous operation takes a lot of time. I found an article, How to get faster Qt painting on N810 right now, which could be helpful for you.

Another idea is to use your own version of QPaintEngine. Some clues can be found here: QGLWidget and hardware acceleration?

I guess there is not much more that can be done with the YUV to RGB conversion itself. Maybe there exists a better algorithm which does fewer memory read/write operations. I was thinking about setting two pixels at a time (by using an unsigned long long int* instead of an unsigned int*), but that will only pay off on 64-bit architectures. You can always write some part of your code in assembler. If you want to look for more savings, you can try to find a better solution for getting the YUV data from Java.
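
For the record, the two-pixels-at-a-time idea would look roughly like this (a sketch only; it assumes a little-endian CPU and an even, 8-byte-aligned destination index):

// Pack two neighbouring 32-bit ARGB pixels into one 64-bit word and
// write them with a single store.
inline void storeTwoPixels (unsigned int *rgba, int index,
                            unsigned int left, unsigned int right)
{
    unsigned long long packed =
        ((unsigned long long) right << 32) | left;
    // On a little-endian CPU, 'left' lands at the lower address.
    *(unsigned long long *) (rgba + index) = packed;
}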

Last night I found a technology which I have never used before: OpenCL. Maybe this could also be useful for decreasing the time of the YUV to RGB conversion or for displaying the frame on the screen? Does anyone have some experience with this?

Last but not least: quality. If you want better quality video frames (after the YUV to RGB conversion), consider recalculating the precomputed arrays. It is really important to notice that a lot of algorithms require Y to be in the range <16, 235> and U, V to be in the range <16, 240> (the YUV / YCbCr color component data ranges), while what you really get is <0, 255> for all components. You can read more here: About YUV VIDEO.
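
For example, for full-range input the JFIF-style coefficients (1.402, 0.344, 0.714 and 1.772, scaled by 256 for the same fixed-point style) can be used instead of the ones above; a sketch reusing the clamp table from the earlier conversion example:

// Full-range YUV (JFIF style) -> packed ARGB. clampTable (with its +512
// offset) is the same lookup table as in the sketch earlier in this article.
static inline unsigned int fullRangeYuvToArgb (int y, int u, int v)
{
    int d = u - 128;
    int e = v - 128;
    unsigned int r = clampTable[512 + y + ((359 * e) >> 8)];           // 1.402 * 256
    unsigned int g = clampTable[512 + y - ((88 * d + 183 * e) >> 8)];  // 0.344, 0.714 * 256
    unsigned int b = clampTable[512 + y + ((454 * d) >> 8)];           // 1.772 * 256
    return 0xFF000000u | (r << 16) | (g << 8) | b;
}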

If you have some ideas, found some errors, or just found this article interesting, please share your opinion with me.

MKK

Hosted on eKK.pl.