Posts Tagged ‘OpenCV’

A classmate of mine, Thomas LaBruyere (LinkedIn profile here), and I recently worked on a panorama app for Android using the OpenCV stitching module. Here is the GUI of the app:

Panorama app GUI

We built the app on a Google Nexus 7 running Android 4.1 (Jelly Bean). The basic functionality of the app is as follows:

1) It has four buttons: “Start Video Capture”, “Capture Still Image”, “Stitch”, and “View Stitched Images”.

2) The app starts in normal video mode, showing the video from the front camera. There are two main ways to capture and stitch a panorama. The first is the “Start Video Capture” button: a video of a couple of seconds is recorded while you move the camera around to capture your surroundings. Then press the “Stitch” button, and after a couple of seconds the stitched panorama image is saved to the SD card. The second is the “Capture Still Image” button: click it to store an image each time you want one as you move the camera around. Once done capturing, click the “Stitch” button to complete the panorama stitching; the stitched output is saved to the SD card. You can change the path in the code to save it elsewhere on disk. The “View Stitched Images” button should open the gallery to browse the stitched images, but it has a bug that hasn’t been fixed yet, so it doesn’t work as expected; navigate to the SD card folder manually to see the image.

The panorama stitching algorithm is implemented through the OpenCV stitching module. The Android JNI interface communicates with the native OpenCV C/C++ code.

We did not have any previous experience with Android programming or with linking OpenCV. We used the following link to get started:

Getting started with Android and OpenCV

Here is the output of a panorama that we took in class.

Panorama output

This project is old and may give some compilation errors; I no longer have the setup to fix it. I would highly recommend compiling the example projects from opencv4android and integrating the code parts below into them. Nevertheless, the project folder can be downloaded for reference from here.

The two main source files from the project are shown below:

1) OpenCV code to stitch the images (jni_part.cpp):

#include <jni.h>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/stitching/stitcher.hpp>

#include <vector>
#include <iostream>
#include <stdio.h>
#include <list>
#include <sstream>
#include <string>

using namespace std;
using namespace cv;

extern "C" {
//JNIEXPORT Mat JNICALL Java_org_opencv_samples_tutorial3_Sample3Native_FindFeatures(JNIEnv*, jobject, jlong addrGray, jlong addrRgba)

JNIEXPORT void JNICALL Java_org_opencv_samples_tutorial3_Sample3Native_FindFeatures(
		JNIEnv*, jobject, jlong im1, jlong im2, jlong im3, jint no_images) {

	vector<Mat> imgs;
	bool try_use_gpu = false;
	Mat& temp1 = *((Mat*) im1);
	Mat& temp2 = *((Mat*) im2);
	Mat& pano = *((Mat*) im3);

	// Read back the temporary images that the Java side saved to disk
	for (int k = 0; k < no_images; ++k) {
		string id;
		ostringstream convert;
		convert << k;
		id = convert.str();
		Mat img = imread("/storage/emulated/0/panoTmpImage/im" + id + ".jpeg");
		imgs.push_back(img);
	}

	// Stitch the collected images into the panorama Mat shared with the Java side
	Stitcher stitcher = Stitcher::createDefault(try_use_gpu);
	Stitcher::Status status = stitcher.stitch(imgs, pano);
}
}

2) Android code that builds the main GUI and calls the OpenCV function (

package org.opencv.samples.tutorial3;

import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

// Imports below completed; the original listing was truncated here
import org.opencv.android.BaseLoaderCallback;
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewListener;
import org.opencv.android.LoaderCallbackInterface;
import org.opencv.android.OpenCVLoader;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.highgui.Highgui;

import android.content.Intent;
import android.os.Bundle;
import android.os.Environment;
import android.os.Handler;
import android.util.Log;
import android.view.Menu;
import android.view.MenuItem;
import android.view.View;
import android.view.Window;
import android.view.WindowManager;
import android.widget.Button;

public class Sample3Native extends Activity implements CvCameraViewListener {
	private static final String TAG = "OCVSample::Activity";

	public static final int VIEW_MODE_RGBA = 0;
	public static final int SAVE_IMAGE_MAT = 1;
	public static final int CAPT_STILL_IM = 2;
	private static int viewMode = VIEW_MODE_RGBA;
//	public static int image_count = 0;
	private MenuItem mStitch;
	private MenuItem mItemCaptureImage;
	private Mat mRgba;
	private Mat mGrayMat;
	private Mat panorama;
	private Mat mtemp;
	private List < Mat > images_to_be_stitched = new ArrayList < Mat >();
	private CameraBridgeViewBase mOpenCvCameraView;
	private long mPrevTime = new Date().getTime();
	private static final int FRAME2GRAB = 10;
	private int mframeNum = 0;
	private static final File tempImageDir = new File(Environment.getExternalStorageDirectory() + File.separator + "panoTmpImage");
	private static final File StitchImageDir = new File(Environment.getExternalStorageDirectory()+ File.separator  + "panoStitchIm");
	private static final String mImageName = "im";
	private static final String mImageExt = ".jpeg";
	private long recordStart = new Date().getTime();
	private static final long MAX_VIDEO_INTERVAL_IN_SECONDS = 3 * 1000; // 3 seconds, expressed in milliseconds
	public final Handler mHandler = new Handler();

	// Create runnable for posting
    final Runnable mUpdateResults = new Runnable() {
        public void run() {

    private void updateResultsInUi()


	private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
		public void onManagerConnected(int status) {
			switch (status) {
			case LoaderCallbackInterface.SUCCESS: {
				Log.i(TAG, "OpenCV loaded successfully");

				// Load native library after(!) OpenCV initialization

			default: {

	public Sample3Native() {
		Log.i(TAG, "Instantiated new " + this.getClass());

	/** Called when the activity is first created. */
	public void onCreate(Bundle savedInstanceState) {
		Log.i(TAG, "called onCreate");


		final Button btnVidCapt = (Button) findViewById(;
		btnVidCapt.setOnClickListener(new View.OnClickListener() {
            public void onClick(View v) {

		final Button btnStitch = (Button) findViewById(;
		btnStitch.setOnClickListener(new View.OnClickListener() {
            public void onClick(View v) {

		final Button btnViewStitchedIm = (Button) findViewById(;
		btnViewStitchedIm.setOnClickListener(new View.OnClickListener() {
            public void onClick(View v) {

		final Button btnCapStil = (Button) findViewById(;
		btnCapStil.setOnClickListener(new View.OnClickListener() {
            public void onClick(View v) {
		mOpenCvCameraView = (CameraBridgeViewBase) findViewById(;

	public void onPause() {
		if (mOpenCvCameraView != null)

	public void onResume() {
		OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_3, this,

	public void onDestroy() {
		if (mOpenCvCameraView != null)

	public void onCameraViewStarted(int width, int height) {
		mRgba = new Mat(height, width, CvType.CV_8UC3);
		mGrayMat = new Mat(height, width, CvType.CV_8UC1);
		mtemp = new Mat(height, width, CvType.CV_8UC3);
		panorama = new Mat(height, width, CvType.CV_8UC3);

	public void onCameraViewStopped() {

	public Mat onCameraFrame(Mat inputFrame) {
		switch (Sample3Native.viewMode) {
		case Sample3Native.VIEW_MODE_RGBA: {
			Core.putText(mRgba, "Video Mode", new Point(10, 50), 3, 1, new Scalar(255, 0, 0, 255), 2);
			// Update start recordtime until starting recording
		case Sample3Native.SAVE_IMAGE_MAT: {
			long curTime = new Date().getTime();
			Core.putText(mRgba, "Record Mode", new Point(10, 50), 3, 1, new Scalar(255, 0, 0, 255), 2);
			long timeDiff = curTime - recordStart;
			Log.i("timeDiff", Long.toString(timeDiff));

				if ((mframeNum % FRAME2GRAB) == 0) {
				mframeNum = 0;
		case Sample3Native.CAPT_STILL_IM :
			Sample3Native.viewMode = Sample3Native.VIEW_MODE_RGBA;
		return mRgba;

	public void startVidCap() {
		if (Sample3Native.viewMode == Sample3Native.VIEW_MODE_RGBA)
		else if (Sample3Native.viewMode == Sample3Native.SAVE_IMAGE_MAT)

	private void turnOffCapture()

		Sample3Native.viewMode = Sample3Native.VIEW_MODE_RGBA;

	private void turnOnCapture()

		Sample3Native.viewMode = Sample3Native.SAVE_IMAGE_MAT;
//		startVidCapture.setText("Stop Video Capture");
		recordStart = new Date().getTime();


	public void stitchImages() {
			for (int j = 0; j < images_to_be_stitched.size(); j++) {
				writeImage(images_to_be_stitched.get(j), j);
		Log.i("stitchImages", "Done writing 2 disk. Starting stitching " + images_to_be_stitched.size() + " images");
					panorama.getNativeObjAddr(), images_to_be_stitched.size());
		Log.i("stitchImages", "Done stitching. Writing panorama");

		Log.i("stitchImages", "deleting temp files");


    public void captStillImage()
    	Sample3Native.viewMode = Sample3Native.CAPT_STILL_IM;


	private String getFullFileName( int num)
		return mImageName + num + mImageExt;

	private void writeImage(Mat image, int imNum)
		writeImage(image, getFullFileName(imNum));

	private void writeImage(Mat image, String fileName) {
		File createDir = tempImageDir;
		Highgui.imwrite(tempImageDir+File.separator + fileName, image);

	private void writePano(Mat image)
		Date dateNow = new  Date();
		SimpleDateFormat dateFormat = new SimpleDateFormat("yyyyMMdd_HHmmss");
		Highgui.imwrite(StitchImageDir.getPath()+ File.separator + "panoStich"+dateFormat.format(dateNow) +mImageExt, image);


	private void deleteTmpIm()
		File curFile;
		for (int j = 0; j < images_to_be_stitched.size(); j++) {
			curFile = new File(getFullFileName(j));

	public void viewStitchImages()

		Intent intent = new Intent(this, GalleryActivity.class);


	private void saveImageToArray(Mat inputFrame) {

	private int FPS() {
		long curTime = new Date().getTime();
		int FPS = (int) (1000 / (curTime - mPrevTime));
		mPrevTime = curTime;
		return FPS;

	public boolean onCreateOptionsMenu(Menu menu) {
		return true;


	public boolean onOptionsItemSelected(MenuItem item) {
	return true;

	// public native void FindFeatures(List pano_images, Long stitch );
	public native void FindFeatures(long image1, long image2, long image3,
			int count);

The code snippet shown below performs simple stitching of two images in OpenCV. It can easily be modified to stitch multiple images together and create a panorama.

OpenCV also has a stitching module that achieves this task and is more robust than this code; the code presented here is meant to help in understanding the major steps of an image stitching algorithm. I am using OpenCV 2.4.3 and Visual Studio 2010. This code is based on the OpenCV tutorial available here.

The main steps of the stitching algorithm are: 1) finding SURF descriptors in both images; 2) matching the SURF descriptors between the two images; 3) using RANSAC to estimate the homography matrix from the matched descriptors; 4) warping the images based on the homography matrix.

Input images:

Stitched Output:


#include <stdio.h>
#include <iostream>

#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/nonfree/nonfree.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/imgproc/imgproc.hpp"

using namespace cv;

void readme();

/** @function main */
int main( int argc, char** argv )
{
 if( argc != 3 )
 { readme(); return -1; }

 // Load the images
 Mat image1 = imread( argv[2] );
 Mat image2 = imread( argv[1] );
 Mat gray_image1;
 Mat gray_image2;

 if( ! || ! )
 { std::cout << " --(!) Error reading images " << std::endl; return -1; }

 // Convert to grayscale
 cvtColor( image1, gray_image1, CV_RGB2GRAY );
 cvtColor( image2, gray_image2, CV_RGB2GRAY );

 imshow( "first image", image2 );
 imshow( "second image", image1 );

 //-- Step 1: Detect the keypoints using the SURF detector
 int minHessian = 400;

 SurfFeatureDetector detector( minHessian );

 std::vector< KeyPoint > keypoints_object, keypoints_scene;

 detector.detect( gray_image1, keypoints_object );
 detector.detect( gray_image2, keypoints_scene );

 //-- Step 2: Calculate descriptors (feature vectors)
 SurfDescriptorExtractor extractor;

 Mat descriptors_object, descriptors_scene;

 extractor.compute( gray_image1, keypoints_object, descriptors_object );
 extractor.compute( gray_image2, keypoints_scene, descriptors_scene );

 //-- Step 3: Match descriptor vectors using the FLANN matcher
 FlannBasedMatcher matcher;
 std::vector< DMatch > matches;
 matcher.match( descriptors_object, descriptors_scene, matches );

 double max_dist = 0; double min_dist = 100;

 //-- Quick calculation of max and min distances between keypoints
 for( int i = 0; i < descriptors_object.rows; i++ )
 {
   double dist = matches[i].distance;
   if( dist < min_dist ) min_dist = dist;
   if( dist > max_dist ) max_dist = dist;
 }

 printf("-- Max dist : %f \n", max_dist );
 printf("-- Min dist : %f \n", min_dist );

 //-- Use only "good" matches (i.e. whose distance is less than 3*min_dist)
 std::vector< DMatch > good_matches;

 for( int i = 0; i < descriptors_object.rows; i++ )
 {
   if( matches[i].distance < 3*min_dist )
   { good_matches.push_back( matches[i] ); }
 }

 std::vector< Point2f > obj;
 std::vector< Point2f > scene;

 for( size_t i = 0; i < good_matches.size(); i++ )
 {
   //-- Get the keypoints from the good matches
   obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
   scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
 }

 //-- Step 4: Find the homography matrix with RANSAC
 Mat H = findHomography( obj, scene, CV_RANSAC );

 // Use the homography matrix to warp the first image into the second image's frame
 cv::Mat result;
 warpPerspective( image1, result, H, cv::Size( image1.cols + image2.cols, image1.rows ) );
 // Copy the second image into the left half of the result
 cv::Mat half( result, cv::Rect( 0, 0, image2.cols, image2.rows ) );
 half );
 imshow( "Result", result );

 waitKey( 0 );
 return 0;
}

/** @function readme */
void readme()
{ std::cout << " Usage: Panorama <img1> <img2>" << std::endl; }

Running the code :

Build the code and pass the two images to be stitched as arguments to the generated exe. If the stitched output is not correct, reversing the order of the two images passed to the exe can help.

This is an example showing integration of OpenCV and PCL (Point Cloud Library), using OpenCV (highgui) trackbars to adjust the x, y, z limits of PCL’s passthrough filter.

It is assumed that you have already installed OpenCV and PCL. Make sure you have OpenCV version > 2.3 and PCL version > 1.1. In my case I am using OpenCV 2.3.1 and PCL 1.6 (compiled from the current trunk).

Before downloading and running the code, you might want to see what the output looks like.

Running the code:

This code takes a PLY file as input, applies passthrough filters to it, and visualizes the result. You can adjust the X, Y, Z limits of the passthrough filters using the trackbars and see the output on the fly.

Step 1: Download the code folder and extract it.

Step 2: Create a folder named “build” inside it. Open CMake and, in the “Where is the source code” field, provide the path of the folder, e.g. C:/Users/Sanmarino/Downloads/Integrating_Opencv_PCL_PassthroughFilters. In the “Where to build the binaries” field, provide the path to the empty build folder you created, e.g. C:/Users/Sanmarino/Downloads/Integrating_Opencv_PCL_PassthroughFilters/build. Then click Configure at the bottom and choose the compiler you want to use when prompted. In my case I chose Visual Studio 10.


Step 3: If everything is fine after configuring, click Generate. Otherwise, you might get an error like this:

“CMake Error at CMakeLists.txt:6 (find_package): By not providing “FindOpenCV.cmake” in CMAKE_MODULE_PATH this project has asked CMake to find a package configuration file provided by “OpenCV”, but CMake did not find one.

Could not find a package configuration file provided by “OpenCV” with any of the following names: OpenCVConfig.cmake opencv-config.cmake

Add the installation prefix of “OpenCV” to CMAKE_PREFIX_PATH or set “OpenCV_DIR” to a directory containing one of the above files. If “OpenCV” provides a separate development package or SDK, be sure it has been installed.

If this is the case, we have to set OpenCV_DIR to the location where CMake can find OpenCVConfig.cmake. Click on OpenCV_DIR-NOTFOUND and change it to the path where OpenCVConfig.cmake exists. In my case it was C:\OpenCV\build, where I had built the OpenCV binaries. Now hit Configure again and then Generate.

Changed path of OpenCV_DIR:

Step 4: Now open the generated solution and build it in Release/Debug mode. Place scene_mesh.ply from the unzipped code folder in the folder where simple_visualizer.exe is generated, and run simple_visualizer.exe from the command prompt with scene_mesh.ply as the argument.

e.g. C:\Users\Sanmarino\Downloads\Integrating_Opencv_PCL_PassthroughFilters\build\Release> simple_visualizer.exe scene_mesh.ply. Move the trackbars to apply the passthrough filters to the point cloud and see the output.


#include <iostream>

// Point cloud library
#include <pcl/point_cloud.h>
#include <pcl/io/pcd_io.h>
#include <pcl/io/ply_io.h>
#include <pcl/point_types.h>
#include <pcl/filters/passthrough.h>
#include <pcl/visualization/pcl_visualizer.h>

// Opencv
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

// These are the trackbar initial settings, adjusted so the given point cloud is completely visible.
// They need to be adjusted for the xyz limits of any new point cloud.

int a = 22;
int b = 12;
int c = 10;

// PCL Visualizer to view the pointcloud
pcl::visualization::PCLVisualizer viewer ("Simple visualizing window");

int main (int argc, char** argv)
{
	pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
	pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_filtered (new pcl::PointCloud<pcl::PointXYZ>);

	if (pcl::io::loadPLYFile (argv[1], *cloud) == -1) // load the ply file from command line
	{
		PCL_ERROR ("Couldn't load the file\n");
		return (-1);
	}

	// Keep an unfiltered copy so the filters can be re-applied from scratch each iteration
	pcl::copyPointCloud (*cloud, *cloud_filtered);

	float i;
	float j;
	float k;

	cv::namedWindow ("picture");

	// Creating trackbars using OpenCV to control the PCL filter limits
	cvCreateTrackbar ("X_limit", "picture", &a, 30, NULL);
	cvCreateTrackbar ("Y_limit", "picture", &b, 30, NULL);
	cvCreateTrackbar ("Z_limit", "picture", &c, 30, NULL);

	// Loop: re-filter with the current trackbar limits and display the point cloud
	// until Esc (27) is pressed
	char last_c = 0;
	while (last_c != 27)
	{
		// Restore the unfiltered cloud before applying the filters again
		pcl::copyPointCloud (*cloud_filtered, *cloud);

		// i, j, k need to be adjusted depending on the point cloud and its xyz limits
		// if used with new point clouds.
		i = 0.1f * ((float) a);
		j = 0.1f * ((float) b);
		k = 0.1f * ((float) c);

		// Printing to ensure that the passthrough filter values change when the trackbars move
		std::cout << "i = " << i << " j = " << j << " k = " << k << std::endl;

		// Applying passthrough filters with XYZ limits
		pcl::PassThrough<pcl::PointXYZ> pass;
		pass.setInputCloud (cloud);
		pass.setFilterFieldName ("y");
		pass.setFilterLimits (-k, k);
		pass.filter (*cloud);

		pass.setInputCloud (cloud);
		pass.setFilterFieldName ("x");
		pass.setFilterLimits (-j, j);
		pass.filter (*cloud);

		pass.setInputCloud (cloud);
		pass.setFilterFieldName ("z");
		pass.setFilterLimits (-i, i);
		pass.filter (*cloud);

		// Visualizing the point cloud (remove the previous copy before adding the update)
		viewer.removePointCloud ("scene_cloud");
		viewer.addPointCloud (cloud, "scene_cloud");
		viewer.spinOnce ();

		// waitKey also pumps the highgui event loop so the trackbars respond
		last_c = (char) cv::waitKey (10);
	}
	return (0);
}


cmake_minimum_required(VERSION 2.8 FATAL_ERROR)


find_package(PCL 1.4 REQUIRED)
find_package(OpenCV REQUIRED)

include_directories(${PCL_INCLUDE_DIRS} )
link_directories(${PCL_LIBRARY_DIRS} )
add_definitions(${PCL_DEFINITIONS} )

add_executable (simple_visualizer simple_visualizer.cpp)
target_link_libraries (simple_visualizer ${PCL_LIBRARIES} ${OpenCV_LIBS})