Android MediaRecorder with Camera

Our animated GIF and stop-motion maker application (Gifagram) relies heavily on the Android camera support. Two good starting points for camera and video recording on Android are:

http://developer.android.com/reference/android/media/MediaRecorder.html
http://www.codesend.com/view/73d3edc51302c0e6a2fa3075db6fa1ab/

The first link is the Android SDK MediaRecorder class and the second one is a lengthy example.

Unfortunately, the SDK documentation on camera and video is poor, and a lot of device-specific behavior is undocumented.

For instance, calling MediaRecorder.setProfile() with the QUALITY_HIGH CamcorderProfile sets an invalid video size on the Samsung Galaxy SII (a height of 1080 instead of the 1088 the hardware requires). The same code throws an exception on the HTC Wildfire, although on other HTC phones everything works great. This is heavily hardware dependent, which can drive you crazy.
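One defensive measure (a sketch of my own, not a guaranteed fix, assuming the usual android.hardware.Camera and java.util.List imports) is to cross-check the profile against the sizes the driver actually reports before trusting it:

    // Sanity check: ask the driver which video sizes it actually supports
    // before trusting the CamcorderProfile's numbers. getSupportedVideoSizes()
    // (API 11+) may return null on devices where the video size is tied to the
    // preview size, so we fall back to the preview sizes in that case.
    boolean profileSizeIsSupported(Camera camera, CamcorderProfile profile) {
        Camera.Parameters params = camera.getParameters();
        List<Camera.Size> sizes = params.getSupportedVideoSizes();
        if (sizes == null) {
            sizes = params.getSupportedPreviewSizes();
        }
        for (Camera.Size s : sizes) {
            if (s.width == profile.videoFrameWidth && s.height == profile.videoFrameHeight) {
                return true;
            }
        }
        return false;
    }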

Video recording using a Camera object

The following code snippet sets up the camera:

boolean setupVideoRecording(int use_profile) {
    boolean ret = true;
    try {
        // Release any previous recorder before reconfiguring.
        if (recorder != null) {
            recorder.release();
            recorder = null;
        }

        // Reuse the camera if we already own it, otherwise open it.
        if (camera != null) {
            camera.reconnect();
        } else {
            camera = Camera.open();
        }

        CamcorderProfile profile;
        if (use_profile == 0) {
            profile = CamcorderProfile.get(CamcorderProfile.QUALITY_LOW);
        } else {
            profile = CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH);
        }

        Camera.Parameters params = camera.getParameters();
        List<Camera.Size> sizes = params.getSupportedPreviewSizes();

        // Pick the supported preview size closest to the profile's frame size.
        Camera.Size optimalSize =
                getOptimalPreviewSize(sizes, profile.videoFrameWidth, profile.videoFrameHeight);

        params.setPreviewSize(optimalSize.width, optimalSize.height);
        params.setPreviewFrameRate(profile.videoFrameRate);
        params.setColorEffect(filter);
        camera.setParameters(params);
        setCameraDisplayOrientation((Activity) context, 0, camera);

        camera.setPreviewDisplay(holder);
        camera.startPreview();
        // The camera must be unlocked before handing it to MediaRecorder.
        camera.unlock();

        recorder = new MediaRecorder();
        recorder.setCamera(camera);

        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);

        if (use_profile == 2) {
            // Configure manually, using the preview size we actually got
            // instead of the profile's nominal size (see the Galaxy SII
            // issue above).
            recorder.setOutputFormat(profile.fileFormat);
            recorder.setVideoSize(optimalSize.width, optimalSize.height);
            recorder.setVideoEncoder(profile.videoCodec);
            recorder.setAudioEncoder(profile.audioCodec);
        } else {
            recorder.setProfile(profile);
        }
    } catch (Exception e) {
        e.printStackTrace();
        ret = false;
    }

    return ret;
}

We pass a use_profile integer to choose between some hardware-dependent code paths, but for almost every device use_profile == 2 (manual configuration from the profile's values) is the right one for you.
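Two helpers are referenced above but not shown. setCameraDisplayOrientation is the standard snippet from the Camera.setDisplayOrientation SDK docs; getOptimalPreviewSize could look like this (a sketch along the lines of the Android API demos):

    // Pick the supported preview size whose aspect ratio and height are
    // closest to the requested target (loosely based on the Android API demos).
    private Camera.Size getOptimalPreviewSize(List<Camera.Size> sizes, int w, int h) {
        final double ASPECT_TOLERANCE = 0.1;
        double targetRatio = (double) w / h;
        Camera.Size optimalSize = null;
        double minDiff = Double.MAX_VALUE;

        // First pass: match the aspect ratio, preferring the closest height.
        for (Camera.Size size : sizes) {
            double ratio = (double) size.width / size.height;
            if (Math.abs(ratio - targetRatio) > ASPECT_TOLERANCE) continue;
            if (Math.abs(size.height - h) < minDiff) {
                optimalSize = size;
                minDiff = Math.abs(size.height - h);
            }
        }

        // Second pass: no aspect-ratio match, so just take the closest height.
        if (optimalSize == null) {
            minDiff = Double.MAX_VALUE;
            for (Camera.Size size : sizes) {
                if (Math.abs(size.height - h) < minDiff) {
                    optimalSize = size;
                    minDiff = Math.abs(size.height - h);
                }
            }
        }
        return optimalSize;
    }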

The following function prepares the media recorder. Both functions can be called one after the other to completely set up the camera for video recording.

boolean initializeMediaRecorder() {
    try {
        recorder.setOutputFile(captureFile);
        recorder.setPreviewDisplay(holder.getSurface());
        // prepare() validates the whole configuration; most device-specific
        // failures show up here as exceptions.
        recorder.prepare();
        return true;
    } catch (Exception e) {
        e.printStackTrace();
        return false;
    }
}
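Putting it together, a recording session driven by these two functions might look like this (a sketch of my own; captureFile and the SurfaceHolder are assumed to be set up elsewhere):

    // Usage sketch: configure, record, then stop and take the camera back
    // so the preview can keep running in the application.
    void recordClip() {
        if (!setupVideoRecording(2) || !initializeMediaRecorder()) {
            return; // configuration failed on this device
        }
        recorder.start();

        // ... record until the user stops ...

        try {
            recorder.stop();
        } catch (RuntimeException e) {
            e.printStackTrace(); // stop() throws if nothing was actually recorded
        }
        recorder.release();
        recorder = null;
        try {
            camera.lock(); // re-take the camera we unlocked for MediaRecorder
        } catch (Exception e) {
            e.printStackTrace();
        }
    }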

This code is quite simple, but I hope it can be the foundation of something more elaborate for someone. Please feel free to leave a comment if you have a tip you would like to share with everyone.

Understanding Modern GPUs (III): Command And Setup Logic

Up to this point we have reviewed the common APIs, the host side of the communication between the CPU and the GPU, and how they interact and synchronize. In this new post we will explore the GPU component that acts as its interface to the outside world: the Command Processor (CP).

 

Barts architecture diagram (AMD)

THE COMMAND PROCESSOR 

We learnt how the CPU sends commands to the GPU, with state changes, instructions and data, but all of this is encoded somehow and must be interpreted. The CP keeps track of states within the GPU, updates host-mapped registers and signals interrupts to inform the CPU.

In fact, CPs are (as far as I know) microprocessors embedded into the GPU, capable of doing most of the tasks traditionally handled by software in the driver. A CP contains internal memory and can do complex logic, arithmetic operations and so on. It is capable of managing multiple command buffers, keeping track of what is sent down into the GPU, and updating fences once the command stream has reached them.

Its first task is decoding the commands in order to feed the other components. It is also responsible for reading and writing host memory and managing device memory. Managing states is a complex task, and in some cases, in order to maintain integrity, a partial pipeline flush is issued before proceeding. This is the worst case, as it can serialize everything. This is a world in itself and very vendor specific.

SETUP STEP

The Command Processor manages some dedicated fixed-function logic: a Vertex Assembler (VA), a Tessellator, a Geometry Assembler (GA), and a Rasterizer/Interpolator. These elements are responsible for feeding the processing cores with data; they talk to the Thread Scheduler (named GigaThread by NVIDIA) and issue computing tasks in blocks. The Fermi architecture seems to be a bit different, as these fixed-function logic blocks have started to become more and more of a bottleneck. NVIDIA has opted to duplicate some of this logic or rearrange it into the Stream Processors, which allows many concurrent operations.

Although I would like to focus these posts on High Performance Computing, a simple explanation of these setup blocks is interesting. For each vertex in a stream there is a set of associated attributes (like normals, binormals…) that need to be fetched (besides the vertex position) and assembled into a block before further processing. This is the primary task of the Vertex Assembler. As the attribute list grows, performance decreases, as more and more data needs to be fetched from memory before it can be processed.

The Tessellator is a highly programmable logic block which can perform patch tessellation and feed the pipeline (back) with new vertices and primitives. The Geometry Assembler fetches primitive information along with adjacency information and sends all of it down for further processing inside a Geometry Shader. The Rasterizer emits fragments to be processed by Fragment Shaders. Interpolation used to be a monolithic operation done inside the rasterizer, but nowadays most of the work is done directly inside the Fragment Shader: with the fragment's barycentric coordinates you can easily interpolate each defined attribute directly inside the shader (via patching the user shader).
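As a concrete illustration (my own sketch, not vendor code), interpolating a scalar attribute from barycentric coordinates is a one-liner:

    // Interpolate a per-vertex attribute at a fragment given its barycentric
    // coordinates (l0, l1, l2), where l0 + l1 + l2 == 1. This is the plain
    // (non-perspective-corrected) form a patched shader would evaluate.
    static float interpolate(float a0, float a1, float a2,
                             float l0, float l1, float l2) {
        return l0 * a0 + l1 * a1 + l2 * a2;
    }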

All these setup blocks feed the Stream Processors which are the core of the GPUs. We will review how they work in a few days.

Understanding Modern GPUs (II): Drivers and Command Ring

This is the second post on Understanding Modern GPUs where we will review the driver, the data flow to the GPU and what modules are involved. In my previous post we talked about the software front end of the GPU, this post and the following ones are going to be more hardware related.

USER SPACE AND KERNEL SPACE COMMAND FLOW

Pick any API of your choice, be it OpenGL, DirectX, OpenCL, or any mix of these and other APIs (yes, you can mix OpenGL and OpenCL): they all implement much of their functionality in user space. For example, in the Windows Vista/7 Windows Display Driver Model (http://msdn.microsoft.com/en-us/library/ff570593.aspx) you can find the following very self-explanatory diagram:

User space and kernel space display drivers model

Each application using a graphics API loads into its private process memory a set of driver/API functionality which is not shared with other processes. There you can find the command buffer, where all your API calls are transformed and stored sequentially before being transferred to the side of the driver residing in kernel mode. The per-process command buffer is vendor dependent and may contain the actual commands given to the GPU. This is also the point where shaders are compiled at runtime (in user space), and the driver may inject custom code (thereby patching the shader) to implement specific functionality that doesn't translate directly to hardware. This part is very specific to the GPU being used.

Since the GPU is a resource shared among different processes, there must be a mechanism to ensure not only that the commands are executed in order, but also that the GPU can be used by all those processes without data corruption. This is done by the device driver executing in kernel mode. There you can find the scheduler and the final Command Ring Buffer that is used to talk to the GPU. At the other side of the Command Ring Buffer is the Command Processor (CP), which reads from the stream, decodes the commands and feeds the Thread/Stream Scheduler (we will talk about this in another post).

Simplified Ring Buffer (Command Buffer)

The kernel driver scheduler could read from each individual (per-process) command buffer and copy its contents into the DMA command buffer itself, but that would be a waste of resources. Current GPUs contain a DMA controller and an MMU. The first lets the GPU talk directly to host RAM to fetch and write data without CPU intervention; the MMU virtualizes GPU/host memory and offers some memory protection. The Fermi architecture, for example, contains two DMA engines, giving simultaneous two-way transfers. Another possibility is transferring data between GPUs without CPU intervention. It is important to notice that those transfers are executed in parallel with the main command buffer, thus adding another level of parallelism to exploit.

Instead of transferring the data directly, we send a command to the GPU to fetch it from a memory region using its DMA engines; this way we can create different command buffers, one for each user-space driver, and let the GPU fetch them.

COMMAND/RING BUFFER

Setting up a Command Buffer in DirectX: http://msdn.microsoft.com/en-us/library/ff569747(v=VS.85).aspx

Basically, through the command buffer you set some states in the GPU, set it to fetch data, and issue execution orders. In the old days, user APIs had a big drawback: you had to specify your primitives through direct commands to the API, for example glBegin/glEnd. Those harmful calls have now been removed from OpenGL ES, for example, as they are performance killers on modern graphics cards. You can think of the CPU and the GPU as two threads that communicate through the Command Ring Buffer. It is a ring (FIFO) that is filled by the CPU and read by the GPU until it is drained. If the ring is empty (the write and read pointers are equal), the GPU stalls and waits until it has something to do; in this case you are probably CPU bound. If the CPU fills the entire buffer and has to wait for free space, you are GPU bound.

Ring Buffer and its Control Structure. Source (AMD): http://developer.amd.com/gpu_assets/R5xx_Acceleration_v1.2.pdf

The above figure shows the Host and the Graphics Controller (Command Processor) connected through the Ring Buffer (RB). The RB is initialized with a fixed buffer size, and both the Write and Read Pointers are set to zero (empty buffer). The driver adds packets into the RB and updates the Write Pointer register inside the device. When the device reads packets, it updates the Read Pointer. Updating both pointers incurs some overhead that can be mitigated by only updating these registers when a block of data has been consumed (by grouping packets into blocks) instead of doing so for each packet. This also needs more logic on both sides to avoid writing when the RB is full (more info here, although a bit outdated: http://developer.amd.com/gpu_assets/R5xx_Acceleration_v1.2.pdf).

This command stream adds some other synchronization issues that must be taken care of. Imagine the following: you create a huge data array which is going to be processed, but once the GPU has finished fetching it from main memory we would like to update it as soon as possible with new data. How does the CPU know that certain commands have been processed, so that we can update the array? Remember that the transfer is implemented by pointing the GPU at the memory to fetch from, and meanwhile both the GPU and the CPU can keep working in parallel with this fetch. The solution is in fact very simple. There are command types embedded into the command stream called fences (I found this patent by VIA about the matter 😉 http://www.patentgenius.com/patent/7755632.html). Those fences are read by the GPU, which updates some register so that the CPU knows the stream has been processed up to that point.
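The mechanics are easier to see in code. Below is a toy model of the ring buffer and a fence in plain Java; it is entirely illustrative (real drivers do this with memory-mapped registers, not objects), and all names are mine:

    import java.util.concurrent.atomic.AtomicInteger;

    // Toy model of the ring buffer described above: the CPU advances the write
    // pointer, the GPU advances the read pointer, and a fence is just a packet
    // that makes the "GPU" publish a value the CPU can poll.
    class ToyRingBuffer {
        private final long[] packets = new long[256];     // fixed-size ring
        private int write = 0;                            // updated by the CPU
        private int read = 0;                             // updated by the GPU
        private final AtomicInteger lastFence = new AtomicInteger(0);

        boolean isEmpty() { return read == write; }       // GPU would stall here
        boolean isFull()  { return (write + 1) % packets.length == read; }

        // CPU side: append a packet, waiting while the ring is full.
        void submit(long packet) {
            while (isFull()) { /* GPU bound: waiting for free space */ }
            packets[write] = packet;
            write = (write + 1) % packets.length;         // "update write register"
        }

        // GPU side: consume one packet; fence packets publish their payload.
        void consumeOne() {
            if (isEmpty()) return;                        // CPU bound: nothing to do
            long packet = packets[read];
            read = (read + 1) % packets.length;           // "update read register"
            if ((packet >>> 56) == 0xFE) {                // pretend 0xFE marks a fence
                lastFence.set((int) (packet & 0xFFFFFFFFL));
            }
        }

        // CPU side: has the stream been processed past fence number n?
        boolean fenceReached(int n) { return lastFence.get() >= n; }
    }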

What happens to your OpenCL kernel or vertex shader up to this point? The kernel code was compiled to an intermediate language by the user-space driver (PTX for CUDA devices or AMD IL for AMD devices, for example). The code is then compiled for the specific hardware by the driver and passed to the GPU. Since some GPUs may have missing functionality or need different steps to compute some function, the code needs to be targeted at the actual hardware in the computer. For example, double-precision floating-point operations may need additional passes to obtain the desired accuracy on hardware lacking dedicated double-precision units. Some GPU architectures sacrifice IEEE compliance, while others lack double precision altogether.

Some of the concepts we have been talking about can be seen in the OpenCL API (remember I told you there is some kind of direct mapping between this API and how the hardware actually works, from a logical point of view). You create a command buffer using clCreateCommandQueue, you enqueue a read from device memory using clEnqueueReadBuffer, you run a computing kernel using clEnqueueNDRangeKernel, and so on.
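As a sketch of how those exact calls fit together, here is a minimal host program written against the JOCL bindings (my choice, to stay in Java like the rest of the blog's code; JOCL mirrors the C API names one-to-one, so clCreateCommandQueue and friends appear literally):

    import org.jocl.*;
    import static org.jocl.CL.*;

    // Minimal OpenCL host-side sketch via JOCL. Error handling is reduced to
    // JOCL's exceptions; a real program would check sizes, devices, etc.
    public class CommandQueueSketch {
        public static void main(String[] args) {
            CL.setExceptionsEnabled(true);

            // Pick the first platform and the first GPU device on it.
            cl_platform_id[] platforms = new cl_platform_id[1];
            clGetPlatformIDs(1, platforms, null);
            cl_device_id[] devices = new cl_device_id[1];
            clGetDeviceIDs(platforms[0], CL_DEVICE_TYPE_GPU, 1, devices, null);

            cl_context context = clCreateContext(null, 1, devices, null, null, null);

            // The command buffer we have been talking about, seen from the API.
            cl_command_queue queue = clCreateCommandQueue(context, devices[0], 0, null);

            // A trivial kernel: one work-item per element, each squares its own.
            String src =
                "__kernel void square(__global float* data) {" +
                "    int i = get_global_id(0);" +
                "    data[i] = data[i] * data[i];" +
                "}";
            cl_program program = clCreateProgramWithSource(context, 1, new String[]{src}, null, null);
            clBuildProgram(program, 0, null, null, null, null);
            cl_kernel kernel = clCreateKernel(program, "square", null);

            int n = 1024;
            float[] host = new float[n];
            for (int i = 0; i < n; i++) host[i] = i;

            cl_mem buf = clCreateBuffer(context, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                    Sizeof.cl_float * n, Pointer.to(host), null);
            clSetKernelArg(kernel, 0, Sizeof.cl_mem, Pointer.to(buf));

            // Enqueue the kernel over a 1-D domain of n work-items...
            clEnqueueNDRangeKernel(queue, kernel, 1, null, new long[]{n}, null, 0, null, null);
            // ...then a blocking read back into host memory.
            clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, Sizeof.cl_float * n,
                    Pointer.to(host), 0, null, null);

            clReleaseMemObject(buf);
            clReleaseKernel(kernel);
            clReleaseProgram(program);
            clReleaseCommandQueue(queue);
            clReleaseContext(context);
        }
    }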

In the next episode we will talk about the Command Processor and some setup logic. Stay tuned!

Understanding Modern GPUs (I): Introduction

PREFACE

From time to time I'm asked if I could explain “how graphics work”, which is obviously a very broad question. Instead of giving the best-known answer, that is, how to draw primitives using vertices, matrices, textures and so on, I would like to use my blog to write down my current (limited) knowledge about the low-level side of computer graphics: how the API (OpenGL, DirectX, etc.) talks to the hardware, and how data is moved to and from these computing monsters (GPUs). I used the word “computing” on purpose: today's GPUs are designed to handle massive amounts of vector processing, which makes them suitable for High Performance Computing (HPC) besides drawing your current AAA title.

nVidia Fermi GF100 Graphics Processor

By the way, I have found another blog explaining similar stuff here: http://fgiesen.wordpress.com/2011/07/01/a-trip-through-the-graphics-pipeline-2011-part-1/

Some of the explanations given are speculative thoughts or things I have read in the past, which may not be totally correct or may be completely wrong. Vendors don't want to expose their architectures, but a lot can be learnt by reading their APIs, documents and third-party investigations (http://www.icare3d.org/GPU/CN08). OpenCL is a good start, as many concepts in the API seem to map directly onto how the hardware implements those operations. You may want to check Intel, AMD and NVIDIA documents on their sites. I recommend reading the following papers to get more in-depth information about GPUs:

http://www.nvidia.com/content/PDF/fermi_white_papers/NVIDIAFermiComputeArchitectureWhitepaper.pdf

http://developer.amd.com/SDKS/AMDAPPSDK/documentation/Pages/default.aspx

3D GRAPHICS PIPELINE INTRODUCTION

Before anything else, I want to give a quick and dirty introduction to a common 3D graphics pipeline, followed later by a short introduction to OpenCL/HPC. If you need more information, the concepts explained in this section may help you direct your questions to Google more easily.

Before the age of today's highly programmable GPUs there was a set of APIs with fixed pipelines. Those pipelines had stages that are still present in some form in current 3D engines, but it's up to the user to implement/port such functionality or ignore it altogether. A common (very simplified) set of stages of a modern graphics pipeline can be found in the following figure:

Simplified Graphics Pipeline

You start with a stream of vertices and a set of shaders: small programs that run inside the GPU and transform those vertices (for example by applying some space transformations and a projection to obtain their positions in screen space). This is quite oversimplified; modern APIs add more stages after the Programmable Vertex Processor which let you emit new vertices or primitives, but all of that falls outside the scope of this writing. From the vertex stream the pipeline builds primitives (points, lines and triangles), which move into the rasterization stage, where the user can define a Fragment Shader to run for each emitted fragment. After a z-test and some final decisions, those fragments may end up written into the image buffer. This logic is highly programmable through the use of different shaders.

The full Shader Model 5.0 pipeline found in DirectX
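To make the vertex stage concrete, here is a sketch of my own (not from any API) of what the vertex transform conceptually does: multiply the position by a combined model-view-projection matrix and perform the perspective divide.

    // Conceptual vertex transform: multiply a position by a 4x4 column-major
    // model-view-projection matrix, then divide by w to reach normalized
    // device coordinates. A vertex shader does this (and much more) per vertex.
    static float[] transformVertex(float[] mvp, float x, float y, float z) {
        float[] clip = new float[4];
        for (int row = 0; row < 4; row++) {
            clip[row] = mvp[row]     * x
                      + mvp[4 + row] * y
                      + mvp[8 + row] * z
                      + mvp[12 + row];      // the input w component is 1
        }
        float invW = 1.0f / clip[3];
        return new float[] { clip[0] * invW, clip[1] * invW, clip[2] * invW };
    }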

The point behind all this is how the GPU architecture exploits the fact that the input and output data of each packet can be isolated from the other packets inside the data stream; that is, GPUs exploit data-level parallelism. For us, those data packets are vertices and pixels. If you enforce this in your design, you can operate on hundreds of vertices and pixels in parallel.
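The same property can be shown on the CPU with a toy example of mine (not GPU code): since every output pixel depends only on its own input pixel, the work can be split across any number of workers.

    import java.util.stream.IntStream;

    // Toy illustration of data-level parallelism: each output pixel depends
    // only on its own input pixel, so the map parallelizes freely. This is
    // exactly the property GPU architectures are built around.
    static int[] brighten(int[] pixels, int amount) {
        return IntStream.range(0, pixels.length)
                .parallel()
                .map(i -> Math.min(255, pixels[i] + amount))
                .toArray();
    }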

OPENCL INTRODUCTION

This paradigm shift, from sequential processing to parallel SIMD processing, can be exploited to do any kind of repetitive computation besides transforming vertices and calculating pixel values. OpenCL is an API for accessing the GPU as a computational device which runs small programs (kernels) that operate over some data. The idea behind OpenCL is to run a kernel over each point in a problem domain, assuming that each point is highly isolated from the other points and thus can be processed in parallel (like the vertices and pixels in our graphics pipeline introduction). The API defines an execution model and a memory model that map to the constraints imposed by GPUs. A running program operating over a point in a domain is called a work-item (like a thread). You must define an N-dimensional domain space of work-items and group sets of work-items into work-groups. The work-items in a work-group can share a common memory region private to that work-group, as the kernel sketch below shows.

OpenCL memory model
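To tie the execution-model terms together, here is a small kernel of my own, embedded as a Java string (as it would be passed to clCreateProgramWithSource): get_global_id() locates the work-item in the whole domain, get_local_id() locates it within its work-group, and the __local buffer is the per-work-group memory region from the figure above.

    // OpenCL C kernel embedded as a Java string: each work-group sums its
    // slice of the input into a shared __local scratch buffer, then writes
    // one partial sum per group.
    static final String REDUCE_KERNEL =
        "__kernel void partialSum(__global const float* in,\n" +
        "                         __global float* groupSums,\n" +
        "                         __local float* scratch) {\n" +
        "    int gid = get_global_id(0);\n" +
        "    int lid = get_local_id(0);\n" +
        "    scratch[lid] = in[gid];\n" +
        "    barrier(CLK_LOCAL_MEM_FENCE);\n" +
        "    // Tree reduction within the work-group using the shared scratch.\n" +
        "    for (int s = get_local_size(0) / 2; s > 0; s >>= 1) {\n" +
        "        if (lid < s) scratch[lid] += scratch[lid + s];\n" +
        "        barrier(CLK_LOCAL_MEM_FENCE);\n" +
        "    }\n" +
        "    if (lid == 0) groupSums[get_group_id(0)] = scratch[0];\n" +
        "}";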

To maximize throughput you want to load all the stream processors inside the GPU and fully utilize the other areas of the die, like the texture units/memory fetchers, because they work in parallel with the ALUs. Furthermore, round trips to host memory are slow and must be minimized. We will see this in more detail in future posts.

The reader can find a nice introduction to OpenCL here: http://www.amd.com/us/products/technologies/stream-technology/opencl/pages/opencl-intro.aspx

In my next post (which is almost ready) we will talk about more low-level stuff: how the APIs talk to the driver and the hardware, and how a GPU is orchestrated to run in parallel with the CPU.

NVIDIA Parallel Nsight

Wow, I have just noticed the NVIDIA Parallel Nsight launch (http://developer.nvidia.com/object/nsight.html). Impressive stuff, way ahead of the old days when I had to guess the performance hit of adding a new assembly instruction to a pixel shader program. The effort graphics chip manufacturers are making nowadays to bring GPU computing to the mass market is quite interesting.

I guess it's worth giving it a try to analyze and maximize our throughput in multichannel broadcasting using Matrox