http://www.mindfiresolutions.com/Merging-Two-Videos--iPhoneiPad-Tip--1396.php



Suppose you have two video files, video1.mp4 and video2.mp4, and you want to merge them into a single video programmatically. This tip might help you. Follow the instructions below.
 
First, we need to add the following frameworks to our project:
a) AVFoundation framework
b) AssetsLibrary framework
c) MediaPlayer framework
d) CoreMedia framework
 
Then we need to import the following headers in the view controller class:
 

#import <MediaPlayer/MediaPlayer.h>
#import <CoreMedia/CoreMedia.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreFoundation/CoreFoundation.h>
#import <AVFoundation/AVBase.h>

@implementation MyVideoViewController

- (void)mergeTwoVideo
{
    AVMutableComposition *composition = [AVMutableComposition composition];

    NSString *path1 = [[NSBundle mainBundle] pathForResource:@"video1" ofType:@"mp4"];
    NSString *path2 = [[NSBundle mainBundle] pathForResource:@"video2" ofType:@"mp4"];

    AVURLAsset *video1 = [[AVURLAsset alloc] initWithURL:[NSURL fileURLWithPath:path1] options:nil];
    AVURLAsset *video2 = [[AVURLAsset alloc] initWithURL:[NSURL fileURLWithPath:path2] options:nil];

    AVMutableCompositionTrack *composedTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                                                                         preferredTrackID:kCMPersistentTrackID_Invalid];

    // Insert the first video at the beginning of the composed track
    [composedTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, video1.duration)
                           ofTrack:[[video1 tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0]
                            atTime:kCMTimeZero
                             error:nil];

    // Insert the second video immediately after the first one
    [composedTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, video2.duration)
                           ofTrack:[[video2 tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0]
                            atTime:video1.duration
                             error:nil];

    NSString *documentsDirectory = [self applicationDocumentsDirectory];
    NSString *myDocumentPath = [documentsDirectory stringByAppendingPathComponent:@"merge_video.mp4"];
    NSURL *url = [[NSURL alloc] initFileURLWithPath:myDocumentPath];

    if ([[NSFileManager defaultManager] fileExistsAtPath:myDocumentPath])
    {
        [[NSFileManager defaultManager] removeItemAtPath:myDocumentPath error:nil];
    }

    AVAssetExportSession *exporter = [[[AVAssetExportSession alloc] initWithAsset:composition
                                                                       presetName:AVAssetExportPresetHighestQuality] autorelease];
    exporter.outputURL = url;
    exporter.outputFileType = AVFileTypeQuickTimeMovie; // @"com.apple.quicktime-movie"
    exporter.shouldOptimizeForNetworkUse = YES;

    [exporter exportAsynchronouslyWithCompletionHandler:^{
        switch ([exporter status]) {
            case AVAssetExportSessionStatusFailed:
                break;
            case AVAssetExportSessionStatusCancelled:
                break;
            case AVAssetExportSessionStatusCompleted:
                break;
            default:
                break;
        }
    }];
}
// The export is complete only when control reaches the completion handler block.

- (NSString *)applicationDocumentsDirectory
{
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;
    return basePath;
}
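
The original tip leaves the completion cases empty. As one illustrative follow-up (an assumption on my part, not part of the original tip), the merged file could be played back once the export succeeds, for example from the completed case of the switch above:

// Hypothetical addition: play the merged movie after a successful export.
// This would go in the AVAssetExportSessionStatusCompleted branch above.
dispatch_async(dispatch_get_main_queue(), ^{
    MPMoviePlayerViewController *player =
        [[[MPMoviePlayerViewController alloc] initWithContentURL:exporter.outputURL] autorelease];
    [self presentMoviePlayerViewControllerAnimated:player];
});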


      
Posted by k_ben


http://www.sunsetlakesoftware.com/2012/02/12/introducing-gpuimage-framework



I'd like to introduce a new open source framework that I've written, called GPUImage. The GPUImage framework is a BSD-licensed iOS library (for which the source code can be found on Github) that lets you apply GPU-accelerated filters and other effects to images, live camera video, and movies. In comparison to Core Image (part of iOS 5.0), GPUImage allows you to write your own custom filters, supports deployment to iOS 4.0, and has a slightly simpler interface. However, it currently lacks some of the more advanced features of Core Image, such as facial detection.

UPDATE (4/15/2012): I've disabled comments, because they were getting out of hand. If you wish to report an issue with the project, or request a feature addition, go to its GitHub page. If you want to ask a question about it, contact me at the email address in the footer of this page, or post in the new forum I have set up for the project.

About a year and a half ago, I gave a talk at SecondConf where I demonstrated the use of OpenGL ES 2.0 shaders to process live video. The subsequent writeup and sample code that came out of that proved to be fairly popular, and I've heard from a number of people who have incorporated that video processing code into their iOS applications. However, the amount of code around the OpenGL ES 2.0 portions of that example made it difficult to customize and reuse. Since much of this code was just scaffolding for interacting with OpenGL ES, it could stand to be encapsulated in an easier to use interface.

Example of four types of video filters

Since then, Apple has ported some of their Core Image framework from the Mac to iOS. Core Image provides an interface for doing filtering of images and video on the GPU. Unfortunately, the current implementation on iOS has some limitations. The largest of these is the fact that you can't write your own custom filters based on their kernel language, like you can on the Mac. This severely restricts what you can do with the framework. Other downsides include a somewhat more complex interface and a lack of iOS 4.0 support. Others have complained about some performance overhead, but I've not benchmarked this myself.

Because of the lack of custom filters in Core Image, I decided to convert my video filtering example into a simple Objective-C image and video processing framework. The key feature of this framework is its support for completely customizable filters that you write using the OpenGL Shading Language. It also has a straightforward interface (which you can see some examples of below) and support for iOS 4.0 as a target.

Note that this framework is built around OpenGL ES 2.0, so it will only work on devices that support this API. This means that this framework will not work on the original iPhone, iPhone 3G, and 1st and 2nd generation iPod touches. All other iOS devices are supported.

The following is my first pass of documentation for this framework, an up-to-date version of which can be found within the framework repository on GitHub:

General architecture

GPUImage uses OpenGL ES 2.0 shaders to perform image and video manipulation much faster than could be done in CPU-bound routines. It hides the complexity of interacting with the OpenGL ES API in a simplified Objective-C interface. This interface lets you define input sources for images and video, attach filters in a chain, and send the resulting processed image or video to the screen, to a UIImage, or to a movie on disk.

Images or frames of video are uploaded from source objects, which are subclasses of GPUImageOutput. These include GPUImageVideoCamera (for live video from an iOS camera) and GPUImagePicture (for still images). Source objects upload still image frames to OpenGL ES as textures, then hand those textures off to the next objects in the processing chain.

Filters and other subsequent elements in the chain conform to the GPUImageInput protocol, which lets them take in the supplied or processed texture from the previous link in the chain and do something with it. Objects one step further down the chain are considered targets, and processing can be branched by adding multiple targets to a single output or filter.

For example, an application that takes in live video from the camera, converts that video to a sepia tone, then displays the video onscreen would set up a chain looking something like the following:

GPUImageVideoCamera -> GPUImageSepiaFilter -> GPUImageView
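
In code, that chain might be wired up roughly like this (a sketch only; the session preset and view frame are arbitrary choices for illustration):

GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                                                       cameraPosition:AVCaptureDevicePositionBack];
GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];
GPUImageView *filteredView = [[GPUImageView alloc] initWithFrame:self.view.bounds]; // add this view to your hierarchy

// Camera frames flow to the sepia filter, and the filtered frames flow to the on-screen view
[videoCamera addTarget:sepiaFilter];
[sepiaFilter addTarget:filteredView];
[videoCamera startCameraCapture];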

A small number of filters are built in:

  • GPUImageBrightnessFilter
  • GPUImageContrastFilter
  • GPUImageSaturationFilter
  • GPUImageGammaFilter
  • GPUImageColorMatrixFilter
  • GPUImageColorInvertFilter
  • GPUImageSepiaFilter: Simple sepia tone filter
  • GPUImageDissolveBlendFilter
  • GPUImageMultiplyBlendFilter
  • GPUImageOverlayBlendFilter
  • GPUImageDarkenBlendFilter
  • GPUImageLightenBlendFilter
  • GPUImageRotationFilter: This lets you rotate an image left or right by 90 degrees, or flip it horizontally or vertically
  • GPUImagePixellateFilter: Applies a pixellation effect on an image or video, with the fractionalWidthOfAPixel property controlling how large the pixels are, as a fraction of the width and height of the image
  • GPUImageSobelEdgeDetectionFilter: Performs edge detection, based on a Sobel 3x3 convolution
  • GPUImageSketchFilter: Converts video to a sketch, and is the inverse of the edge detection filter
  • GPUImageToonFilter
  • GPUImageSwirlFilter
  • GPUImageVignetteFilter
  • GPUImageKuwaharaFilter: Converts the video to an oil painting, but is very slow right now

but you can easily write your own custom filters using the C-like OpenGL Shading Language, as described below.

Adding the framework to your iOS project

Once you have the latest source code for the framework, it's fairly straightforward to add it to your application. Start by dragging the GPUImage.xcodeproj file into your application's Xcode project to embed the framework in your project. Next, go to your application's target and add GPUImage as a Target Dependency. Finally, you'll want to drag the libGPUImage.a library from the GPUImage framework's Products folder to the Link Binary With Libraries build phase in your application's target.

GPUImage needs a few other frameworks to be linked into your application, so you'll need to add the following as linked libraries in your application target:

  • CoreMedia
  • CoreVideo
  • OpenGLES
  • AVFoundation
  • QuartzCore

You'll also need to find the framework headers, so within your project's build settings set the Header Search Paths to the relative path from your application to the framework/ subdirectory within the GPUImage source directory. Make this header search path recursive.

To use the GPUImage classes within your application, simply include the core framework header using the following:

#import "GPUImage.h"

As a note: if you run into the error "Unknown class GPUImageView in Interface Builder" or the like when trying to build an interface with Interface Builder, you may need to add -ObjC to your Other Linker Flags in your project's build settings.

Performing common tasks

Filtering live video

To filter live video from an iOS device's camera, you can use code like the following:

GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
GPUImageFilter *customFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromFile:@"CustomShader"];
GPUImageView *filteredVideoView = [[GPUImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, viewWidth, viewHeight)];
 
// Add the view somewhere so it's visible
 
[videoCamera addTarget:customFilter];
[customFilter addTarget:filteredVideoView];
 
[videoCamera startCameraCapture];

This sets up a video source coming from the iOS device's back-facing camera, using a preset that tries to capture at 640x480. A custom filter, using code from the file CustomShader.fsh, is then set as the target for the video frames from the camera. These filtered video frames are finally displayed onscreen with the help of a UIView subclass that can present the filtered OpenGL ES texture that results from this pipeline.

Processing a still image

There are a couple of ways to process a still image and create a result. The first way you can do this is by creating a still image source object and manually creating a filter chain:

UIImage *inputImage = [UIImage imageNamed:@"Lambeau.jpg"];
 
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageSepiaFilter *stillImageFilter = [[GPUImageSepiaFilter alloc] init];
 
[stillImageSource addTarget:stillImageFilter];
[stillImageSource processImage];
 
UIImage *currentFilteredVideoFrame = [stillImageFilter imageFromCurrentlyProcessedOutput];

For single filters that you wish to apply to an image, you can simply do the following:

GPUImageSepiaFilter *stillImageFilter2 = [[GPUImageSepiaFilter alloc] init];
UIImage *quickFilteredImage = [stillImageFilter2 imageByFilteringImage:inputImage];

Writing a custom filter

One significant advantage of this framework over Core Image on iOS (as of iOS 5.0) is the ability to write your own custom image and video processing filters. These filters are supplied as OpenGL ES 2.0 fragment shaders, written in the C-like OpenGL Shading Language.

A custom filter is initialized with code like

GPUImageFilter *customFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromFile:@"CustomShader"];

where the extension used for the fragment shader is .fsh. Additionally, you can use the -initWithFragmentShaderFromString: initializer to provide the fragment shader as a string, if you would not like to ship your fragment shaders in your application bundle.

Fragment shaders perform their calculations for each pixel to be rendered at that filter stage. They do this using the OpenGL Shading Language (GLSL), a C-like language with additions specific to 2-D and 3-D graphics. An example of a fragment shader is the following sepia-tone filter:

varying highp vec2 textureCoordinate;
 
uniform sampler2D inputImageTexture;
 
void main()
{
    lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
    lowp vec4 outputColor;
    outputColor.r = (textureColor.r * 0.393) + (textureColor.g * 0.769) + (textureColor.b * 0.189);
    outputColor.g = (textureColor.r * 0.349) + (textureColor.g * 0.686) + (textureColor.b * 0.168);    
    outputColor.b = (textureColor.r * 0.272) + (textureColor.g * 0.534) + (textureColor.b * 0.131);
 
	gl_FragColor = outputColor;
}

For an image filter to be usable within the GPUImage framework, the first two lines, which take in the textureCoordinate varying (for the current coordinate within the texture, normalized to 1.0) and the inputImageTexture uniform (for the actual input image frame texture), are required.

The remainder of the shader grabs the color of the pixel at this location in the passed-in texture, manipulates it in such a way as to produce a sepia tone, and writes that pixel color out to be used in the next stage of the processing pipeline.

One thing to note when adding fragment shaders to your Xcode project is that Xcode thinks they are source code files. To work around this, you'll need to manually move your shader from the Compile Sources build phase to the Copy Bundle Resources one in order to get the shader to be included in your application bundle.

Filtering and re-encoding a movie

Movies can be loaded into the framework via the GPUImageMovie class, filtered, and then written out using a GPUImageMovieWriter. GPUImageMovieWriter is also fast enough to record video in realtime from an iPhone 4's camera at 640x480, so a direct filtered video source can be fed into it.

The following is an example of how you would load a sample movie, pass it through a pixellation and rotation filter, then record the result to disk as a 480 x 640 h.264 movie:

movieFile = [[GPUImageMovie alloc] initWithURL:sampleURL];
pixellateFilter = [[GPUImagePixellateFilter alloc] init];
GPUImageRotationFilter *rotationFilter = [[GPUImageRotationFilter alloc] initWithRotation:kGPUImageRotateRight];
 
[movieFile addTarget:rotationFilter];
[rotationFilter addTarget:pixellateFilter];
 
NSString *pathToMovie = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/Movie.m4v"];
unlink([pathToMovie UTF8String]);
NSURL *movieURL = [NSURL fileURLWithPath:pathToMovie];
 
movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(480.0, 640.0)];
[pixellateFilter addTarget:movieWriter];
 
[movieWriter startRecording];
[movieFile startProcessing];

Once recording is finished, you need to remove the movie recorder from the filter chain and close off the recording using code like the following:

[pixellateFilter removeTarget:movieWriter];
[movieWriter finishRecording];

A movie won't be usable until it has been finished off, so if this is interrupted before this point, the recording will be lost.

Sample applications

Several sample applications are bundled with the framework source. Most are compatible with both iPhone and iPad-class devices. They attempt to show off various aspects of the framework and should be used as the best examples of the API while the framework is under development. These include:

ColorObjectTracking

A version of my ColorTracking example ported across to use GPUImage, this application uses color in a scene to track objects from a live camera feed. The four views you can switch between include the raw camera feed, the camera feed with pixels matching the color threshold in white, the processed video where positions are encoded as colors within the pixels passing the threshold test, and finally the live video feed with a dot that tracks the selected color. Tapping the screen changes the color to track to match the color of the pixels under your finger. Tapping and dragging on the screen makes the color threshold more or less forgiving. This is most obvious on the second, color thresholding view.

SimpleImageFilter

A bundled JPEG image is loaded into the application at launch, a filter is applied to it, and the result rendered to the screen. Additionally, this sample shows two ways of taking in an image, filtering it, and saving it to disk.

MultiViewFilterExample

From a single camera feed, four views are populated with realtime filters applied to that feed. One is just the straight camera video, one is a preprogrammed sepia tone, and two are custom filters based on shader programs.

FilterShowcase

This demonstrates every filter supplied with GPUImage.

BenchmarkSuite

This is used to test the performance of the overall framework by testing it against CPU-bound routines and Core Image. Benchmarks involving still images and video are run against all three, with results displayed in-application.

Things that need work

This is just a first release, and I'll keep working on this to add more functionality. I also welcome any and all help with enhancing this. Right off the bat, these are missing elements I can think of:

  • Images that exceed 2048 pixels wide or high currently can't be processed on devices older than the iPad 2 or iPhone 4S.
  • Currently, it's difficult to create a custom filter with additional attribute inputs and a modified vertex shader.
  • Many common filters aren't built into the framework yet.
  • Video capture and processing should be done on a background GCD serial queue.
  • I'm sure that there are many optimizations that can be made on the rendering pipeline.
  • The aspect ratio of the input video is not maintained, but stretched to fill the final image.
  • Errors in shader setup and other failures need to be explained better, and the framework needs to be more robust when encountering odd situations.

Hopefully, people will find this to be helpful in doing fast image and video processing within their iOS applications.

      
Posted by k_ben


http://daoudev.tistory.com/entry/모바일-기기에서의-동영상-재생#footnote_link_28_8



1. Supported File Formats

 

 

Officially supported file formats on Android

Codec | Extensions | Notes
H.263 | 3gp, mp4 |
H.264 AVC | 3gp, mp4, ts | Encoding is supported from Honeycomb; the ts container is also supported from Honeycomb
MPEG-4 SP | 3gp |
VP8 | webm, mkv | Supported from 2.3.3; streaming supported from ICS

Officially supported file formats on iPhone

Codec | Extensions | Notes
H.264 | m4v, mp4, mov, 3gp | 640 x 480, 30 fps, 1.5 Mbps / 320 x 240, 30 fps, 768 Kbps
MPEG-4 | m4v, mp4, mov, 3gp | 640 x 480, 30 fps, 2.5 Mbps

2. Playing Video on Mobile Devices

 

Video can be played either after download or via streaming.

Android-based mobile devices support RTSP (Real-Time Streaming Protocol) and HLS (HTTP Live Streaming); HLS, however, is only supported from Android 3.0 onward. The iPhone (iOS) supports HLS[1], and using any other streaming method is grounds for App Store rejection.

For reference, here is Apple's review criterion for video playback: streaming a file larger than 10 MB without using HLS is grounds for rejection.

 

- Bit rate requirements for encoding

Quality | Video | Audio
Low | 96 Kbps | 64 Kbps
Medium | 256 Kbps | 64 Kbps
High | 800 Kbps | 64 Kbps

 

 

To play an arbitrary video file on a mobile device, it first has to be converted into a format supported by that mobile platform (the open-source FFmpeg provides this).

 

 

3. Introduction to FFmpeg

FFmpeg is an open-source multimedia framework that covers nearly every multimedia-related task: encoding[2], muxing[3], transcoding, demuxing[4], decoding[5], streaming, playback, and more. It is cross-platform and distributed under the GNU Lesser General Public License (LGPL).

FFmpeg also ships ffplay, a simple video playback program.

FFmpeg libraries

  • libavcodec: audio/video encoders and decoders
  • libavformat: muxers/demuxers for audio/video container formats
  • libavutil: assorted utilities needed when developing with FFmpeg
  • libpostproc: video post-processing
  • libswscale: image scaling, color-space, and pixel-format conversion for video
  • libavfilter: modifies and inspects audio/video between decoder and encoder
  • libswresample: audio resampling

 

Typical conversion procedure

[Figure: FFmpeg video conversion process]

Source: http://helloworld.naver.com/helloworld/8794

1) Use libavformat to extract the video and audio codec information
2) Use libavcodec to decode the video/audio data
3) The same approach applies on PC and mobile; the extracted data can then be saved to a file, played, or edited (a minimal sketch follows below)
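
As a rough sketch of steps 1 and 2 using the FFmpeg C API of this period (error handling trimmed; the function and file names here are illustrative, not from the original post):

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

/* Sketch: open a media file, locate its video/audio streams, and open a video decoder. */
static int open_media(const char *filename, AVFormatContext **outFmtCtx,
                      int *videoStream, int *audioStream)
{
    AVFormatContext *pFormatCtx = NULL;
    unsigned int i;

    av_register_all();

    /* 1) libavformat: read container and codec information */
    if (avformat_open_input(&pFormatCtx, filename, NULL, NULL) != 0)
        return -1;
    if (avformat_find_stream_info(pFormatCtx, NULL) < 0) {
        avformat_close_input(&pFormatCtx);
        return -1;
    }

    *videoStream = *audioStream = -1;
    for (i = 0; i < pFormatCtx->nb_streams; i++) {
        if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO && *videoStream < 0)
            *videoStream = i;
        else if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO && *audioStream < 0)
            *audioStream = i;
    }
    if (*videoStream < 0) {
        avformat_close_input(&pFormatCtx);
        return -1;
    }

    /* 2) libavcodec: open a decoder for the video stream */
    AVCodecContext *pCodecCtx = pFormatCtx->streams[*videoStream]->codec;
    AVCodec *pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
    if (!pCodec || avcodec_open2(pCodecCtx, pCodec, NULL) < 0) {
        avformat_close_input(&pFormatCtx);
        return -1;
    }

    /* 3) Decoded frames can now be saved to a file, played, or edited. */
    *outFmtCtx = pFormatCtx;
    return 0;
}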

 

 

How to implement a video player

Source: http://helloworld.naver.com/helloworld/8794

1) Store the data obtained during conversion in queues (see the sketch after this list)
2) Continuously render the queued data through SDL[6] to play it
 - Video: the video refresher requests and handles frame updates, and the video renderer draws the frames on screen
 - Audio: the audio renderer plays the audio data
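
A rough sketch of the demuxing loop that feeds such queues is shown below; PacketQueue and queue_push() are hypothetical stand-ins for your own thread-safe packet queue, and the SDL rendering side is omitted:

#include <libavformat/avformat.h>

/* Hypothetical thread-safe packet queue and its push helper (not part of FFmpeg). */
typedef struct PacketQueue PacketQueue;
void queue_push(PacketQueue *q, AVPacket *pkt);

/* Sketch: demux packets and hand them to the per-stream queues the renderers consume. */
static void demux_loop(AVFormatContext *pFormatCtx, int videoStream, int audioStream,
                       PacketQueue *videoq, PacketQueue *audioq)
{
    AVPacket packet;

    while (av_read_frame(pFormatCtx, &packet) >= 0) {
        if (packet.stream_index == videoStream) {
            queue_push(videoq, &packet);   /* the video refresher decodes and displays these */
        } else if (packet.stream_index == audioStream) {
            queue_push(audioq, &packet);   /* the audio renderer decodes and plays these */
        } else {
            av_free_packet(&packet);       /* discard packets from streams we don't use */
        }
    }
}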

 

 

4. Implementation Scenarios

A. Converting on the server

A-1) Convert when the content provider registers a file

- When a file is uploaded, the server automatically converts it and stores the result
- FFmpeg's conversion features can be used to convert uploaded files for storage
- The approach commonly used by video streaming services such as YouTube, Naver, and Nate

A-2) Convert when streaming of the content provider's file is requested

- When a streaming request arrives, the server converts the file in real time while streaming it
- The approach Naver N Drive uses when delivering video to mobile devices
- Most server tools with RTSP/HLS support work this way

A-3) The content provider converts to an agreed format before uploading

- The content provider converts the file to the agreed format and uploads it to the server
- Various conversion tools[7] are available to individual users
- The approach commonly used for personal servers


 

B. Converting on the mobile device

B-1) Convert and save to a file the built-in player can play

- The received video data is converted to a file, saved, and then played
- FFmpeg's built-in features can convert and save files in a specific format
- Converting an 800 MB video takes about 12 minutes on an ordinary dual-core PC
- On mobile devices, which are far less powerful, this is expected to cause serious battery and performance problems

B-2) Convert and hand the data to the built-in player

- The converted data is passed through a module that can access the built-in player
- Plugins for the built-in mobile players are not officially supported
- Requires separate agreements with each manufacturer and a support plan for a wide range of devices

B-3) Play through a player that includes conversion capability

- The received video data is played through a player with an added conversion module
- The most common approach; there are various FFmpeg-based players
- NHN's N Drive built-in player (Android, iOS) and AVPlayer (iOS) have published their source code[8]

 

  1. HTTP Live Streaming: a protocol Apple introduced in 2009 with iOS 3.0. Streaming data is packaged in MPEG-2 TS and sent in time-based segments. Adobe added official support in Flash Media Server 4.0 and Microsoft in IIS Media Server 4.0; Android supports it from 3.0.
  2. Encoding: converting video data with a specific video codec.
  3. Muxing: packing the converted data into a digital container format file.
  4. Demuxing: the opposite of muxing; extracting the bitstream data from a video file.
  5. Decoding: extracting video data from data in a specific codec.
  6. SDL (Simple DirectMedia Layer): a cross-platform multimedia library that abstracts video, audio, user input, and so on, allowing programs to run on Linux, Microsoft Windows, Mac OS X, and other operating systems.
  7. Such programs are called encoders; they convert video of one format into a video file of another format. The most widely used one in Korea is Daum PotEncoder, which was built on FFmpeg and has no license restrictions for personal, corporate, public, or educational use.
  8. Android source: http://helloworld.naver.com/helloworld/8794 (the development process of the player built into N Drive and a sample project are published). iOS source: http://luuvish.org/206 (the source of AVPlayer, No. 2 in the Entertainment category and No. 1 among players, is published).
------------------------------------------------------------------------------

Codecs playable in each platform's default built-in browser

Codec | iPhone | iPad | Android
H.263[i] | X | O | O
H.264[ii] | O | O | O
MPEG-4 Part 2[iii] | O | O | O
VP8[iv] | X | X | O

  • iOS browser behavior: on iPad (iOS 4.2 and later) the built-in browser includes its own video controls, while on iPhone a separate video player is launched for playback.
  • Android browser behavior: depending on the HTML tag used, playback happens either inside the browser or by calling out to a player.

 

[i] H.263: developed from the H.261 codec. Because it delivers the same quality as H.261 at half the bandwidth, it has largely replaced H.261 for general use and is used with the Real-time Transport Protocol (RTP) for video streaming.

[ii] H.264 / MPEG-4 AVC: a digital video codec standard with very high compression, also known as MPEG-4 Part 10 or MPEG-4 AVC (Advanced Video Coding).

[iii] MPEG-4 Part 2 / SP: MPEG-4 Part 2 (also called MPEG-4 Visual or MPEG-4 ASP) is a digital video codec created by ISO/IEC's Moving Picture Experts Group (MPEG). DivX and Xvid are implementations of this codec.

[iv] VP8: a video codec from On2 Technologies, which Google acquired. To address patent concerns it was released as open source under a modified BSD-style license.

 

 

 

How to play video in a mobile browser

Playing inside the browser

<video id="player1" width="600" height="360" controls>
    <source src="./h264.mp4" />
</video>
 
 
// 터치시 플레이가 되도록 스크립트 구성
<script language="javascript">
    var player1 = document.getElementById('player1');<br>
    player1.addEventListener('click', function() {
    player1.play();
    }, false);
</scipt>



Playing by invoking the media player


 

Test results

1) iOS tests

Platform | Test | H.264 (play/invoke) | H.263 (play/invoke) | MPEG-4 Part 2 (play/invoke) | Notes
iPhone | YouTube | O | X | O | Plays after the Quick Player is launched
iPad |  | O | O | O | Plays inside Safari

2) Android tests

Browser | Test | H.264 (play / invoke) | H.263 (play / invoke) | MPEG-4 Part 2 (play / invoke) | Notes
Default | YouTube | O / O | O / O | O / O | Download can be chosen when the player is invoked
Chrome |  | O / O | O / O | O / O | -
Firefox |  | X | X | X | Only download is supported when the player is invoked
Dolphin |  | O / O | O / O | O / O | Download can be chosen when the player is invoked
Boat |  | X / O | O / O | O / O |
Maxthon |  | O / O | O / O | O / O |
Xscope |  | O / O | O / O | O / O |



      
Posted by k_ben


http://www.iphones.ru/forum/index.php?showtopic=77707

// Required imports for this snippet:
#import <AVFoundation/AVFoundation.h>
#import <AssetsLibrary/AssetsLibrary.h>

testImageArray = [[[NSArray alloc] initWithObjects:
                   [UIImage imageNamed:@"1.png"],
                   [UIImage imageNamed:@"2.png"],
                   [UIImage imageNamed:@"3.png"],
                   [UIImage imageNamed:@"4.png"],
                   [UIImage imageNamed:@"5.png"],
                   nil] autorelease];

[self writeImageAsMovie:testImageArray toPath:documentsDirectory size:CGSizeMake(460, 320) duration:1];


-(void)writeImageAsMovie:(NSArray *)array toPath:(NSString *)path size:(CGSize)size duration:(int)duration
{
    NSError *error = nil;
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:path]
                                                           fileType:AVFileTypeQuickTimeMovie
                                                              error:&error];
    NSParameterAssert(videoWriter);

    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:size.width], AVVideoWidthKey,
                                   [NSNumber numberWithInt:size.height], AVVideoHeightKey,
                                   nil];

    AVAssetWriterInput *writerInput = [[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                          outputSettings:videoSettings] retain];

    AVAssetWriterInputPixelBufferAdaptor *adaptor =
        [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                         sourcePixelBufferAttributes:nil];

    NSParameterAssert(writerInput);
    NSParameterAssert([videoWriter canAddInput:writerInput]);
    [videoWriter addInput:writerInput];

    // Start a session:
    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];

    // First frame at time zero
    CVPixelBufferRef buffer = [self pixelBufferFromCGImage:[[array objectAtIndex:0] CGImage] size:CGSizeMake(460, 320)];
    [adaptor appendPixelBuffer:buffer withPresentationTime:kCMTimeZero];

    int i = 1;
    while (writerInput.readyForMoreMediaData)
    {
        NSLog(@"inside loop %d", i);
        CMTime frameTime = CMTimeMake(1, 20);
        CMTime lastTime = CMTimeMake(i, 20);
        CMTime presentTime = CMTimeAdd(lastTime, frameTime);

        if (i >= [array count])
        {
            buffer = NULL;
        }
        else
        {
            // Use the i-th image for the i-th frame
            buffer = [self pixelBufferFromCGImage:[[array objectAtIndex:i] CGImage] size:CGSizeMake(460, 320)];
        }

        if (buffer)
        {
            // Append the buffer
            [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
            i++;
        }
        else
        {
            // Done! Finish the session:
            [writerInput markAsFinished];
            [videoWriter finishWriting];

            CVPixelBufferPoolRelease(adaptor.pixelBufferPool);
            [videoWriter release];
            [writerInput release];
            NSLog(@"Done");
            break;
        }
    }

    path = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/myMovie.m4v"];
    //UISaveVideoAtPathToSavedPhotosAlbum(path, self, @selector(video:didFinishSavingWithError:contextInfo:), nil);

    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
    NSParameterAssert(library);
    if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:[NSURL fileURLWithPath:path]]) {
        [library writeVideoAtPathToSavedPhotosAlbum:[NSURL fileURLWithPath:path]
                                    completionBlock:^(NSURL *assetURL, NSError *error){}];   // The error is thrown here
    }
    [library release];
}


      
Posted by k_ben


----- Video -------------------------------------------
Getting the FPS: converting the video stream's r_frame_rate gives the frames per second.
double fps   = av_q2d(pFormatCtx->streams[videoStream]->r_frame_rate);

Getting the video length:
 The video stream's time_base is the unit in which the stream's timestamps are expressed,
 and the stream's duration is the total length expressed in those time_base units.

 Multiplying the two gives the total length of the video in seconds.
double dur = av_q2d(pFormatCtx->streams[videoStream]->time_base) * pFormatCtx->streams[videoStream]->duration;


----- Audio -------------------------------------------
In the audio stream's data, the duration value holds the actual number of bytes allocated to the audio.
pFormatCtx->streams[audioStream]->duration

So when building the WAVE header, this value can be used as-is.

wf.nSubChunk2Size = (DWORD)( pFormatCtx->streams[audioStream]->duration );

If you want to know the audio running time,

pFormatCtx->streams[audioStream]->duration / wf.nAvgBytesPerSec

gives the time in seconds.

---------------------------------------------------------------------------------------------
// Creating a WAVE header with FFmpeg

MY_WAVEFORMATEX wf;
wf.Riff   = 0x46464952;    // "RIFF" { 'R', 'I', 'F', 'F' };
wf.Wave   = 0x45564157;    // "WAVE" { 'W', 'A', 'V', 'E' };
wf.Fmt   = 0x20746D66;    // "fmt " { 'f', 'm', 't', ' ' };
wf.nSubChunk1Size = 16;
wf.wFormatTag  = WAVE_FORMAT_PCM;  // PCM_WAVE = 1
wf.nChannels  = pCodecCtx_a->channels > 2 ? 2 : pCodecCtx_a->channels;
wf.nSamplesPerSec = pCodecCtx_a->sample_rate;
switch(pCodecCtx_a->sample_fmt) {
    case SAMPLE_FMT_U8 :     wf.wBitsPerSample = 8;     break;
    case SAMPLE_FMT_S16 :    wf.wBitsPerSample = 16;   break;
    case SAMPLE_FMT_S32 :    wf.wBitsPerSample = 32;   break;
    default :    bAudio = false;                                           break;
}
wf.nBlockAlign  = (wf.wBitsPerSample / 8) * wf.nChannels;
wf.nAvgBytesPerSec = wf.nSamplesPerSec * wf.nBlockAlign;
wf.Data    = 0x61746164;    // "data" { 'd', 'a', 't', 'a' };
wf.nSubChunk2Size = (DWORD)( pFormatCtx->streams[audioStream]->duration );
wf.nChunkSize  = wf.nSubChunk2Size + sizeof(MY_WAVEFORMATEX) - 8;

      
Posted by k_ben


A site that covers things like playback and editing:

http://www.raywenderlich.com/13418/how-to-play-record-edit-videos-in-ios


How to Play, Record, and Edit Videos in iOS

Learn how to play, record, and edit videos on iOS!

This is a blog post by iOS Tutorial Team member Abdul Azeem, software architect and co-founder at Datainvent Systems, a software development and IT services company.

Update 8/14/12: Fixes and clarifications made by Joseph Neuman.

Recording videos (and playing around with them programmatically) is one of the coolest things you can do with your phone, but surprisingly relatively few apps make use of it.

This is likely because learning the technology behind video recording and editing in iOS – AVFoundation – is notoriously difficult.

And to make it worse, there is very little documentation on how to accomplish anything with AVFoundation. One of the few resources available is the WWDC 2010 AVFoundation session video, but it only takes you so far.

There should be an easier way. Thus was born this tutorial! :]

In this tutorial, we’ll give you hands-on experience with the AVFoundation APIs so you can start using them in your own apps. You’ll learn how to:

  • Select and play a video from the media library.
  • Record and save a video to the media library.
  • Merge multiple videos together into a combined video, complete with a custom soundtrack! :]

Are you ready? Lights, cameras, action!

Getting Started

Let’s get started by creating a simple app that will allow you to play and record videos and save them to files.

Start Xcode and create a new project with the iOS\Application\Single View Application template. Enter “VideoPlayRecord” for the project name, choose iPhone for the Device Family, make sure the “Use Storyboard” and “Use Automatic Reference Counting” options are checked, and save the project to a location of your choice.

Next up, add some of the necessary frameworks to your project.

Select the root of the project in the “Project Navigator” pane in the left sidebar to bring up the project information in the central pane. If the project target is not selected, select it, then switch to the “Build Phases” tab.

Now click the triangle next to the “Link Binary With Libraries” section to expand it. Here you can add additional libraries/frameworks to your project.

[Image: Video1]

Click the (+) button to add frameworks. You can select multiple items in the dialog that opens by command-clicking on each item. Add the following frameworks to your project:

  • AssetsLibrary
  • AVFoundation
  • CoreMedia
  • MediaPlayer
  • MobileCoreServices
  • QuartzCore

In this project, you’ll create an app with four screens. The first will simply have three buttons that will allow you to navigate to the following three screens:

  • Video Play
  • Video Record
  • Video Merge

Get Your Story Straight

Select MainStoryboard.storyboard in the main window to see a view controller. You need this view controller to be embedded in a navigation controller because there are going to be multiple screens in the app.

To do this, first click the view controller to give it focus, then select Editor\Embed In\Navigation Controller from the menu. The view controller now has a segue from a navigation controller.

Now, drag three UIButtons from the Object Library (at the bottom half of the right sidebar – if the Object Library isn’t selected, it’s the third tab) to the view controller. Once you’ve placed them in the view to your satisfaction, set the titles of the buttons as follows:

  1. Select and Play Video
  2. Record and Save Video
  3. Merge Video

You can set the titles for each button by tapping the button to select it, and then editing the Title property for the button in the Attributes Inspector, which is the fourth tab in the top half of the right sidebar.

Next, set up three view controllers for the views that will be displayed via these buttons. Do this by creating three UIViewController subclass objects using the iOS\Cocoa Touch\UIViewController subclass template. Name the new classes PlayVideoViewController, RecordVideoViewController, and MergeVideoViewController. As you're using storyboards, make sure you uncheck the “With XIB for user interface” option for each class.

Now switch back to MainStoryboard.storyboard and drag three UIViewControllers from the Object Library onto your storyboard. Select each view controller object in turn and switch to the Identity Inspector (the third tab in the top half of the right sidebar) to set the class for each view controller as follows:

  1. PlayVideoViewController
  2. RecordVideoViewController
  3. MergeVideoViewController

[Image: Video2]

Now you’ve got to hook all of these things together. You’ll do this by creating a segue from each button to the new view controller it will load.

Select each button in turn, ensure that the Connections Inspector (sixth tab in the top half of the right sidebar) is open, and drag from the “Push” connector to the relevant view controller.

[Image: Video3]

Once you’re done, your storyboard should look similar to the screen below:

[Image: Video4]

Great, you’ve set up the basic UI! Build the application and run it to ensure that the three buttons work as intended, each leading to a secondary screen.

If you’re confused about storyboards and how to set them up, don’t worry! There’s a tutorial for that. Check out the Beginning Storyboards in iOS 5 tutorial series.

Now that your UI is working, it’s time to create those secondary screens and give some substance to the form!

Select and Play Video

Switch to MainStoryboard.storyboard and create a new button titled “Play Video” in the Play Video View Controller. Hook the new button to an action in the PlayVideoViewController class by doing the following:

  1. Switch to the Assistant Editor view by tapping the middle button in the Editor section of the toolbar at the top of the Xcode window. This should open up a split view where you can see both the interface and its matching class.
  2. Tap on the new button you just created to select it.
  3. Switch to the Connections Inspector (sixth tab in the top half of the right sidebar).
  4. Control-drag from the Touch Up Inside event to the line beneath the @interface line in the PlayVideoViewController source code, and let go.
  5. You should see a dialog similar to the one in the image below. Type in “playVideo” as the action name and click “Connect.”

[Image: Video5]

You just set up an action in PlayVideoViewController named playVideo, which will be executed whenever you tap the “Play Video” button. But you still have to implement the new playVideo action.

Start by adding the following import statements to the top of PlayVideoViewController.h:

#import <MobileCoreServices/UTCoreTypes.h>
#import <MediaPlayer/MediaPlayer.h>

The MediaPlayer.h header gives you access to the MediaPlayer object that will be used to play the selected video. UTCoreTypes.h defines a constant value named “kUTTypeMovie,” which you’ll need to refer to when selecting media.

Now add the following code to the end of the @interface line below the #import statements:

<UIImagePickerControllerDelegate, UINavigationControllerDelegate>

This sets up the PlayVideoViewController as a delegate for UIImagePickerController and UINavigationController, so that you can use the UIImagePickerController in your class. Specifically, you'll be using it to browse videos in your photo library.

What is this UIImagePickerController class provided by Apple? It offers a basic, customizable user interface for taking pictures and recording movies. It also provides some simple editing functionality for newly-captured media. If you don't need a fully-customized UI, it's generally better to use an image picker controller to select audio and video files from the media library.

To browse media, you need to open an instance of UIImagePickerController as a pop-up view. Add the definition for a method to do this in PlayVideoViewController.h, above the @end line:

// For opening UIImagePickerController
-(BOOL)startMediaBrowserFromViewController:(UIViewController*)controller usingDelegate:(id <UIImagePickerControllerDelegate, UINavigationControllerDelegate>)delegate;

Now switch to PlayVideoViewController.m and add the following code to the playVideo method:

[self startMediaBrowserFromViewController:self usingDelegate:self];

The above code ensures that tapping the “Play Video” button will open the UIImagePickerController, allowing the user to select a video file from the media library.

Now, add the implementation for startMediaBrowserFromViewController to the bottom of the file (but above the final @end):

-(BOOL)startMediaBrowserFromViewController:(UIViewController*)controller usingDelegate:(id <UIImagePickerControllerDelegate, UINavigationControllerDelegate>)delegate {
    // 1 - Validations
    if (([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeSavedPhotosAlbum] == NO)
        || (delegate == nil)
        || (controller == nil)) {
        return NO;
    }
    // 2 - Get image picker
    UIImagePickerController *mediaUI = [[UIImagePickerController alloc] init];
    mediaUI.sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum;
    mediaUI.mediaTypes = [[NSArray alloc] initWithObjects: (NSString *) kUTTypeMovie, nil];
    // Hides the controls for moving & scaling pictures, or for
    // trimming movies. To instead show the controls, use YES.
    mediaUI.allowsEditing = YES;
    mediaUI.delegate = delegate;
    // 3 - Display image picker
    [controller presentModalViewController:mediaUI animated:YES];
    return YES;
}

In the above code, you do the following:

  1. Check if the UIImagePickerControllerSourceTypeSavedPhotosAlbum (the defined source) is available on the device. This check is essential whenever you use a UIImagePickerController to pick media. If you don’t do it, you might try to pick media from a non-existent media library, resulting in crashes or other unexpected issues.
  2. If the source you want is available, you create a new UIImagePickerController object and set its source and media type. Only “kUTTypeMovie” is included in the mediaTypes array, as you only need video. You can include “kUTTypeImage” in the array to select images as well.
  3. Finally, you present the UIImagePickerController as a modal view controller.

Now you’re ready to give your project another whirl! Build and run.

If you have any videos in your media library, you should see them presented, similar to the following screenshot, when you tap the “Select and Play Video” button on the first screen, and then tap the “Play Video” button on the second screen.

Note: If you run this project on the simulator, you’ll have no way to capture video. Plus, you’ll need to figure out a way to add videos to the media library manually. In other words, I recommend you test this project on a device!

Once you see the list of videos, select one. You’ll be taken to another screen that shows the video in detail. Tap the “Choose” button to actually select the video here.

Hang on! If you tap “Choose,” nothing happens, except that the app returns to the Play Video screen! This is because you haven’t implemented any delegate methods to handle the actions you carried out while displaying the image picker.

UIImagePickerController has a delegate callback method that can be executed when media is selected. Implement this method by adding the following code to the end of PlayVideoViewController.m:

-(void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    // 1 - Get media type
    NSString *mediaType = [info objectForKey: UIImagePickerControllerMediaType];
    // 2 - Dismiss image picker
    [self dismissModalViewControllerAnimated:NO];
    // Handle a movie capture
    if (CFStringCompare ((__bridge_retained CFStringRef)mediaType, kUTTypeMovie, 0) == kCFCompareEqualTo) {
        // 3 - Play the video
        MPMoviePlayerViewController *theMovie = [[MPMoviePlayerViewController alloc] 
            initWithContentURL:[info objectForKey:UIImagePickerControllerMediaURL]];
        [self presentMoviePlayerViewControllerAnimated:theMovie];
        // 4 - Register for the playback finished notification
        [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(myMovieFinishedCallback:)
            name:MPMoviePlayerPlaybackDidFinishNotification object:theMovie];
    }
}

The above code does the following:

  1. Gets the media type so you can verify later on that the selected media is a video.
  2. Dismisses the image picker so that it’s no longer displayed on screen.
  3. Verifies that the selected media is a video, and then creates an instance of MPMoviePlayerViewController to play it.
  4. Adds a callback method that will be executed once the movie finishes playing.

The myMovieFinishedCallback: method referenced in step #4 needs to be implemented. Add the following code to the end of PlayVideoViewController.m:

// When the movie is done, release the controller.
-(void)myMovieFinishedCallback:(NSNotification*)aNotification {
    [self dismissMoviePlayerViewControllerAnimated];
    MPMoviePlayerController* theMovie = [aNotification object];
    [[NSNotificationCenter defaultCenter] removeObserver:self 
        name:MPMoviePlayerPlaybackDidFinishNotification object:theMovie];
}

The last thing to do is to add a handler for when the user taps “Cancel” instead of selecting a video. Add the following code right below imagePickerController:didFinishPickingMediaWithInfo::

// For responding to the user tapping Cancel.
-(void)imagePickerControllerDidCancel:(UIImagePickerController *)picker {
    [self dismissModalViewControllerAnimated: YES];
}

If the user cancels the operation, the image picker gets dismissed.

Compile and run your project. Press the “Select and Play Video” button, then the “Play Video” button, and finally choose a video from the list. You should be able to see the video playing in the media player.

Record and Save Video

Now that you have video playback working, it’s time to record a video using the device’s camera and save it to the media library.

Switch back to the storyboard and do the following:

  1. Add a new button titled “Record Video” to the Record Video View Controller.
  2. As before, switch to Assistant Editor mode and connect the “Record Video” button to an action named recordAndPlay:.

[Image: Video6]

Time to get coding! Replace the contents of RecordVideoViewController.h with the following:

#import <MediaPlayer/MediaPlayer.h>
#import <MobileCoreServices/UTCoreTypes.h>
#import <AssetsLibrary/AssetsLibrary.h>
 
@interface RecordVideoViewController : UIViewController <UIImagePickerControllerDelegate, UINavigationControllerDelegate>
 
-(IBAction)recordAndPlay:(id)sender;
-(BOOL)startCameraControllerFromViewController:(UIViewController*)controller 
    usingDelegate:(id <UIImagePickerControllerDelegate, UINavigationControllerDelegate>)delegate;
-(void)video:(NSString *)videoPath didFinishSavingWithError:(NSError *)error contextInfo:(void*)contextInfo;
 
@end

You may have noticed: some of this looks similar to what you did in PlayVideoViewController. As for the bits that don’t:

The AssetsLibrary.h import provides access to the videos and photos under the control of the Photos application. As you want to save your video to the Saved Photos library, you need access to the AssetsLibrary framework.

The asset library includes media that is in the Saved Photos album, media coming from iTunes, and media that was directly imported onto the device. You use AssetsLibrary to retrieve a list of all asset groups and to save images and videos into the Saved Photos album.
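
For instance, saving a movie file through the asset library directly might look something like the sketch below (moviePath here is a hypothetical path to a recorded video; the tutorial itself uses UISaveVideoAtPathToSavedPhotosAlbum instead):

ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
NSURL *videoURL = [NSURL fileURLWithPath:moviePath]; // hypothetical path to a recorded movie file
if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:videoURL]) {
    [library writeVideoAtPathToSavedPhotosAlbum:videoURL
                                completionBlock:^(NSURL *assetURL, NSError *error) {
        NSLog(@"Saved to %@ (error: %@)", assetURL, error);
    }];
}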

The other new item is video:didFinishSavingWithError:contextInfo:. This method, as the name implies, is executed after a video is saved to the Asset/Photo Library.

Switch to RecordVideoViewController.m and add the following to recordAndPlay::

[self startCameraControllerFromViewController:self usingDelegate:self];

You are again in familiar territory. The code simply calls startCameraControllerFromViewController:usingDelegate: when the “Record Video” button is tapped. Of course, this means you should add the implementation for the method next. Add the following code to the end of the file (but before the final @end):

-(BOOL)startCameraControllerFromViewController:(UIViewController*)controller
    usingDelegate:(id <UIImagePickerControllerDelegate, UINavigationControllerDelegate>)delegate {
    // 1 - Validations
    if (([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera] == NO)
        || (delegate == nil)
        || (controller == nil)) {
        return NO;
    }
    // 2 - Get image picker
    UIImagePickerController *cameraUI = [[UIImagePickerController alloc] init];
    cameraUI.sourceType = UIImagePickerControllerSourceTypeCamera;
    // Displays a control that allows the user to choose movie capture
    cameraUI.mediaTypes = [[NSArray alloc] initWithObjects:(NSString *)kUTTypeMovie, nil];
    // Hides the controls for moving & scaling pictures, or for
    // trimming movies. To instead show the controls, use YES.
    cameraUI.allowsEditing = NO;
    cameraUI.delegate = delegate;
    // 3 - Display image picker
    [controller presentModalViewController: cameraUI animated: YES];
    return YES;
}

In the code above, you check for “UIImagePickerControllerSourceTypeCamera” instead of “UIImagePickerControllerSourceTypeSavedPhotosAlbum” because you want to use the camera. The rest of the code is mostly identical to what you used before.

Build and run your code to see what you’ve got so far.

Go to the Record screen and press the “Record Video” button. Instead of the Photo Gallery, the camera UI opens. Start recording a video by tapping the red record button at the bottom of the screen, and tap it again when you’re done recording.

This video is more exciting than it looks. I can't post it here, but there's a hot babe just offscreen! Just kidding, but it is more exciting when you see this running on your own device. :]

When you get to the next screen, you can opt to use the recorded video or re-take the video. If you select “Use,” you’ll notice that nothing happens – that’s because, you guessed it, there is no callback method implemented. You need the callback method to save the recorded video to the media library.

To implement the callback methods, add the following code to the end of RecordVideoViewController.m:

-(void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    NSString *mediaType = [info objectForKey: UIImagePickerControllerMediaType];
    [self dismissModalViewControllerAnimated:NO];
    // Handle a movie capture
    if (CFStringCompare ((__bridge_retained CFStringRef) mediaType, kUTTypeMovie, 0) == kCFCompareEqualTo) {
        NSString *moviePath = [[info objectForKey:UIImagePickerControllerMediaURL] path];
        if (UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(moviePath)) {
            UISaveVideoAtPathToSavedPhotosAlbum(moviePath, self, 
                @selector(video:didFinishSavingWithError:contextInfo:), nil);
        } 
    }
}
 
-(void)video:(NSString*)videoPath didFinishSavingWithError:(NSError*)error contextInfo:(void*)contextInfo {
    if (error) {
        UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Error" message:@"Video Saving Failed" 
            delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil];
        [alert show];
    } else {
        UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Video Saved" message:@"Saved To Photo Album" 
            delegate:self cancelButtonTitle:@"OK" otherButtonTitles:nil];
        [alert show];
    }
}

In the above code, imagePickerController:didFinishPickingMediaWithInfo: gives you a moviePath. You verify that the movie can be saved to the device’s photo album, and save it if so.

UISaveVideoAtPathToSavedPhotosAlbum is the default method provided by the SDK to save videos to the Photos Album. As parameters, you pass both the path to the video to be saved, as well as a callback method that will inform you of the status of the save operation.

Build the code and run it. Record a video and select “Use.” If the “Video Saved” alert pops up, your video has been successfully saved to the photo library.

A Brief Intro to AVFoundation

Now that your video playback and recording is up and running, let’s move on to something a bit more complex: AVFoundation.

Since iOS 4.0, the iOS SDK provides a number of video editing APIs in the AVFoundation framework. With these APIs, you can apply any kind of CGAffineTransform to a video and merge multiple video and audio files together into a single video.

These last few sections of the tutorial will walk you through merging two videos into a single video and adding a background audio track.

Before diving into the code, let’s discuss some theory first.

AVAsset

This is an abstract class that represents timed audiovisual media such as video and audio. Each asset contains a collection of tracks intended to be presented or processed together, each of a uniform media type, including but not limited to audio, video, text, closed captions, and subtitles.

An AVAsset object defines the collective properties of the tracks that comprise the asset. A track is represented by an instance of AVAssetTrack.

In a typical simple case, one track represents the audio component and another represents the video component; in a complex composition, there may be multiple overlapping tracks of audio and video. You will represent the video and audio files you’ll merge together as AVAsset objects.
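
For example (a small sketch; the file URL is a placeholder, not from the tutorial), an asset can be loaded from a movie file and its tracks inspected like this:

NSURL *movieURL = [NSURL fileURLWithPath:@"/path/to/movie.mov"]; // placeholder path
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:movieURL options:nil];

// Each uniform-media-type component of the asset is exposed as an AVAssetTrack
NSArray *videoTracks = [asset tracksWithMediaType:AVMediaTypeVideo];
NSArray *audioTracks = [asset tracksWithMediaType:AVMediaTypeAudio];
NSLog(@"%lu video track(s), %lu audio track(s), duration %.2f s",
      (unsigned long)[videoTracks count], (unsigned long)[audioTracks count],
      CMTimeGetSeconds(asset.duration));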

AVComposition

An AVComposition object combines media data from multiple file-based sources in a custom temporal arrangement in order to present or process it together. All file-based audiovisual assets are eligible to be combined, regardless of container type.

At its top level, an AVComposition is a collection of tracks, each presenting media of a specific type such as audio or video, according to a timeline. Each track is represented by an instance of AVCompositionTrack.

AVMutableComposition and AVMutableCompositionTrack

A higher-level interface for constructing compositions is also presented by AVMutableComposition and AVMutableCompositionTrack. These objects offer insertion, removal, and scaling operations without direct manipulation of the trackSegment arrays of composition tracks.

AVMutableComposition and AVMutableCompositionTrack make use of higher-level constructs such as AVAsset and AVAssetTrack. This means the client can make use of the same references to candidate sources that it would have created in order to inspect or preview them prior to inclusion in a composition.

In short, you have an AVMutableComposition and you can add multiple AVMutableCompositionTrack instances to it. Each AVMutableCompositionTrack will have a separate media asset.

And the Rest

In order to apply a CGAffineTransform to a track, you will make use of AVVideoCompositionInstruction and AVVideoComposition. An AVVideoCompositionInstruction object represents an operation to be performed by a compositor. The object contains multiple AVMutableVideoCompositionLayerInstruction objects.

You use an AVVideoCompositionLayerInstruction object to modify the transform and opacity ramps to apply to a given track in an AV composition. AVMutableVideoCompositionLayerInstruction is a mutable subclass of AVVideoCompositionLayerInstruction.

An AVVideoComposition object maintains an array of instructions to perform its composition, and an AVMutableVideoComposition object represents a mutable video composition.

Conclusion

To sum it all up:

  • You have a main AVMutableComposition object that contains multiple AVMutableCompositionTrack instances. Each track represents an asset.
  • You have AVMutableVideoComposition objects that contain multiple AVMutableVideoCompositionInstructions.
  • Each AVMutableVideoCompositionInstruction contains multiple AVMutableVideoCompositionLayerInstruction instances.
  • Each layer instruction is used to apply a certain transform to a given track.

Got all that? There will be a test at the end before you can download the project sample code. ;]

Now you have at least heard of all the major objects you will use to merge your media. It may be a little confusing, but things will get clearer as you write some code. I promise!

Merge Video

Now to put that theory to use! Open MainStoryboard.storyboard and select the Merge Video View Controller. Add four buttons to the screen and name them as follows:

  1. Load Asset 1
  2. Load Asset 2
  3. Load Audio
  4. Merge and Save Video

Switch to the Assistant Editor mode and connect your four buttons to the following actions, as before:

  1. loadAssetOne:
  2. loadAssetTwo:
  3. loadAudio:
  4. mergeAndSave:

The final result should look something like this:

[Screenshot: the Merge Video screen with the four buttons connected to their actions]

Now switch to MergeVideoViewController.h and replace its contents with:

#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>
#import <MobileCoreServices/UTCoreTypes.h>
#import <AssetsLibrary/AssetsLibrary.h>
#import <MediaPlayer/MediaPlayer.h>
 
@interface MergeVideoViewController: UIViewController {
    BOOL isSelectingAssetOne;
}
 
@property(nonatomic, strong) AVAsset *firstAsset;
@property(nonatomic, strong) AVAsset *secondAsset;
@property(nonatomic, strong) AVAsset *audioAsset;
@property (weak, nonatomic) IBOutlet UIActivityIndicatorView *activityView;
 
-(IBAction)loadAssetOne:(id)sender;
-(IBAction)loadAssetTwo:(id)sender;
-(IBAction)loadAudio:(id)sender;
-(IBAction)mergeAndSave:(id)sender;
-(BOOL)startMediaBrowserFromViewController:(UIViewController*)controller usingDelegate:(id)delegate;
-(void)exportDidFinish:(AVAssetExportSession*)session;
 
@end

Most of the above should be familiar by now. There are a few new properties, but they are mostly to hold references to the assets that you’ll add to create the final merged video. In addition to the assets, there’s an activity indicator that will display while the app is merging files, since the process can take some time to complete.

To synthesize the properties you added above, switch to MergeVideoViewController.m and add the following at the top of the file, right below the @implementation line:

@synthesize firstAsset, secondAsset, audioAsset;
@synthesize activityView;

Then, add the following to loadAssetOne:

if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeSavedPhotosAlbum] == NO) {
    UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Error" message:@"No Saved Album Found"  
        delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil];
    [alert show];        
} else {
    isSelectingAssetOne = TRUE;
    [self startMediaBrowserFromViewController:self usingDelegate:self];  
}

Add this code to loadAssetTwo:

if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeSavedPhotosAlbum] == NO) {
    UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Error" message:@"No Saved Album Found" 
        delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil];
    [alert show];
} else {
    isSelectingAssetOne = FALSE;
    [self startMediaBrowserFromViewController:self usingDelegate:self];  
}

Notice that the code in both the above instances is almost identical, except for the value assigned to isSelectingAssetOne. You use a UIImagePickerController to select the video files as you did in the “Play Video” section. The isSelectingAssetOne variable is used to identify which asset is currently selected.

Add the following code to the end of the file for the UIImagePickerController display and handling:

-(BOOL)startMediaBrowserFromViewController:(UIViewController*)controller usingDelegate:(id)delegate {
    // 1 - Validation
    if (([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeSavedPhotosAlbum] == NO)
        || (delegate == nil)
        || (controller == nil)) {
        return NO;
    }
    // 2 - Create image picker
    UIImagePickerController *mediaUI = [[UIImagePickerController alloc] init];
    mediaUI.sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum;
    mediaUI.mediaTypes = [[NSArray alloc] initWithObjects:(NSString *)kUTTypeMovie, nil];
    // Shows the controls for moving & scaling pictures, or for
    // trimming movies. To hide the controls, set allowsEditing to NO.
    mediaUI.allowsEditing = YES;
    mediaUI.delegate = delegate;
    // 3 - Display image picker
    [controller presentModalViewController: mediaUI animated: YES];
    return YES;
}
 
-(void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    // 1 - Get media type
    NSString *mediaType = [info objectForKey: UIImagePickerControllerMediaType];
    // 2 - Dismiss image picker
    [self dismissModalViewControllerAnimated:NO];
    // 3 - Handle video selection
    if (CFStringCompare ((__bridge CFStringRef) mediaType, kUTTypeMovie, 0) == kCFCompareEqualTo) {
        if (isSelectingAssetOne){
            NSLog(@"Video One  Loaded");
            UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Asset Loaded" message:@"Video One Loaded" 
                delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil];
            [alert show];
            firstAsset = [AVAsset assetWithURL:[info objectForKey:UIImagePickerControllerMediaURL]];
        } else {
            NSLog(@"Video two Loaded");
            UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Asset Loaded" message:@"Video Two Loaded" 
                delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil];
            [alert show];
            secondAsset = [AVAsset assetWithURL:[info objectForKey:UIImagePickerControllerMediaURL]];        
        }
    }
}

Notice that in imagePickerController:didFinishPickingMediaWithInfo:, you initialize each asset variable using the media URL returned by the image picker. Also note how the isSelectingAssetOne variable is used to determine which asset variable is set.

At this point, you have the code in place to select the two video assets.

Compile and run, and make sure you have at least two videos in your library. Then select the “Merge Videos” option, and select two videos. If everything works, you should see the “Asset Loaded” message upon selecting each video.

The next step is to add the functionality to select the audio file.

The UIImagePickerController only provides functionality to select video and images from the media library. To select audio files from your music library, you will use the MPMediaPickerController. It works exactly the same as UIImagePickerController, but instead of images and video, it accesses audio files in the media library.

Add the following code to loadAudio:

MPMediaPickerController *mediaPicker = [[MPMediaPickerController alloc] initWithMediaTypes:MPMediaTypeAny];
mediaPicker.delegate = self;
mediaPicker.prompt = @"Select Audio";
[self presentModalViewController:mediaPicker animated:YES];

The above code creates a new MPMediaPickerController instance and displays it as a modal view controller.

Build and run. Now when you tap the “Load Audio” button, you can access the audio library on your device. (Of course, you’ll need some audio files on your device. Otherwise, the list will be empty.)

If you select a song from the list, you’ll notice that nothing happens. That’s right, MPMediaPickerController needs delegate methods! Add the following two methods at the end of the file:

-(void) mediaPicker:(MPMediaPickerController *)mediaPicker didPickMediaItems:(MPMediaItemCollection *)mediaItemCollection {
    NSArray *selectedSong = [mediaItemCollection items];
    if ([selectedSong count] > 0) {
        MPMediaItem *songItem = [selectedSong objectAtIndex:0];
        NSURL *songURL = [songItem valueForProperty:MPMediaItemPropertyAssetURL];
        audioAsset = [AVAsset assetWithURL:songURL];
         NSLog(@"Audio Loaded");
         UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Asset Loaded" message:@"Audio Loaded" 
             delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil];
         [alert show];
    }
    [self dismissModalViewControllerAnimated:YES];
}
 
-(void)mediaPickerDidCancel:(MPMediaPickerController *)mediaPicker {
    [self dismissModalViewControllerAnimated: YES];
}

The code is very similar to the delegate methods for UIImagePickerController. You set the audio asset based on the media item selected via the MPMediaPickerController.

Build and run again. Go to the Merge Videos screen and select an audio file. If there are no errors, you should see the “Audio Loaded” message.

You now have all your video and audio assets loading correctly. It’s time to merge the various media files into one file.

But before you get into that code, you have to do a little bit of setup. Add the following code to mergeAndSave::

    if (firstAsset !=nil && secondAsset!=nil) {
        [activityView startAnimating];
        // 1 - Create AVMutableComposition object. This object will hold your AVMutableCompositionTrack instances.
        AVMutableComposition *mixComposition = [[AVMutableComposition alloc] init];
        // 2 - Video track
        AVMutableCompositionTrack *firstTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo 
            preferredTrackID:kCMPersistentTrackID_Invalid];
        [firstTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, firstAsset.duration) 
            ofTrack:[[firstAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] atTime:kCMTimeZero error:nil];
        [firstTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, secondAsset.duration) 
            ofTrack:[[secondAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] atTime:firstAsset.duration error:nil];
        // 3 - Audio track
        if (audioAsset!=nil){
            AVMutableCompositionTrack *AudioTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeAudio 
                preferredTrackID:kCMPersistentTrackID_Invalid];
            [AudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, CMTimeAdd(firstAsset.duration, secondAsset.duration)) 
                ofTrack:[[audioAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0] atTime:kCMTimeZero error:nil];
        } 
        // 4 - Get path
        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *documentsDirectory = [paths objectAtIndex:0];
        NSString *myPathDocs =  [documentsDirectory stringByAppendingPathComponent:
            [NSString stringWithFormat:@"mergeVideo-%d.mov",arc4random() % 1000]];
        NSURL *url = [NSURL fileURLWithPath:myPathDocs];
        // 5 - Create exporter
        AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mixComposition 
            presetName:AVAssetExportPresetHighestQuality];
        exporter.outputURL=url;
        exporter.outputFileType = AVFileTypeQuickTimeMovie;
        exporter.shouldOptimizeForNetworkUse = YES;
        [exporter exportAsynchronouslyWithCompletionHandler:^{
             dispatch_async(dispatch_get_main_queue(), ^{
                 [self exportDidFinish:exporter];
             });
         }];
    }

Here’s a step-by-step breakdown of the above code:

  1. You create an AVMutableComposition object to hold your video and audio tracks and transform effects.
  2. Next, you create an AVMutableCompositionTrack for the video and add it to your AVMutableComposition object. Then you insert your two videos to the newly created AVMutableCompositionTrack.

    Note that the insertTimeRange method allows you to insert just a part of a video into your main composition instead of the whole video. This way, you can trim the video to a time range of your choosing.

    In this instance, you want to insert the whole video, so you create a time range from kCMTimeZero to your video asset duration. The atTime parameter allows you to place your video/audio track wherever you want it in your composition. Notice how firstAsset is inserted at time zero, and secondAsset is inserted at the end of the first video. This tutorial assumes you want your video assets one after the other, but you can also overlap the assets by playing with the time ranges.

    For working with time ranges, you use CMTime structs. A CMTime is a non-opaque struct that represents a time, which can be either a timestamp or a duration (see the short sketch after this list).

  3. Similarly, you create a new AVMutableCompositionTrack for your audio and add it to the main composition. This time you set the audio time range to the sum of the durations of the first and second videos, since that will be the complete length of your video.
  4. Before you can save the final video, you need a path for the saved file. So create a random file name that points to a file in the documents folder.
  5. Finally, render and export the merged video. To do this, you create an AVAssetExportSession object that transcodes the contents of an AVAsset source object to create an output of the form described by a specified export preset.
  6. After you’ve initialized an export session with the asset that contains the source media, the export preset name (presetName), and the output file type (outputFileType), you start the export running by invoking exportAsynchronouslyWithCompletionHandler:.
  7. Because the export is performed asynchronously, this method returns immediately. The completion handler you supply to exportAsynchronouslyWithCompletionHandler: is called whether the export fails, completes, or is canceled. Upon completion, the exporter’s status property indicates whether the export has completed successfully. If it has failed, the value of the exporter’s error property supplies additional information about the reason for the failure.
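
Here is a small, self-contained sketch of the CMTime arithmetic used above (the values are arbitrary examples, not from the tutorial):

// CMTimeMake(value, timescale) represents value/timescale seconds.
CMTime twoSeconds = CMTimeMake(2, 1);     // 2 seconds
CMTime halfSecond = CMTimeMake(1, 2);     // 0.5 seconds
CMTime oneFrame30 = CMTimeMake(1, 30);    // one frame at 30 fps

// Durations add together, and a range is a start time plus a duration.
CMTime total = CMTimeAdd(twoSeconds, halfSecond);         // 2.5 seconds
CMTimeRange range = CMTimeRangeMake(kCMTimeZero, total);  // from 0 s to 2.5 s

NSLog(@"total = %f s, range duration = %f s, one frame = %f s",
      CMTimeGetSeconds(total), CMTimeGetSeconds(range.duration), CMTimeGetSeconds(oneFrame30));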

Notice that the completion handler calls exportDidFinish:, a method that needs implementation. Add the following code to the end of the file:

-(void)exportDidFinish:(AVAssetExportSession*)session {
    if (session.status == AVAssetExportSessionStatusCompleted) {
        NSURL *outputURL = session.outputURL;
        ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
        if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:outputURL]) {
            [library writeVideoAtPathToSavedPhotosAlbum:outputURL completionBlock:^(NSURL *assetURL, NSError *error){
                dispatch_async(dispatch_get_main_queue(), ^{
                    if (error) {
                        UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Error" message:@"Video Saving Failed" 
                            delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil];
                        [alert show];
                    } else {
                        UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Video Saved" message:@"Saved To Photo Album" 
                            delegate:self cancelButtonTitle:@"OK" otherButtonTitles:nil];
                        [alert show];
                    }
                });
            }];
        } 
    }
    audioAsset = nil;
    firstAsset = nil;
    secondAsset = nil;
    [activityView stopAnimating];
}

Once the export completes successfully, the newly exported video is saved to the photo album. You don’t actually need to do this; you could instead use an asset browser to locate the final video in your documents folder. But it’s easier to copy the output video to the photo album so you can see the final output.

Go ahead, build and run your project!

Select the video and audio files and merge the selected files. If the merge was successful, you should see a “Video Saved” message. At this point, your new video should be present in the photo album.

Go to the photo album, or browse using your own “Select and Play Video” screen! You’ll notice that although the videos have been merged, there are some orientation issues. Portrait videos appear in landscape mode, and some videos are turned upside down.

This is due to the default AVAsset orientation. All movie and image files recorded using the default iPhone camera application have the video frame set to landscape, and so the media is saved in landscape mode.

AVAsset has a preferredTransform property that contains the media orientation information, and this is applied to a media file whenever you view it using the Photos app or QuickTime. In the code above, you haven’t applied a transform to your AVAsset objects, hence the orientation issue.

You can correct this easily by applying the necessary transforms to your AVAsset objects. But as your two video files can have different orientations, you’ll need to use two separate AVMutableCompositionTrack instances instead of one as you originally did.

Replace section #2 in mergeAndSave: with the following so that you have two AVMutableCompositionTrack instances instead of one:

// 2 - Create two video tracks
AVMutableCompositionTrack *firstTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo 
    preferredTrackID:kCMPersistentTrackID_Invalid];
[firstTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, firstAsset.duration) 
    ofTrack:[[firstAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] atTime:kCMTimeZero error:nil];
AVMutableCompositionTrack *secondTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo 
    preferredTrackID:kCMPersistentTrackID_Invalid];
[secondTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, secondAsset.duration) 
    ofTrack:[[secondAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] atTime:firstAsset.duration error:nil];

Since you now have two separate AVMutableCompositionTrack instances, you need to apply an AVMutableVideoCompositionLayerInstruction to each track in order to fix the orientation. So add the following code after the code you just replaced (and before section #3):

// 2.1 - Create AVMutableVideoCompositionInstruction
AVMutableVideoCompositionInstruction *mainInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeAdd(firstAsset.duration, secondAsset.duration));
// 2.2 - Create an AVMutableVideoCompositionLayerInstruction for the first track
AVMutableVideoCompositionLayerInstruction *firstlayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:firstTrack];
AVAssetTrack *firstAssetTrack = [[firstAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
UIImageOrientation firstAssetOrientation_  = UIImageOrientationUp;
BOOL isFirstAssetPortrait_  = NO;
CGAffineTransform firstTransform = firstAssetTrack.preferredTransform;
if (firstTransform.a == 0 && firstTransform.b == 1.0 && firstTransform.c == -1.0 && firstTransform.d == 0) {
    firstAssetOrientation_ = UIImageOrientationRight; 
    isFirstAssetPortrait_ = YES;
}
if (firstTransform.a == 0 && firstTransform.b == -1.0 && firstTransform.c == 1.0 && firstTransform.d == 0) {
    firstAssetOrientation_ =  UIImageOrientationLeft; 
    isFirstAssetPortrait_ = YES;
}
if (firstTransform.a == 1.0 && firstTransform.b == 0 && firstTransform.c == 0 && firstTransform.d == 1.0) {
    firstAssetOrientation_ =  UIImageOrientationUp;
}
if (firstTransform.a == -1.0 && firstTransform.b == 0 && firstTransform.c == 0 && firstTransform.d == -1.0) {
    firstAssetOrientation_ = UIImageOrientationDown;
}
[firstlayerInstruction setTransform:firstAssetTrack.preferredTransform atTime:kCMTimeZero];
[firstlayerInstruction setOpacity:0.0 atTime:firstAsset.duration];
// 2.3 - Create an AVMutableVideoCompositionLayerInstruction for the second track
AVMutableVideoCompositionLayerInstruction *secondlayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:secondTrack];
AVAssetTrack *secondAssetTrack = [[secondAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
UIImageOrientation secondAssetOrientation_  = UIImageOrientationUp;
BOOL isSecondAssetPortrait_  = NO;
CGAffineTransform secondTransform = secondAssetTrack.preferredTransform;
if (secondTransform.a == 0 && secondTransform.b == 1.0 && secondTransform.c == -1.0 && secondTransform.d == 0) {
    secondAssetOrientation_= UIImageOrientationRight; 
    isSecondAssetPortrait_ = YES;
}
if (secondTransform.a == 0 && secondTransform.b == -1.0 && secondTransform.c == 1.0 && secondTransform.d == 0) {
    secondAssetOrientation_ =  UIImageOrientationLeft; 
    isSecondAssetPortrait_ = YES;
}
if (secondTransform.a == 1.0 && secondTransform.b == 0 && secondTransform.c == 0 && secondTransform.d == 1.0) {
    secondAssetOrientation_ =  UIImageOrientationUp;
}
if (secondTransform.a == -1.0 && secondTransform.b == 0 && secondTransform.c == 0 && secondTransform.d == -1.0) {
    secondAssetOrientation_ = UIImageOrientationDown;
}
[secondlayerInstruction setTransform:secondAssetTrack.preferredTransform atTime:firstAsset.duration];

In section #2.1, you create an AVMutableVideoCompositionInstruction object that will hold your layer instructions.

Then in section #2.2, you add the orientation fix to your first track as follows:

  • You create an AVMutableVideoCompositionLayerInstruction and associate it with your firstTrack.
  • Next, you create an AVAssetTrack object from your AVAsset. An AVAssetTrack object provides the track-level inspection interface for all assets. You need this object in order to access the preferredTransform and dimensions of the asset.
  • Then, you determine the orientation of your AVAsset. This will be used later when determining the exported video size.
  • Next, you apply the preferredTransform to fix the orientation.
  • You also set the opacity of your first layer to zero at time firstAsset.duration. This is because you want your first track to disappear when it has finished playing. Otherwise, the last frame of the first track will remain on screen and overlap the video from the second track.

The code in section #2.3 is almost identical to that in section #2.2. It’s just the orientation fix applied to the second track.

Next, add the following code right after section #2.3 (and before section #3):

// 2.4 - Add instructions
mainInstruction.layerInstructions = [NSArray arrayWithObjects:firstlayerInstruction, secondlayerInstruction,nil];
AVMutableVideoComposition *mainCompositionInst = [AVMutableVideoComposition videoComposition];
mainCompositionInst.instructions = [NSArray arrayWithObject:mainInstruction];
mainCompositionInst.frameDuration = CMTimeMake(1, 30);
 
CGSize naturalSizeFirst, naturalSizeSecond;
if(isFirstAssetPortrait_){
    naturalSizeFirst = CGSizeMake(firstAssetTrack.naturalSize.height, firstAssetTrack.naturalSize.width);
} else {
    naturalSizeFirst = firstAssetTrack.naturalSize;
}
if(isSecondAssetPortrait_){
    naturalSizeSecond = CGSizeMake(secondAssetTrack.naturalSize.height, secondAssetTrack.naturalSize.width);
} else {
    naturalSizeSecond = secondAssetTrack.naturalSize;
}
 
float renderWidth, renderHeight;
if(naturalSizeFirst.width > naturalSizeSecond.width) {
    renderWidth = naturalSizeFirst.width;
} else {
    renderWidth = naturalSizeSecond.width;
}
if(naturalSizeFirst.height > naturalSizeSecond.height) {
    renderHeight = naturalSizeFirst.height;
} else {
    renderHeight = naturalSizeSecond.height;
}
mainCompositionInst.renderSize = CGSizeMake(renderWidth, renderHeight);

Now that you have your AVMutableVideoCompositionLayerInstruction instances for the first and second tracks, you just add them to the main AVMutableVideoCompositionInstruction object. Next, you add your mainInstruction object to the instructions property of an instance of AVMutableVideoComposition. You also set the frame rate for the composition to 30 frames/second.

Then you have to find the final video’s export size. First we have to check whether the assets are portrait or landscape. To do this, we use the variables isFirstAssetPortrait_ and isSecondAssetPortrait_ from earlier. If they are landscape, we can use the naturalSize property as supplied, but if they are portrait, we must flip the naturalSize so that the width becomes the height and vice versa. We save each of the results to variables.

Then we have to determine which of the two assets is wider and which of the two is taller. This is to ensure the exported video is large enough to accommodate all of each video. With some simple comparisons, we save the results of this to variables as well.

You can then set the renderSize of the export to the found renderWidth and renderHeight.

Now that you’ve got an AVMutableVideoComposition object configured, all you need to do is assign it your exporter. In section #5, insert the following code after line 4 of the section (just before the exportAsynchronouslyWithCompletionHandler: call):

exporter.videoComposition = mainCompositionInst;

Whew – that’s it!

Build and run your project. If you create a new video by combining two videos (and optionally an audio file), you will see that the orientation issues have disappeared when you play back the new merged video.

Where to Go From Here?

OK, I was bluffing about the quiz. Here is the sample project with all of the code from the above tutorial. You’ve earned it.

If you followed along, you should now have a good understanding of how to play video, record video, and merge multiple videos and audio in your apps.

AVFoundation gives you a lot of flexibility when playing around with videos. You can also apply any kind of CGAffineTransform to merge, scale, or position videos.

I would recommend that you have a look at the WWDC 2010 AVFoundation session video if you want to go into a bit more detail. Also, check out the Apple AVFoundation Framework programming guide.

I hope this tutorial has been useful to get you started with video manipulation in iOS. If you have any questions, comments, or suggestions for improvement, please join the forum discussion below!


      
Posted by k_ben

Linking to an External App

Computer/iPhone : 2013. 11. 15. 16:28


 
As shown above, you need to add a URL Types entry to the plist and configure the URL Schemes and URL identifier values as required.


This time, let's look at how AppA, the calling side, makes the call.

I simply created a button and added the calling method to its button event handler.

BOOL isInstalled = [[UIApplication sharedApplication] openURL:[NSURL URLWithString:@"AppB://"]];

if (!isInstalled) {

    // The app is not installed. Guide the user to the App Store...

    //[[UIApplication sharedApplication] openURL: [NSURL URLWithString: appstoreurl]];

    
}

If you call it like this, the other app will be launched.
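
As a side note, here is a small sketch (not from the original post) of checking in advance whether anything handles the scheme, instead of relying on the return value of openURL::

// A small sketch (not from the original post): check whether any app
// handles the AppB:// scheme before trying to open it.
NSURL *appURL = [NSURL URLWithString:@"AppB://"];
if ([[UIApplication sharedApplication] canOpenURL:appURL]) {
    [[UIApplication sharedApplication] openURL:appURL];
} else {
    // Not installed; guide the user to the App Store instead.
    NSLog(@"AppB is not installed");
}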

But what if you want to pass some information along? There's a way to do that too. :)

[[UIApplication sharedApplication] openURL:[NSURL URLWithString:@"AppB://"]];
When you make the call like this, simply append the information you want to pass after the AppB:// part.


[[UIApplication sharedApplication] openURL:[NSURL URLWithString:@"AppB://informationToPass"]];

Like this.


Then, in AppB,


- (BOOL)application:(UIApplication *)application handleOpenURL:(NSURL *)url
this is the method where you can receive the message.

Here is a simple example that shows the entire received message in an alert:
 


- (BOOL)application:(UIApplication *)application handleOpenURL:(NSURL *)url {
    // Show an alert when the app is launched via its URL scheme
    NSString *strURL = [url absoluteString];

    UIAlertView *alertView = [[UIAlertView alloc] initWithTitle:@"call message"
                                                        message:strURL
                                                       delegate:nil
                                              cancelButtonTitle:@"OK"
                                              otherButtonTitles:nil];
    [alertView show];
    [alertView release];

    return YES;
}
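
And if you only want the part after AppB:// rather than the whole URL string, a rough sketch like this could go inside handleOpenURL: (the payload format here is just an example):

// A rough sketch (not from the original post): pull out just the payload
// from a URL like AppB://informationToPass.
NSString *payload = [url host];            // "informationToPass"
if ([payload length] == 0) {
    payload = [url resourceSpecifier];     // fallback: everything after "AppB:" (includes the //)
}
NSLog(@"Received payload: %@", payload);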

      
Posted by k_ben


My poor blog... I haven't been writing posts or looking after it at all,

yet for some reason 70-80 people keep visiting it steadily..;;;


So, excited by the thought that there might be some ad revenue in this,

I went and checked my AdSense earnings...


4.3 dollars -_______________-;;;

Hmm... wow;; ㅜ__ㅜ ~


Just how many visitors a day would it take to actually make money from this;;

It feels like I'd need daily visitors in the thousands before it earns anything..

Whew~


Still, now that I know an average of 70 people stop by, I find myself visiting more often than I used to ^^

I don't really know which posts you come here to read..

but I'll try to keep leaving good posts here ^^



      
Posted by k_ben


Open a terminal and run:


find . | grep -v .svn | grep "\.a" | grep -v "\.app" | xargs grep uniqueIdentifier


and it will find the places where uniqueIdentifier is used.


Delete whatever it finds. -_-

      
Posted by k_ben


My blog was covered in absurd spam comments like these..;;

It took quite a bit of work to delete them all -_-


I've switched comments to require login, so hopefully it'll be less of a problem now...


Leave a blog unattended and the spam comments run wild.. ~


-________-


These darn spammers.. do they earn money for every comment they post..?

Why would anyone do that -_-;


      
Posted by k_ben