http://www.mindfiresolutions.com/Merging-Two-Videos--iPhoneiPad-Tip--1396.php



Suppose you have two video files named video1.mp4 and video2.mp4 and want to merge them into a single video programmatically. This tip might help you; follow the instructions given below.
 
First, add the following frameworks to your project:
a) AVFoundation framework
b) AssetsLibrary framework
c) MediaPlayer framework
d) CoreMedia framework
 
Then import the following headers in the view controller class:
 

#import <MediaPlayer/MediaPlayer.h>
#import <CoreMedia/CoreMedia.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreFoundation/CoreFoundation.h>
#import <AVFoundation/AVBase.h>

@implementation MyVideoViewController

- (void)mergeTwoVideo
{
    AVMutableComposition *composition = [AVMutableComposition composition];

    NSString *path1 = [[NSBundle mainBundle] pathForResource:@"video1" ofType:@"mp4"];
    NSString *path2 = [[NSBundle mainBundle] pathForResource:@"video2" ofType:@"mp4"];

    AVURLAsset *video1 = [[AVURLAsset alloc] initWithURL:[NSURL fileURLWithPath:path1] options:nil];
    AVURLAsset *video2 = [[AVURLAsset alloc] initWithURL:[NSURL fileURLWithPath:path2] options:nil];

    // One composition track receives the video of both assets, back to back.
    AVMutableCompositionTrack *composedTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                                                                        preferredTrackID:kCMPersistentTrackID_Invalid];

    [composedTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, video1.duration)
                           ofTrack:[[video1 tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0]
                            atTime:kCMTimeZero
                             error:nil];

    [composedTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, video2.duration)
                           ofTrack:[[video2 tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0]
                            atTime:video1.duration
                             error:nil];

    NSString *documentsDirectory = [self applicationDocumentsDirectory];
    NSString *myDocumentPath = [documentsDirectory stringByAppendingPathComponent:@"merge_video.mp4"];
    NSURL *url = [[NSURL alloc] initFileURLWithPath:myDocumentPath];

    if ([[NSFileManager defaultManager] fileExistsAtPath:myDocumentPath])
    {
        [[NSFileManager defaultManager] removeItemAtPath:myDocumentPath error:nil];
    }

    AVAssetExportSession *exporter = [[[AVAssetExportSession alloc] initWithAsset:composition presetName:AVAssetExportPresetHighestQuality] autorelease];

    exporter.outputURL = url;
    exporter.outputFileType = @"com.apple.quicktime-movie"; // the UTI behind AVFileTypeQuickTimeMovie
    exporter.shouldOptimizeForNetworkUse = YES;

    [exporter exportAsynchronouslyWithCompletionHandler:^{
        switch ([exporter status]) {
            case AVAssetExportSessionStatusFailed:
                break;
            case AVAssetExportSessionStatusCancelled:
                break;
            case AVAssetExportSessionStatusCompleted:
                break;
            default:
                break;
        }
    }];
}

 // The export is only completed once control reaches the completion handler block
 
 

- (NSString *)applicationDocumentsDirectory
{
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;
    return basePath;
}
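
Once the exporter reaches AVAssetExportSessionStatusCompleted, the merged file is available at the output URL. As a minimal sketch (not part of the original tip), it could then be played with the MediaPlayer framework imported above; -playMergedVideoAtPath: is a hypothetical helper, and because the completion handler runs on a background queue it should be invoked via dispatch_async onto the main queue:

// Sketch only: call from the AVAssetExportSessionStatusCompleted branch, e.g.
// dispatch_async(dispatch_get_main_queue(), ^{ [self playMergedVideoAtPath:myDocumentPath]; });
- (void)playMergedVideoAtPath:(NSString *)path
{
    NSURL *movieURL = [NSURL fileURLWithPath:path];
    MPMoviePlayerViewController *playerVC =
        [[[MPMoviePlayerViewController alloc] initWithContentURL:movieURL] autorelease];
    [self presentMoviePlayerViewControllerAnimated:playerVC];
}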


      


http://www.sunsetlakesoftware.com/2012/02/12/introducing-gpuimage-framework



I'd like to introduce a new open source framework that I've written, called GPUImage. The GPUImage framework is a BSD-licensed iOS library (for which the source code can be found on Github) that lets you apply GPU-accelerated filters and other effects to images, live camera video, and movies. In comparison to Core Image (part of iOS 5.0), GPUImage allows you to write your own custom filters, supports deployment to iOS 4.0, and has a slightly simpler interface. However, it currently lacks some of the more advanced features of Core Image, such as facial detection.

UPDATE (4/15/2012): I've disabled comments, because they were getting out of hand. If you wish to report an issue with the project, or request a feature addition, go to its GitHub page. If you want to ask a question about it, contact me at the email address in the footer of this page, or post in the new forum I have set up for the project.

About a year and a half ago, I gave a talk at SecondConf where I demonstrated the use of OpenGL ES 2.0 shaders to process live video. The subsequent writeup and sample code that came out of that proved to be fairly popular, and I've heard from a number of people who have incorporated that video processing code into their iOS applications. However, the amount of code around the OpenGL ES 2.0 portions of that example made it difficult to customize and reuse. Since much of this code was just scaffolding for interacting with OpenGL ES, it could stand to be encapsulated in an easier to use interface.

(Figure: example of four types of video filters)

Since then, Apple has ported some of their Core Image framework from the Mac to iOS. Core Image provides an interface for doing filtering of images and video on the GPU. Unfortunately, the current implementation on iOS has some limitations. The largest of these is the fact that you can't write your own custom filters based on their kernel language, like you can on the Mac. This severely restricts what you can do with the framework. Other downsides include a somewhat more complex interface and a lack of iOS 4.0 support. Others have complained about some performance overhead, but I've not benchmarked this myself.

Because of the lack of custom filters in Core Image, I decided to convert my video filtering example into a simple Objective-C image and video processing framework. The key feature of this framework is its support for completely customizable filters that you write using the OpenGL Shading Language. It also has a straightforward interface (which you can see some examples of below) and support for iOS 4.0 as a target.

Note that this framework is built around OpenGL ES 2.0, so it will only work on devices that support this API. This means that this framework will not work on the original iPhone, iPhone 3G, and 1st and 2nd generation iPod touches. All other iOS devices are supported.

The following is my first pass of documentation for this framework, an up-to-date version of which can be found within the framework repository on GitHub:

General architecture

GPUImage uses OpenGL ES 2.0 shaders to perform image and video manipulation much faster than could be done in CPU-bound routines. It hides the complexity of interacting with the OpenGL ES API in a simplified Objective-C interface. This interface lets you define input sources for images and video, attach filters in a chain, and send the resulting processed image or video to the screen, to a UIImage, or to a movie on disk.

Images or frames of video are uploaded from source objects, which are subclasses of GPUImageOutput. These include GPUImageVideoCamera (for live video from an iOS camera) and GPUImagePicture (for still images). Source objects upload still image frames to OpenGL ES as textures, then hand those textures off to the next objects in the processing chain.

Filters and other subsequent elements in the chain conform to the GPUImageInput protocol, which lets them take in the supplied or processed texture from the previous link in the chain and do something with it. Objects one step further down the chain are considered targets, and processing can be branched by adding multiple targets to a single output or filter.

For example, an application that takes in live video from the camera, converts that video to a sepia tone, then displays the video onscreen would set up a chain looking something like the following:

GPUImageVideoCamera -> GPUImageSepiaFilter -> GPUImageView
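
For instance, here is a hedged sketch of that chain in code, with a second branch added to illustrate how multiple targets can share one source (assembled from the calls demonstrated later in this article; the view frames and their placement are placeholder choices):

GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];
GPUImagePixellateFilter *pixellateFilter = [[GPUImagePixellateFilter alloc] init];
GPUImageView *sepiaView = [[GPUImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, 320.0, 240.0)];
GPUImageView *pixellatedView = [[GPUImageView alloc] initWithFrame:CGRectMake(0.0, 240.0, 320.0, 240.0)];
// Add both views to your view hierarchy so the output is visible.
 
[videoCamera addTarget:sepiaFilter];        // branch 1: camera -> sepia -> view
[sepiaFilter addTarget:sepiaView];
[videoCamera addTarget:pixellateFilter];    // branch 2: camera -> pixellate -> view
[pixellateFilter addTarget:pixellatedView];
 
[videoCamera startCameraCapture];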

A small number of filters are built in:

  • GPUImageBrightnessFilter
  • GPUImageContrastFilter
  • GPUImageSaturationFilter
  • GPUImageGammaFilter
  • GPUImageColorMatrixFilter
  • GPUImageColorInvertFilter
  • GPUImageSepiaFilter: Simple sepia tone filter
  • GPUImageDissolveBlendFilter
  • GPUImageMultiplyBlendFilter
  • GPUImageOverlayBlendFilter
  • GPUImageDarkenBlendFilter
  • GPUImageLightenBlendFilter
  • GPUImageRotationFilter: This lets you rotate an image left or right by 90 degrees, or flip it horizontally or vertically
  • GPUImagePixellateFilter: Applies a pixellation effect on an image or video, with the fractionalWidthOfAPixel property controlling how large the pixels are, as a fraction of the width and height of the image (see the short sketch after this list)
  • GPUImageSobelEdgeDetectionFilter: Performs edge detection, based on a Sobel 3x3 convolution
  • GPUImageSketchFilter: Converts video to a sketch, and is the inverse of the edge detection filter
  • GPUImageToonFilter
  • GPUImageSwirlFilter
  • GPUImageVignetteFilter
  • GPUImageKuwaharaFilter: Converts the video to an oil painting, but is very slow right now

but you can easily write your own custom filters using the C-like OpenGL Shading Language, as described below.
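
For instance, a hedged one-liner for the pixellation size mentioned in the list above (the 0.05 value is only an illustrative choice, not a framework default):

GPUImagePixellateFilter *pixellateFilter = [[GPUImagePixellateFilter alloc] init];
pixellateFilter.fractionalWidthOfAPixel = 0.05; // each rendered "pixel" spans 5% of the image width and height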

Adding the framework to your iOS project

Once you have the latest source code for the framework, it's fairly straightforward to add it to your application. Start by dragging the GPUImage.xcodeproj file into your application's Xcode project to embed the framework in your project. Next, go to your application's target and add GPUImage as a Target Dependency. Finally, you'll want to drag the libGPUImage.a library from the GPUImage framework's Products folder to the Link Binary With Libraries build phase in your application's target.

GPUImage needs a few other frameworks to be linked into your application, so you'll need to add the following as linked libraries in your application target:

  • CoreMedia
  • CoreVideo
  • OpenGLES
  • AVFoundation
  • QuartzCore

You'll also need to find the framework headers, so within your project's build settings set the Header Search Paths to the relative path from your application to the framework/ subdirectory within the GPUImage source directory. Make this header search path recursive.

To use the GPUImage classes within your application, simply include the core framework header using the following:

#import "GPUImage.h"

As a note: if you run into the error "Unknown class GPUImageView in Interface Builder" or the like when trying to build an interface with Interface Builder, you may need to add -ObjC to your Other Linker Flags in your project's build settings.

Performing common tasks

Filtering live video

To filter live video from an iOS device's camera, you can use code like the following:

GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
GPUImageFilter *customFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromFile:@"CustomShader"];
GPUImageView *filteredVideoView = [[GPUImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, viewWidth, viewHeight)];
 
// Add the view somewhere so it's visible
 
[videoCamera addTarget:customFilter];
[customFilter addTarget:filteredVideoView];
 
[videoCamera startCameraCapture];

This sets up a video source coming from the iOS device's back-facing camera, using a preset that tries to capture at 640x480. A custom filter, using code from the file CustomShader.fsh, is then set as the target for the video frames from the camera. These filtered video frames are finally displayed onscreen with the help of a UIView subclass that can present the filtered OpenGL ES texture that results from this pipeline.

Processing a still image

There are a couple of ways to process a still image and create a result. The first way you can do this is by creating a still image source object and manually creating a filter chain:

UIImage *inputImage = [UIImage imageNamed:@"Lambeau.jpg"];
 
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageSepiaFilter *stillImageFilter = [[GPUImageSepiaFilter alloc] init];
 
[stillImageSource addTarget:stillImageFilter];
[stillImageSource processImage];
 
UIImage *currentFilteredVideoFrame = [stillImageFilter imageFromCurrentlyProcessedOutput];

For single filters that you wish to apply to an image, you can simply do the following:

GPUImageSepiaFilter *stillImageFilter2 = [[GPUImageSepiaFilter alloc] init];
UIImage *quickFilteredImage = [stillImageFilter2 imageByFilteringImage:inputImage];

Writing a custom filter

One significant advantage of this framework over Core Image on iOS (as of iOS 5.0) is the ability to write your own custom image and video processing filters. These filters are supplied as OpenGL ES 2.0 fragment shaders, written in the C-like OpenGL Shading Language.

A custom filter is initialized with code like

GPUImageFilter *customFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromFile:@"CustomShader"];

where the extension used for the fragment shader is .fsh. Additionally, you can use the -initWithFragmentShaderFromString: initializer to provide the fragment shader as a string, if you would not like to ship your fragment shaders in your application bundle.
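
As a hedged illustration of the string-based initializer (the pass-through shader below is only an illustrative example, not something shipped with the framework; it uses the same textureCoordinate and inputImageTexture names required of all GPUImage fragment shaders):

NSString *passthroughShaderString =
    @"varying highp vec2 textureCoordinate;\n"
    @"uniform sampler2D inputImageTexture;\n"
    @"void main() { gl_FragColor = texture2D(inputImageTexture, textureCoordinate); }";
GPUImageFilter *stringFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromString:passthroughShaderString];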

Fragment shaders perform their calculations for each pixel to be rendered at that filter stage. They do this using the OpenGL Shading Language (GLSL), a C-like language with additions specific to 2-D and 3-D graphics. An example of a fragment shader is the following sepia-tone filter:

varying highp vec2 textureCoordinate;
 
uniform sampler2D inputImageTexture;
 
void main()
{
    lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
    lowp vec4 outputColor;
    outputColor.r = (textureColor.r * 0.393) + (textureColor.g * 0.769) + (textureColor.b * 0.189);
    outputColor.g = (textureColor.r * 0.349) + (textureColor.g * 0.686) + (textureColor.b * 0.168);    
    outputColor.b = (textureColor.r * 0.272) + (textureColor.g * 0.534) + (textureColor.b * 0.131);
 
	gl_FragColor = outputColor;
}

For an image filter to be usable within the GPUImage framework, the first two lines that take in the textureCoordinate varying (for the current coordinate within the texture, normalized to 1.0) and the inputImageTexture varying (for the actual input image frame texture) are required.

The remainder of the shader grabs the color of the pixel at this location in the passed-in texture, manipulates it in such a way as to produce a sepia tone, and writes that pixel color out to be used in the next stage of the processing pipeline.

One thing to note when adding fragment shaders to your Xcode project is that Xcode thinks they are source code files. To work around this, you'll need to manually move your shader from the Compile Sources build phase to the Copy Bundle Resources one in order to get the shader to be included in your application bundle.

Filtering and re-encoding a movie

Movies can be loaded into the framework via the GPUImageMovie class, filtered, and then written out using a GPUImageMovieWriter. GPUImageMovieWriter is also fast enough to record video in realtime from an iPhone 4's camera at 640x480, so a direct filtered video source can be fed into it.

The following is an example of how you would load a sample movie, pass it through a pixellation and rotation filter, then record the result to disk as a 480 x 640 h.264 movie:

movieFile = [[GPUImageMovie alloc] initWithURL:sampleURL];
pixellateFilter = [[GPUImagePixellateFilter alloc] init];
GPUImageRotationFilter *rotationFilter = [[GPUImageRotationFilter alloc] initWithRotation:kGPUImageRotateRight];
 
[movieFile addTarget:rotationFilter];
[rotationFilter addTarget:pixellateFilter];
 
NSString *pathToMovie = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/Movie.m4v"];
unlink([pathToMovie UTF8String]);
NSURL *movieURL = [NSURL fileURLWithPath:pathToMovie];
 
movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(480.0, 640.0)];
[pixellateFilter addTarget:movieWriter];
 
[movieWriter startRecording];
[movieFile startProcessing];

Once recording is finished, you need to remove the movie recorder from the filter chain and close off the recording using code like the following:

[pixellateFilter removeTarget:movieWriter];
[movieWriter finishRecording];

A movie won't be usable until it has been finished off, so if this is interrupted before this point, the recording will be lost.

Sample applications

Several sample applications are bundled with the framework source. Most are compatible with both iPhone and iPad-class devices. They attempt to show off various aspects of the framework and should be used as the best examples of the API while the framework is under development. These include:

ColorObjectTracking

A version of my ColorTracking example ported across to use GPUImage, this application uses color in a scene to track objects from a live camera feed. The four views you can switch between include the raw camera feed, the camera feed with pixels matching the color threshold in white, the processed video where positions are encoded as colors within the pixels passing the threshold test, and finally the live video feed with a dot that tracks the selected color. Tapping the screen changes the color to track to match the color of the pixels under your finger. Tapping and dragging on the screen makes the color threshold more or less forgiving. This is most obvious on the second, color thresholding view.

SimpleImageFilter

A bundled JPEG image is loaded into the application at launch, a filter is applied to it, and the result rendered to the screen. Additionally, this sample shows two ways of taking in an image, filtering it, and saving it to disk.

MultiViewFilterExample

From a single camera feed, four views are populated with realtime filters applied to the camera input. One shows the straight camera video, one a preprogrammed sepia tone, and two show custom filters based on shader programs.

FilterShowcase

This demonstrates every filter supplied with GPUImage.

BenchmarkSuite

This is used to test the performance of the overall framework by testing it against CPU-bound routines and Core Image. Benchmarks involving still images and video are run against all three, with results displayed in-application.

Things that need work

This is just a first release, and I'll keep working on this to add more functionality. I also welcome any and all help with enhancing this. Right off the bat, these are missing elements I can think of:

  • Images that exceed 2048 pixels wide or high currently can't be processed on devices older than the iPad 2 or iPhone 4S.
  • Currently, it's difficult to create a custom filter with additional attribute inputs and a modified vertex shader.
  • Many common filters aren't built into the framework yet.
  • Video capture and processing should be done on a background GCD serial queue.
  • I'm sure that there are many optimizations that can be made on the rendering pipeline.
  • The aspect ratio of the input video is not maintained, but stretched to fill the final image.
  • Errors in shader setup and other failures need to be explained better, and the framework needs to be more robust when encountering odd situations.

Hopefully, people will find this to be helpful in doing fast image and video processing within their iOS applications.

      


http://daoudev.tistory.com/entry/모바일-기기에서의-동영상-재생#footnote_link_28_8



1. Supported file formats


File formats officially supported on Android

Codec      | Extensions    | Notes
H.263      | 3gp, mp4      |
H.264 AVC  | 3gp, mp4, ts  | Encoding is supported from Honeycomb; the ts container is also supported from Honeycomb
MPEG-4 SP  | 3gp           |
VP8        | webm, mkv     | Supported from 2.3.3; streaming is supported from ICS


File formats officially supported on the iPhone

Codec   | Extensions          | Notes
H.264   | m4v, mp4, mov, 3gp  | 640 x 480, 30 fps, 1.5 Mbps; 320 x 240, 30 fps, 768 Kbps
MPEG-4  | m4v, mp4, mov, 3gp  | 640 x 480, 30 fps, 2.5 Mbps


2. Mobile video playback methods

A video can be played either by downloading it or by streaming it.

Android-based mobile devices support RTSP (Real Time Streaming Protocol) and HLS (HTTP Live Streaming); note that HLS is only supported from Android 3.0 onward. The iPhone (iOS) supports HLS[1], and using any other streaming method is grounds for App Store rejection.

For reference, here are the iPhone review criteria for video playback: streaming a file larger than 10 MB without using HLS is grounds for rejection.
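
As a minimal sketch of HLS playback on iOS (the playlist URL below is a made-up example), the built-in media player can be pointed directly at an .m3u8 playlist:

// Hypothetical HLS playlist URL; MPMoviePlayerController streams HLS natively on iOS.
NSURL *streamURL = [NSURL URLWithString:@"http://example.com/live/playlist.m3u8"];
MPMoviePlayerController *player = [[MPMoviePlayerController alloc] initWithContentURL:streamURL];
player.view.frame = self.view.bounds;   // assumes this runs inside a view controller
[self.view addSubview:player.view];
[player play];                          // keep a reference to the player so it is not deallocated while streaming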

 

- Encoding bitrate conditions

Quality | Video    | Audio
Low     | 96 Kbps  | 64 Kbps
Medium  | 256 Kbps | 64 Kbps
High    | 800 Kbps | 64 Kbps


To play an arbitrary video file on a mobile device, it first has to be converted into a format the mobile platform supports (the open source FFmpeg project provides this conversion).

 

 

3. Introduction to FFmpeg

FFmpeg is an open source multimedia framework that covers almost everything related to multimedia: encoding[2], muxing[3], transcoding, demuxing[4], decoding[5], streaming, playback, and so on. It is cross-platform and is distributed under the GNU Lesser General Public License (LGPL).

FFmpeg also provides ffplay, a simple video playback program.

FFmpeg libraries

  • libavcodec: audio/video encoders and decoders
  • libavformat: muxers/demuxers for audio/video container formats
  • libavutil: assorted utilities needed for FFmpeg development
  • libpostproc: video post-processing
  • libswscale: video image scaling and color-space/pixel-format conversion
  • libavfilter: modifies and inspects audio/video between the decoder and the encoder
  • libswresample: audio resampling

 

Typical conversion procedure

(Figure: FFmpeg video conversion process — source: http://helloworld.naver.com/helloworld/8794)

1) Use libavformat to extract the video and audio codec information.
2) Use libavcodec to decode the video/audio data.
3) This works the same way on PC and mobile; the extracted data is then saved to a file, played, or edited.
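
A minimal sketch of step 1, assuming a 2012-era FFmpeg (exact function names and struct fields vary between FFmpeg versions, so treat this as illustrative rather than authoritative):

#include <stdio.h>
#include <libavformat/avformat.h>

// Open a media file with libavformat and report the codec of each stream.
int print_stream_info(const char *path)
{
    AVFormatContext *fmt = NULL;

    av_register_all();                                  // register demuxers/decoders once per process
    if (avformat_open_input(&fmt, path, NULL, NULL) != 0)
        return -1;                                      // could not open the container
    if (avformat_find_stream_info(fmt, NULL) < 0) {
        avformat_close_input(&fmt);
        return -1;                                      // could not read the stream headers
    }
    for (unsigned int i = 0; i < fmt->nb_streams; i++) {
        AVCodecContext *cc = fmt->streams[i]->codec;    // per-stream codec parameters (old API)
        printf("stream %u: type=%d, codec_id=%d\n", i, cc->codec_type, cc->codec_id);
    }
    avformat_close_input(&fmt);
    return 0;
}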

 

 

How to implement a video player

(Figure: video player structure — source: http://helloworld.naver.com/helloworld/8794)

1) The information obtained during conversion is stored in a queue.
2) The queued data is continuously rendered through SDL[6] to play it back.
 - Video: a Refresher requests and handles frame updates, and a video Renderer draws each frame to the screen.
 - Audio: an audio Renderer plays back the audio data.
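
As a rough sketch of the queue in step 1, assuming a pthread-based hand-off between the decoding thread (producer) and the rendering thread (consumer); the FrameQueue name and its fields are made up for illustration:

#include <pthread.h>

#define QUEUE_CAPACITY 16

// Hypothetical fixed-size queue of decoded frames shared between decoder and renderer.
typedef struct {
    void           *frames[QUEUE_CAPACITY];   // decoded frames (e.g. AVFrame pointers)
    int             head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t  not_empty;
} FrameQueue;

static void frame_queue_init(FrameQueue *q)
{
    q->head = q->tail = q->count = 0;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_empty, NULL);
}

static void frame_queue_push(FrameQueue *q, void *frame)   // called by the decoder
{
    pthread_mutex_lock(&q->lock);
    if (q->count < QUEUE_CAPACITY) {                       // drop frames if the renderer falls behind
        q->frames[q->tail] = frame;
        q->tail = (q->tail + 1) % QUEUE_CAPACITY;
        q->count++;
        pthread_cond_signal(&q->not_empty);
    }
    pthread_mutex_unlock(&q->lock);
}

static void *frame_queue_pop(FrameQueue *q)                // called by the renderer
{
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)
        pthread_cond_wait(&q->not_empty, &q->lock);
    void *frame = q->frames[q->head];
    q->head = (q->head + 1) % QUEUE_CAPACITY;
    q->count--;
    pthread_mutex_unlock(&q->lock);
    return frame;
}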

 

 

4. Implementation scenarios


A. Converting on the server


A-1) Convert when the content provider registers a file

- When a file is uploaded, the server automatically converts it and stores the result.
- The uploaded file can be converted and stored using the conversion features provided by FFmpeg.
- This is the approach mainly used by video streaming services such as YouTube, Naver, and Nate.


A-2) Convert when the content provider requests streaming

- When a streaming request arrives, the server converts the file in real time while streaming it.
- This is how Naver N Drive delivers video to mobile devices.
- Most server tools with RTSP/HLS support work this way.


A-3) The content provider converts to an agreed format before uploading

- The content provider converts the file to an agreed format and uploads it to the server.
- There are various conversion tools[7] available to individual users.
- This approach is mainly used on personal servers.


B. Converting on the mobile device


B-1) Convert and save as a file the built-in player can play

- The received video data is converted to a file, saved, and then played.
- FFmpeg's built-in features can convert and save to a specific format.
- Converting an 800 MB video on an ordinary dual-core PC takes 12 minutes.
- On mobile devices, which are slower still, this is expected to cause serious battery and performance problems.


B-2) Convert and hand the data to the built-in player

- The converted data is passed to the built-in player through a module that can access it.
- No plug-in interface for the built-in mobile players is officially supported.
- This would require separate agreements with each manufacturer and a support plan for the many different devices.


B-3) Play through a player that includes the conversion feature

- The received video data is played through a player that has a conversion module built in.
- This is the most common approach; there are various players built on FFmpeg.
- NHN's N Drive built-in player (Android, iOS) and AVPlayer (iOS) have published their source code.[8]

 

  1. HTTP Live Streaming: a protocol Apple introduced with iOS 3.0 in 2009. The stream is packaged in MPEG-2 TS and split into time-based segments. Adobe added official support in Flash Media Server 4.0, Microsoft in IIS Media Server 4.0, and Android supports it from 3.0. [back to text]
  2. Encoding: converting video data with a particular video codec. [back to text]
  3. Muxing: packing the converted data into a digital container format file. [back to text]
  4. Demuxing: the opposite of muxing — extracting the bitstream data from a video file. [back to text]
  5. Decoding: extracting video data from data encoded with a particular codec. [back to text]
  6. SDL (Simple DirectMedia Layer): a cross-platform multimedia library that abstracts the video, audio, and user-input layers so programs can run on Linux, Microsoft Windows, Mac OS X, and other operating systems. [back to text]
  7. Such programs are called encoders; they convert a video in one format into a video file in another format. The most widely used one in Korea is Daum Pot Encoder, which is built on FFmpeg and may be used freely by individuals, companies, public institutions, and schools. [back to text]
  8. Android source: http://helloworld.naver.com/helloworld/8794 — the development process and a sample project for the player built into N Drive have been published. iOS source: http://luuvish.org/206 — the source of AVPlayer, ranked 2nd in the Entertainment category (1st among players), has been published. [back to text]
------------------------------------------------------------------------------

Codecs playable in each platform's default built-in browser

Codec               | iPhone | iPad | Android
H.263 [i]           | X      | O    | O
H.264 [ii]          | O      | O    | O
MPEG-4 Part 2 [iii] | O      | O    | O
VP8 [iv]            | X      | X    | O


  • iOS browser behavior - on the iPad with iOS 4.2 or later, the built-in browser has an embedded video controller; on the iPhone, a separate video player is launched for playback.
  • Android browser behavior - depending on the HTML tag used, the video is either played inside the browser or handed off to the player app.


[i] H.263: developed from the H.261 codec. Because it achieves the same quality at half the bandwidth, it has replaced H.261 for general use and is used with the Real-time Transport Protocol (RTP) for video streaming.

[ii] H.264 / MPEG-4 AVC: a digital video codec standard with very high compression, also known as MPEG-4 Part 10 or MPEG-4/AVC (Advanced Video Coding).

[iii] MPEG-4 Part 2 / SP: MPEG-4 Part 2, also called MPEG-4 Visual or MPEG-4 ASP, is a digital video codec created by ISO/IEC's Moving Picture Experts Group (MPEG). DivX and Xvid are implementations of this codec.

[iv] VP8: a video codec from On2 Technologies, which Google acquired. It was released as open source under a modified BSD-style license to address patent concerns.

 

 

 

How to play a video in a mobile browser


Playing inside the browser

<video id="player1" width="600" height="360" controls>
    <source src="./h264.mp4" />
</video>
 
 
<!-- Script so that playback starts on tap -->
<script language="javascript">
    var player1 = document.getElementById('player1');
    player1.addEventListener('click', function() {
        player1.play();
    }, false);
</script>


Playing by invoking the media player


 

Test results

1) iOS test

Platform | Test    | H.264 (play/invoke) | H.263 (play/invoke) | MPEG-4 Part 2 (play/invoke) | Notes
iPhone   | YouTube | O                   | X                   | O                           | Plays after the Quick Player is launched
iPad     |         | O                   | O                   | O                           | Plays inside Safari

2) Android test

Browser | Test    | H.264 (play/invoke) | H.263 (play/invoke) | MPEG-4 Part 2 (play/invoke) | Notes
Stock   | YouTube | O / O               | O / O               | O / O                       | Download can be chosen when the player is invoked
Chrome  |         | O / O               | O / O               | O / O                       | -
Firefox |         | X                   | X                   | X                           | Only download is offered when the player is invoked
Dolphin |         | O / O               | O / O               | O / O                       | Download can be chosen when the player is invoked
Boat    |         | X / O               | O / O               | O / O                       |
Maxthon |         | O / O               | O / O               | O / O                       |
Xscope  |         | O / O               | O / O               | O / O                       |



      