Interactive Video Part 2: Developing the Player with the new AS3 API

Written by Gilles on April 15, 2011

This step consists of applying real-time compositing to a video using the new Flash Player 10 graphics API.

First, just as in Adobe After Effects you would use a corner pin effect to distort a picture from motion-tracking data, we'll develop a class in Adobe Flash which will do the same job from the XML generated in the previous tutorial.
Secondly, we will apply a mask video (generated in the previous tutorial) and a front video layer in Screen blend mode to get a realistic result.

This tutorial is the second part of a series on interactive video made with Adobe Flash and Adobe After Effects. Find the other parts here:

  1. Preparing the Videos with Adobe After Effects
  2. Developing the Player with the new AS3 API
  3. Getting the User Content from Facebook

4 corner pin rendering

Flash Player 10's new API provides a set of graphics processing functions that use the GPU (Graphics Processing Unit). In our case we will mostly use the vertex distortion functions to distort a four-corner asset, which is split into two triangles.

Vertex distortion works in 3 steps:

  1. Defining where the points are on your texture (UV Mapping)
  2. Defining triangles and assigning 3 points to each of them
  3. Setting the position for each point inside your sprite (Distortion)

You define the UV map points of your texture in a vector; each position value is a fraction of your texture size (between 0 and 1).
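For example, the four UV points of a quad can be declared like this (the variable name is just illustrative):

```actionscript
// UV coordinates for the 4 corners of the texture, as fractions of its size
var uvtData:Vector.<Number> = new Vector.<Number>();
uvtData.push(0, 0); // top-left
uvtData.push(1, 0); // top-right
uvtData.push(1, 1); // bottom-right
uvtData.push(0, 1); // bottom-left
```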

Then you define the two triangles:
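Each triangle is a triple of indices into the list of UV points, for instance:

```actionscript
// Two triangles covering the quad, referencing the 4 points above by index
var indices:Vector.<int> = new Vector.<int>();
indices.push(0, 1, 3); // first triangle: top-left, top-right, bottom-left
indices.push(1, 2, 3); // second triangle: top-right, bottom-right, bottom-left
```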

The texture mapping is now ready and you can draw the shape on the stage. To distort the image, define the position of the 4 points on the stage and draw it using the drawTriangles() function.
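A minimal sketch, assuming the uvtData and indices vectors from the steps above and a BitmapData called texture (the corner positions here are just placeholders; in practice they come from the tracking XML):

```actionscript
// Position of the 4 corners on the stage (the distorted quad)
var vertices:Vector.<Number> = new Vector.<Number>();
vertices.push(20, 10);   // top-left
vertices.push(310, 35);  // top-right
vertices.push(290, 220); // bottom-right
vertices.push(5, 240);   // bottom-left

var pinSprite:Sprite = new Sprite();
pinSprite.graphics.beginBitmapFill(texture, null, false, true);
pinSprite.graphics.drawTriangles(vertices, indices, uvtData);
pinSprite.graphics.endFill();
addChild(pinSprite);
```

To animate the distortion, you only need to update the vertices vector and redraw on each frame.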


Video layer compositing

This step consists of compositing the 3 videos (created in the first tutorial) with the 4 corner pin shape, in real time.

In order to get good video synchronisation, we found it more practical to use timeline-embedded video than external FLV files.

We'll create a new FLA which will contain the 3 videos as movie clips.

Create the first movie clip and call it backvideo, click on Export for ActionScript, then call the class sequences.MySequenceBackVideo. Inside your movie clip, import the backvideo FLV (File > Import > Import to Stage).
In the "Import Video" dialog, select "Embed FLV in SWF and play in timeline", click Continue twice, then Finish.


On your movie clip's timeline, move the video keyframe to frame 2. Create an actions layer on frame 1, where you have to declare 2 functions:
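The original listing is not reproduced here; a minimal sketch of the frame 1 script, assuming two empty delegate functions that the main SWF will replace once the asset is loaded:

```actionscript
// Frame 1: stop and wait until the player starts the sequence.
stop();

// Empty delegates; the main SWF overrides these after loading the asset.
var cuePointEntered:Function = function(sequenceName:String):void {};
var videoComplete:Function = function():void {};
```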

On frame 2, add the code:
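Assuming the cuePointEntered delegate declared on frame 1, this is a single call (the sequence name must match the XML):

```actionscript
cuePointEntered("my_sequence");
```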

This tells the player that we started a tracked sequence called my_sequence (the sequence name we defined in the XML file).

On the last frame add the code:
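A sketch of the last-frame script, assuming the videoComplete delegate declared on frame 1:

```actionscript
stop();
videoComplete(); // notify the player that the sequence video has finished
```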


Create the frontvideo movie clip, click on Export for ActionScript, then call the class sequences.MySequenceFrontVideo. Import the frontvideo FLV the same way you did before and move the video to frame 2.

Create the maskvideo movie clip, click on Export for ActionScript, then call the class sequences.MySequenceMaskVideo. Import the maskvideo FLV the same way you did before and move the video to frame 2.

Publish a SWF file and rename it to mySequence.swf.

Create a new FLA and call it main; this one will load the assets and display the final result.

Get the assets

You have to load the sequence SWF and get the assets from the loader context. In our case we assume the assets are hosted on a different server, but you can also host them on the same server as the main SWF.
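A minimal sketch of the loading code; the URL is an assumption (point it at wherever you host mySequence.swf), and loading from another domain may additionally require a cross-domain policy file:

```actionscript
import flash.display.Loader;
import flash.display.LoaderInfo;
import flash.display.MovieClip;
import flash.events.Event;
import flash.net.URLRequest;
import flash.system.ApplicationDomain;

var backVideo:MovieClip;
var frontVideo:MovieClip;
var maskVideo:MovieClip;

var loader:Loader = new Loader();
loader.contentLoaderInfo.addEventListener(Event.COMPLETE, initAssets);
loader.load(new URLRequest("http://assets.example.com/mySequence.swf"));

function initAssets(event:Event):void {
    // Fetch the exported classes from the loaded SWF's application domain
    var domain:ApplicationDomain = LoaderInfo(event.target).applicationDomain;
    var BackVideoClass:Class  = domain.getDefinition("sequences.MySequenceBackVideo")  as Class;
    var FrontVideoClass:Class = domain.getDefinition("sequences.MySequenceFrontVideo") as Class;
    var MaskVideoClass:Class  = domain.getDefinition("sequences.MySequenceMaskVideo")  as Class;
    backVideo  = new BackVideoClass();
    frontVideo = new FrontVideoClass();
    maskVideo  = new MaskVideoClass();
}
```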

Load tracking data

The tracking data is stored in an XML file (see the previous tutorial), so we need to load this XML and parse it to get the list of point positions for each tracking sequence.
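A sketch of the loading and parsing step. The node and attribute names here are assumptions; adapt them to the XML you exported in the first tutorial (one sequence node per tracked sequence, one frame node per video frame, four corner points per frame):

```actionscript
import flash.events.Event;
import flash.geom.Point;
import flash.net.URLLoader;
import flash.net.URLRequest;

var trackingData:Object = {}; // sequence name -> array of per-frame corner points

var xmlLoader:URLLoader = new URLLoader();
xmlLoader.addEventListener(Event.COMPLETE, xmlCompleteHandler);
xmlLoader.load(new URLRequest("tracking.xml"));

function xmlCompleteHandler(event:Event):void {
    var xml:XML = new XML(event.target.data);
    for each (var sequence:XML in xml.sequence) {
        var frames:Array = [];
        for each (var frame:XML in sequence.frame) {
            var corners:Array = [];
            for each (var point:XML in frame.point) {
                corners.push(new Point(Number(point.@x), Number(point.@y)));
            }
            frames.push(corners); // 4 corners per frame
        }
        trackingData[String(sequence.@name)] = frames; // e.g. "my_sequence"
    }
}
```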

Draw assets

In order to get the same compositing result as in After Effects, we'll manipulate the bitmaps of the different videos using blend modes and the channel copy function. We'll create BitmapData objects to store the mask, the shape and the final result.
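For instance (the dimensions are assumptions; use your own video size):

```actionscript
import flash.display.Bitmap;
import flash.display.BitmapData;

var videoWidth:int = 640;
var videoHeight:int = 360;

// One buffer per layer, plus one for the composited result
var maskBitmap:BitmapData   = new BitmapData(videoWidth, videoHeight, false, 0x000000);
var shapeBitmap:BitmapData  = new BitmapData(videoWidth, videoHeight, true, 0x00000000);
var resultBitmap:BitmapData = new BitmapData(videoWidth, videoHeight, false, 0x000000);

// Only the composited result is displayed on the stage
addChild(new Bitmap(resultBitmap));
```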

Drawing process
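The original listing is not shown here; a sketch of one compositing pass, assuming the BitmapData buffers described above, the three video movie clips, and a pinSprite drawn with drawTriangles():

```actionscript
import flash.display.BitmapDataChannel;
import flash.display.BlendMode;
import flash.geom.Point;

// 1. Draw the background video into the result buffer
resultBitmap.draw(backVideo);

// 2. Draw the distorted user picture into the shape buffer
shapeBitmap.fillRect(shapeBitmap.rect, 0x00000000);
shapeBitmap.draw(pinSprite);

// 3. Copy the mask video's red channel into the shape's alpha channel
maskBitmap.draw(maskVideo);
shapeBitmap.copyChannel(maskBitmap, maskBitmap.rect, new Point(0, 0),
                        BitmapDataChannel.RED, BitmapDataChannel.ALPHA);

// 4. Composite the masked shape over the background
resultBitmap.draw(shapeBitmap);

// 5. Add the front video layer in Screen blend mode
resultBitmap.draw(frontVideo, null, null, BlendMode.SCREEN);
```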


Multi tracking sequence manager

In a real case, a tracking sequence will not span the whole video, and more than one sequence can run at the same time. To handle that, we give each sequence a name (my_sequence in the XML example) and we use an object to store all the tracking sequences that are in progress. We use cuePointEntered("my_sequence") on the backVideo timeline as a trigger to know when a sequence starts; this function is delegated to cuePointHandler in the initAssets function. A sequence stops when the animation frame number reaches the length of its points list.

Let's modify the initCompositing function:
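A sketch of the assumed shape of the change: an object keeps every sequence in progress, and the function starts the drawing loop and the videos.

```actionscript
import flash.events.Event;

var assetsToTrack:Object = {}; // tracking sequences currently in progress

function initCompositing():void {
    addEventListener(Event.ENTER_FRAME, refresh); // start the drawing loop
    backVideo.gotoAndPlay(2);
    frontVideo.gotoAndPlay(2);
    maskVideo.gotoAndPlay(2);
}
```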

Create the cuePointHandler function:
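A minimal version, assuming the trackingData object parsed from the XML earlier:

```actionscript
// Called from the backVideo timeline when a tracked sequence starts
function cuePointHandler(sequenceName:String):void {
    assetsToTrack[sequenceName] = {
        points: trackingData[sequenceName], // per-frame corner points
        frame: 0                            // current animation frame
    };
}
```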

Update the drawVideo function by removing the pointData argument and by looping over assetsToTrack, rather than only drawing the "my_sequence" tracking sequence.
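A sketch of the updated loop; a sequence is removed once its frame counter reaches the length of its points list:

```actionscript
function drawVideo():void {
    for (var sequenceName:String in assetsToTrack) {
        var tracker:Object = assetsToTrack[sequenceName];
        var corners:Array = tracker.points[tracker.frame];
        // ...draw the 4-corner pin shape with these corners (see above)...
        tracker.frame++;
        if (tracker.frame >= tracker.points.length) {
            delete assetsToTrack[sequenceName]; // the sequence is finished
        }
    }
}
```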

Video Start And Stop

We start listening to the ENTER_FRAME event with the refresh function in initCompositing, and stop listening in videoCompleteHandler.
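Assuming the function names above, the two handlers look like this:

```actionscript
function refresh(event:Event):void {
    drawVideo(); // composite one frame
}

function videoCompleteHandler():void {
    removeEventListener(Event.ENTER_FRAME, refresh); // stop the drawing loop
}
```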

Result class

Here is the final result you should get. It is not perfect from an object-oriented perspective: we didn't split the functionality into separate classes, to keep the concept clearer. It's up to you to adapt it to your project.

You can download the final project file here and see a demo here.
