Adobe keeps improving ActionScript and Stage3D, and here you can see just how fast it has become.
Electronic Arts has chosen Stage3D and the Away3D framework for its new FIFA promotion, which is really good news.
Life Size Messi
Life Size Messi features a true-to-life Messi avatar – LEO – who is aware of and responds to users. LEO knows you’re there – he looks at you, he follows your movements, he responds to your touch. He is like a real man just beyond your screen – intelligent, responsive, alive.
Created by Wieden+Kennedy Amsterdam, Resn and Assembly, Life Size Messi is a promo for EA Sports' upcoming FIFA 14 release, and is intended as a demonstration of the realism now achievable in-game with the players on the pitch.
Visitors to the site can click on various active areas to find out more about the in-game effects in FIFA 14, or alternatively, just while away the time poking LEO in the eye. Every part of the avatar is animated and responds to interaction, right down to rippling fabric and ticklish elbows. And if you leave your cursor in too tempting a position, he may even kick it right off the screen.
While assembling the various components of the experience, several technical hurdles had to be overcome. Jonathan Hawke, Executive Producer on the project, explains:
Getting high quality 3D models and animations into Flash required a custom 3D pipeline, and Away3D was the best option as a starting point for development. We used Maya for 3D modelling, rigging and animation, exporting the data as MD5 files, which gave us a straightforward route to the features we required.
Hundreds of bones were used in the model to preserve high-level information around the fine-detail animations. Expressions such as “squint” and “smile” were produced from a complex control rig in Maya and then exported as “rail” animations representing the extreme of an action. Two to six rail animations are then blended in Flash in real time to create a multi-dimensional motion control.
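The rail-blending idea can be sketched in a few lines. This is a hypothetical illustration, not the project's ActionScript code: each rail stores the extreme pose of one expression as per-bone offsets from a neutral pose, and weighted blending of several rails produces the multi-dimensional control described above. All names and the simplified per-bone rotation representation are assumptions.

```python
# Hypothetical sketch of "rail" animation blending. Each rail holds the
# extreme pose of one expression (e.g. "squint", "smile") as per-bone
# rotation offsets from a neutral pose; blending N rails with weights in
# [0, 1] yields a multi-dimensional expression control.
from typing import Dict

Pose = Dict[str, float]  # bone name -> rotation offset (simplified to one angle)

def blend_rails(neutral: Pose, rails: Dict[str, Pose],
                weights: Dict[str, float]) -> Pose:
    """Linearly add weighted rail offsets onto the neutral pose."""
    result = dict(neutral)
    for name, rail in rails.items():
        w = max(0.0, min(1.0, weights.get(name, 0.0)))  # clamp weight to [0, 1]
        for bone, offset in rail.items():
            result[bone] = result.get(bone, 0.0) + w * offset
    return result

# Example: 50% squint plus a full smile on a tiny two-bone face rig.
neutral = {"brow": 0.0, "mouth": 0.0}
rails = {"squint": {"brow": -10.0}, "smile": {"mouth": 25.0}}
pose = blend_rails(neutral, rails, {"squint": 0.5, "smile": 1.0})
# pose["brow"] == -5.0, pose["mouth"] == 25.0
```

A production rig would blend quaternions per bone rather than single angles, but the weighting scheme is the same.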
Textures were created from hundreds of high resolution photos carefully composited together, with lighting baked into the textures to allow the use of high quality diffuse parameters without sacrificing performance. Dynamic specular lighting is then used in Away3D to create highlight effects that move across the surface of LEO in real time.
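The split described above – diffuse lighting baked into the texture, with only the moving specular highlight computed per frame – can be illustrated with a standard Blinn-Phong specular term. This is a generic sketch of that lighting model, not code from the project; in practice the equivalent math would run in an AGAL shader on Stage3D.

```python
# Sketch of a dynamic specular term (Blinn-Phong style). Diffuse lighting
# is assumed baked into the texture, so only the view- and light-dependent
# highlight needs evaluating per frame. Vectors are plain 3-tuples.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def blinn_phong_specular(normal, light_dir, view_dir, shininess=32.0):
    """Return the scalar specular intensity for one surface point."""
    n = normalize(normal)
    # Half vector between the light and view directions.
    l, v = normalize(light_dir), normalize(view_dir)
    h = normalize(tuple(a + b for a, b in zip(l, v)))
    n_dot_h = max(0.0, sum(a * b for a, b in zip(n, h)))
    return n_dot_h ** shininess

# Highlight peaks where the normal bisects the light and view directions.
print(blinn_phong_specular((0, 0, 1), (0, 0, 1), (0, 0, 1)))  # 1.0
```

Moving the light or camera shifts the half vector, which is what makes the highlight sweep across the surface of the model in real time.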
The AI component of LEO was built using sets of component animations grouped into behaviours. Combinations of components are based on algorithms controlling the frequency of occurrence and the amplitude of motion. Animations are masked to the regions we want to see and blended using priority levels so that LEO is never off-balance or out of character. The triggering of behaviours is a mix of user interaction and AI. The AI component knows what you are doing and how often you are doing it, and crucially also remembers what you did!
The technical stats for the final result are impressive – the model contains over 75 megapixels of textures, 150,000 polygons, 750 independently animated bones and 90 megabytes of data. It is a truly astonishing creation, and the first online experience to ‘mimic and reflect human actions’ in response to the user.