What is a rig?
A program
A way of converting inputs – transforms, scalars, etc. – into outputs: other transforms or scalars in the case of animation.
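As a loose illustration of “a rig is a program”, here is a conceptual sketch of a rig as a function from inputs to outputs. All types here are hypothetical stand-ins, not Unreal API.

```cpp
// Conceptual sketch only: a rig treated as a program mapping inputs to outputs.
#include <vector>

struct Transform { float Translation[3]; float Rotation[4]; float Scale[3]; };

struct RigInputs
{
    std::vector<Transform> Transforms; // e.g. control or source bone transforms
    std::vector<float>     Scalars;    // e.g. sliders or animation curves
};

struct RigOutputs
{
    std::vector<Transform> Transforms; // driven bone transforms
    std::vector<float>     Scalars;    // driven curves (morph targets etc.)
};

// Evaluating the rig is just running the program once per frame.
RigOutputs EvaluateRig(const RigInputs& Inputs)
{
    RigOutputs Outputs;
    // Constraint, IK and expression logic would live here; the identity
    // pass-through below is purely for illustration.
    Outputs.Transforms = Inputs.Transforms;
    Outputs.Scalars    = Inputs.Scalars;
    return Outputs;
}
```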
Keeping the workflow in engine removes the import step
Fixes require going back to the DCC tool, plus a reimport
You always see your changes on the in-engine mesh
You can create and modify directly in the editor
We don’t intend this to be a DCC tool replacement; we only want to concentrate on improving in-editor workflows.
Improving the in-editor workflow
It also expands what can be done at runtime
Need some animations in a pinch? Not got Maya/Max/MotionBuilder?
Whether at an event like a game jam or for an in-house prototype – no more characters standing around like mannequins!
On-set tweaks to mocap, fast feedback turnaround on shots
Virtual production: checking an actor’s motion capture in CG in real time
Last year’s Senua demo
Tweak on the spot, with no need to export to a DCC tool and write the changes back
Between different character proportions
From motion capture sources
Sidestep the ‘skeleton restriction’
Retarget via a rig that is more flexible than a skeleton
Correctives
Runtime rigging e.g. pistons & other mechanisms
To use it outside Sequencer – in Animation Blueprints, for example – convert it to an animation sequence
Needed to get this working for use in games outside of sequencer
Performance
Integration with existing animation blueprints
Animation node is built, but not fully functional in Animation Blueprints
Not there yet, but started development on the tools
The old APEX integration has some drawbacks as we expanded our character technologies
While it’s a sophisticated SDK that is a ‘one-stop shop’ for clothing simulation, handling both the simulation and the render data, we found some of that functionality going unused in UE4.
For example, we didn’t use the rendering side of APEX, instead extracting the simulation results to render with our own shader. Later we also stopped importing the APEX render data from the on-disk asset, preferring to extract the simulation mesh, embed the already-created Unreal render data alongside it, and drive that instead.
Another drawback was that content creation was only possible in external tools, either through the APEX DCC plugins or with the APEX Clothing Tool provided by Nvidia. And while this pipeline certainly works, there are differences between how clothing simulates in these external tools and in UE4. Previewing changes in engine required a new asset export and reimport into the engine. There was also an opportunity to streamline this pipeline, reducing the time it takes to get clothing up and running on a character in UE4.
Our solution to this issue was to work closely with Nvidia to integrate a low level clothing simulation (NvCloth) into Unreal Engine.
This affords us much more control of the framework outside the core solver – with the new NvCloth integration solely responsible for simulating the underlying clothing mesh. Unreal can then make all of the decisions regarding how and when to simulate, and how that simulation maps onto and affects our graphics meshes.
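As a rough illustration of that last point, one common way a render mesh is driven by a coarser simulation mesh is to bind each render vertex to a simulation triangle via barycentric coordinates plus a normal offset, then re-evaluate the binding after every simulation step. This sketch uses illustrative types, not engine code.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Sub(Vec3 a, Vec3 b)  { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 Add(Vec3 a, Vec3 b)  { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 Mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static Vec3 Cross(Vec3 a, Vec3 b)
{
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 Normalize(Vec3 v)
{
    const float Len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return Mul(v, 1.0f / Len);
}

struct RenderVertexBinding
{
    int   SimTri[3];   // indices of the bound simulation triangle's vertices
    float Bary[3];     // barycentric weights within that triangle
    float NormalDist;  // signed offset along the triangle normal
};

Vec3 SkinRenderVertex(const RenderVertexBinding& B, const Vec3* SimPositions)
{
    const Vec3 P0 = SimPositions[B.SimTri[0]];
    const Vec3 P1 = SimPositions[B.SimTri[1]];
    const Vec3 P2 = SimPositions[B.SimTri[2]];

    // Point on the deformed simulation triangle.
    const Vec3 OnTri = Add(Add(Mul(P0, B.Bary[0]), Mul(P1, B.Bary[1])),
                           Mul(P2, B.Bary[2]));

    // Re-apply the stored offset along the deformed triangle's normal.
    const Vec3 N = Normalize(Cross(Sub(P1, P0), Sub(P2, P0)));
    return Add(OnTri, Mul(N, B.NormalDist));
}
```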
We can also begin to expand the feature set inside the engine as we have more open access to the data required to run a simulation, which makes it a much easier task to develop tools to help all developers large and small to build more dynamic characters.
Working with Marijn and Viktor at Nvidia, NvCloth was a much smoother integration than the previous system. Having direct access to the simulation and its data is a far easier process than going through the previous parameterized data interface, which had caused us issues in the past. We now know in Unreal what all the data is, where it is and how it is manipulated, which should help us greatly when expanding this technology in the future.
To this end we decided to build our own authoring tools directly in the UE4 editor, and we had a number of goals when we set out to build this toolset:
One strong desire was to speed up iteration on clothing. If a painted parameter (max distance, backstop radius/distance) needed a change, that meant editing in an external DCC and then exporting and reimporting to the engine. We wanted to remove that reimport loop to achieve much faster iteration.
We also wanted the tool to be somewhat familiar, so someone used to building clothing in Maya for example wouldn’t find themselves completely lost.
We also want the tools to be extensible so we can easily grow the feature set in future. To this end the mesh painting systems have had a heavy refactor, making them more adaptable and making the addition of new tools much simpler.
This is a basic diagram showing the high level view of the clothing production pipeline as it currently stands.
The actual character mesh and the APEX clothing assets are created using external DCC tools, and then exported to their respective formats. These assets are imported and combined in UE4 and then can finally be previewed. At this stage if we need parameter changes outside the material config exposed in UE4, we need to go back to our DCC to make the changes and reimport the assets. This is also true for any changes to the collision objects. If it’s observed that some clothing intersects the mesh in an undesirable way then we also need to go back to the DCC to add collisions and re-export back to UE4.
With the advent of the new tools we have developed, this is the new pipeline.
There is no longer a loopback to the DCC when we require parameter changes. We are now able to fully configure a clothing simulation from within the skeletal mesh editor in UE4. This includes the previously available material setup, with some new and expanded options, alongside the new painting toolset, which allows the per-vertex parameters to be painted directly in the editor without relying on external tools.
Previewing a change made to a clothing simulation is now just a keystroke away, giving immediate feedback for the content creator.
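To make the painted values concrete, here is a minimal sketch of how a “max distance” value is typically enforced each solver iteration: the simulated vertex is clamped to a sphere of the painted radius around its skinned position. Types and names are illustrative, not engine code.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

void ApplyMaxDistance(Vec3& SimPos, const Vec3& SkinnedPos, float MaxDistance)
{
    const float dx = SimPos.x - SkinnedPos.x;
    const float dy = SimPos.y - SkinnedPos.y;
    const float dz = SimPos.z - SkinnedPos.z;
    const float Dist = std::sqrt(dx * dx + dy * dy + dz * dz);

    // A vertex painted with MaxDistance == 0 is fully pinned to the skin.
    if (Dist > MaxDistance && Dist > 1e-6f)
    {
        const float Scale = MaxDistance / Dist;
        SimPos = { SkinnedPos.x + dx * Scale,
                   SkinnedPos.y + dy * Scale,
                   SkinnedPos.z + dz * Scale };
    }
}
```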
In the classic approach to physics we take all of our rigid bodies and simulate them together. Even for bodies that don't really need to interact. So for example, if you had a bunch of characters with simulated flappy bits, you'd have to first animate each character, update each body in the physics engine, simulate, and then finally push the results back to each character. This is slow because it creates a single sync point in the frame where we have to wait.
In immediate mode each character has its own super lightweight simulation. The new PhysX API allows us to call each low level function directly. This eliminates a lot of overhead, and gives us finer control of when and how to execute the simulation.
The simulation runs as part of the animation task which means there's no expensive sync point. If the character is not on screen, we simply don't simulate them. The simulation can easily scale with LOD controlled directly by the artist. Bodies that don't need to know about each other don't pay any cost.
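Here is a conceptual sketch of that per-character structure. These are hypothetical stand-in types, not the real PhysX immediate mode API; with PhysX the equivalent work is done by calling the low level solver functions directly, one character at a time.

```cpp
#include <vector>

struct SimBody  { /* pose, velocity, mass ... */ };
struct SimJoint { int BodyA; int BodyB; /* limits, drives ... */ };

class CharacterSim
{
public:
    // Integrates and solves only this character's bodies and joints, so it
    // can run entirely inside that character's animation task.
    void Step(float Dt) { (void)Dt; /* integrate + solve this character */ }

    std::vector<SimBody>  Bodies;
    std::vector<SimJoint> Joints;
};

// Per-character animation task (tasks for different characters can run in
// parallel; none of them waits on a shared physics scene):
//   1. evaluate the animation pose
//   2. feed kinematic targets into the character's own CharacterSim
//   3. CharacterSim::Step(Dt)
//   4. write the simulated transforms back into the pose
```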
AnimDynamics was our first step toward this. It gave users the ability to simulate individual bones. This was significantly faster than the monolithic approach, but it was missing key features like collision response, motors, friction, etc.
Because Immediate Mode uses the PhysX solver directly, we can now simulate physics assets directly from the anim blueprint and get the same behavior as our normal simulation.
Paragon is a 60fps game with dozens of heroes. At any point in time there can be several unique characters on screen, each with its own unique simulation requirements. In the classic physics simulation bodies get batched together regardless of which hero they belong to – this poses a problem for measuring the performance of each hero.
In immediate mode we can simply wrap the entire animation task with a timer. This gives us a completely accurate cost for each character on screen which allows artists to get the highest quality out of each individual hero.
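A minimal sketch of that measurement, assuming UE4’s FScopedDurationTimer (header location varies by engine version); the task function and context type here are hypothetical.

```cpp
#include "ProfilingDebugging/ScopedTimers.h"

struct FCharacterAnimContext
{
    double LastAnimTaskMs = 0.0;
    // ... pose buffers, this character's simulation state, etc.
};

void RunCharacterAnimationTask(FCharacterAnimContext& Context)
{
    double Seconds = 0.0;
    {
        // Everything in this scope is attributed to this one character:
        // pose evaluation plus its own immediate mode simulation.
        FScopedDurationTimer Timer(Seconds);
        // EvaluatePose(Context);
        // StepImmediateSimulation(Context);
    }
    Context.LastAnimTaskMs = Seconds * 1000.0; // per-character cost for artists
}
```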
By sorting the bodies by level of detail we can easily skip bodies that do not need to simulate. This makes LOD swapping essentially free.
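One possible shape for that sort-and-skip scheme (illustrative names): keep the bodies sorted by the coarsest LOD at which they still simulate, then a LOD swap is just finding a new cutoff index.

```cpp
#include <algorithm>
#include <vector>

struct BodyEntry
{
    int LODThreshold; // simulate only while CurrentLOD <= this value
    /* ... body data ... */
};

// SortedBodies is sorted by LODThreshold descending, so the bodies that
// survive the coarsest LODs come first; the cutoff is one binary search.
int ComputeActiveBodyCount(const std::vector<BodyEntry>& SortedBodies, int CurrentLOD)
{
    auto It = std::partition_point(
        SortedBodies.begin(), SortedBodies.end(),
        [CurrentLOD](const BodyEntry& B) { return B.LODThreshold >= CurrentLOD; });
    return int(It - SortedBodies.begin());
}

// Each frame only the first ComputeActiveBodyCount(...) bodies are stepped,
// so swapping LOD costs a search rather than rebuilding any scene state.
```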
Immediate Mode gives us a 2x speedup over AnimDynamics, and even more for large numbers of ragdolls.
Robo Recall is our new physics based shooter for the Oculus Rift.
The team was very small, so we wanted to add engine features that would give them the best bang for the buck.
We wanted the player interaction with the enemy to be completely seamless. This led to the new Physical Animation component which converts a regular animation into physical motors.
Each body on the character is fully simulated, which allows the player to freely interact with the animation
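A rough setup sketch using UPhysicalAnimationComponent; the profile name "Hit" and the bone name "pelvis" are placeholders, and exact calls may differ between engine versions.

```cpp
#include "PhysicsEngine/PhysicalAnimationComponent.h"
#include "Components/SkeletalMeshComponent.h"

void SetupPhysicalAnimation(UPhysicalAnimationComponent* PhysAnim,
                            USkeletalMeshComponent* Mesh)
{
    // Tell the component which mesh it is driving.
    PhysAnim->SetSkeletalMeshComponent(Mesh);

    // Apply the motor strengths authored in the physics asset profile to
    // the pelvis and everything below it.
    PhysAnim->ApplyPhysicalAnimationProfileBelow(
        TEXT("pelvis"), TEXT("Hit"), /*bIncludeSelf=*/true);

    // Simulate those bodies so the motors have something to drive; the
    // incoming animation pose becomes the motor target.
    Mesh->SetAllBodiesBelowSimulatePhysics(TEXT("pelvis"), true, true);
}
```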
It was important that the player’s interaction with the world felt good. A common problem in VR is that the player’s hands can move very fast, which can cause tunneling.
The new Kinematic CCD feature allows for continuous collision detection, even for non-simulating objects.
We wanted enemies to get up from ragdoll. The new Pose Snapshot allows you to save any physically simulated pose and then use it in your anim blueprint.
Getting up from ragdoll is then a simple LERP in your anim blueprint between the last simulated pose and the closest animated pose.
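A sketch of the ragdoll-to-animation hand-off, assuming USkeletalMeshComponent::SnapshotPose; the function and flow here are illustrative.

```cpp
#include "Components/SkeletalMeshComponent.h"
#include "Animation/PoseSnapshot.h"

void BeginGetUp(USkeletalMeshComponent* Mesh, FPoseSnapshot& OutSnapshot)
{
    // Capture the final ragdoll pose from the simulated bodies.
    Mesh->SnapshotPose(OutSnapshot);

    // Stop simulating; from here the anim blueprint owns the pose. In the
    // anim graph, a Pose Snapshot node plays back OutSnapshot while a blend
    // LERPs from it to the closest get-up animation over a short window.
    Mesh->SetAllBodiesSimulatePhysics(false);
}
```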
We also added the concept of constraint profiles. This allows for quickly switching between different constraint requirements at runtime. You can see we used quite a few of them to get the exact look we wanted.
Switching the physics asset’s constraint settings at runtime lets you prepare tuning for each situation, such as getting up or grabbing a gun
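A minimal sketch of switching profiles at runtime, assuming profiles named "GetUp" and "Default" have been authored in the physics asset.

```cpp
#include "Components/SkeletalMeshComponent.h"

void OnStartGetUp(USkeletalMeshComponent* Mesh)
{
    // Stiffer drives / tighter limits while the character rights itself;
    // falls back to the default profile for constraints lacking "GetUp".
    Mesh->SetConstraintProfileForAll(TEXT("GetUp"), /*bDefaultOnFail=*/true);
}

void OnFinishGetUp(USkeletalMeshComponent* Mesh)
{
    Mesh->SetConstraintProfileForAll(TEXT("Default"), true);
}
```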
There is a new Spring Interpolation blueprint node which provides a very simple spring model for moving objects. This gave weapons a sense of weight without having to simulate them. We also used it for some cheap effects like enemy drones.
Giving the motion some inertia makes the weapon feel weighty
The same approach was applied to the drones
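The blueprint node is backed by the spring-interp helpers in UKismetMathLibrary; here is a sketch of using the vector version from C++ (the component flow and tuning values are illustrative).

```cpp
#include "Kismet/KismetMathLibrary.h"

FVectorSpringState WeaponSpringState; // persistent spring state between frames

FVector TickWeaponLag(const FVector& CurrentPos, const FVector& TargetPos,
                      float DeltaTime)
{
    // Damped spring: the weapon chases the hand transform, lagging and
    // settling just enough that it reads as having mass.
    return UKismetMathLibrary::VectorSpringInterp(
        CurrentPos, TargetPos, WeaponSpringState,
        /*Stiffness=*/250.f, /*CriticalDampingFactor=*/0.9f,
        DeltaTime, /*Mass=*/1.f);
}
```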
Live Link is a system we have developed to allow animation data to be streamed into the editor. It is in its very early stages and is still experimental, but our goal is to have a defined entry point for people wanting to stream animation data into the engine, as well as to take care of some of the boilerplate work and editor interaction, so that developers can concentrate on building just what they need to get their data in, be it from mocap, motion controllers or a DCC.
The drive to implement this came from two sources:
The first is our experience building the “Senua” GDC 2016 demo with Ninja Theory. It used an animation node to communicate directly with the mocap system, which had some challenging workflow drawbacks. As we love seeing people use our engine for exciting non-game applications like that one, we want to give those people a much better pipeline to work with and build from.
The second was looking at the workflow of animators on Paragon. Our Paragon hero characters increasingly make use of dynamic elements driven in game (such as anim dynamics or clothing) which the animators cannot see in Maya. Also, as we move into more facial animation we want to be able to quickly view it as it would look in game (with all the materials and lighting), which is difficult to do directly in Maya.
The purpose of Live Link is to act as a centre point for animation data coming into the engine. This is handled by the main Live Link system in the engine. Other parts of the engine, for example the animation editor or Sequencer, can request animation data from the Live Link system to use however they want. The Live Link system itself has ‘sources’ that it receives animation data from. These can be defined entirely within Unreal plugins, allowing third parties to add their own streaming sources without having to change any engine code.
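A skeleton of a custom source against the experimental ILiveLinkSource interface; since Live Link is still experimental this interface may change, and the connection details here are placeholders.

```cpp
#include "ILiveLinkSource.h"
#include "ILiveLinkClient.h"

class FMyMocapSource : public ILiveLinkSource
{
public:
    virtual void ReceiveClient(ILiveLinkClient* InClient, FGuid InSourceGuid) override
    {
        Client = InClient;
        SourceGuid = InSourceGuid;
        // Open the connection to the external device/app here, then push
        // subject skeletons and per-frame data to Client as they arrive.
    }

    virtual bool IsSourceStillValid() override { return Client != nullptr; }

    virtual bool RequestSourceShutdown() override
    {
        Client = nullptr; // close the connection
        return true;
    }

    virtual FText GetSourceType() const override        { return NSLOCTEXT("MyMocap", "Type", "My Mocap"); }
    virtual FText GetSourceMachineName() const override { return NSLOCTEXT("MyMocap", "Machine", "localhost"); }
    virtual FText GetSourceStatus() const override      { return NSLOCTEXT("MyMocap", "Status", "Active"); }

private:
    ILiveLinkClient* Client = nullptr;
    FGuid SourceGuid;
};
```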
As part of the development of Live Link we have created a plugin for Maya that will stream animation data into the engine. For communication with the engine we have built a framework on top of Unreal Message Bus. Not only does this create a separation in our Maya plugin between the code that interacts with Maya and the code that communicates with the engine, it also allows easier development of plugins for other DCC packages: by building directly on top of our framework, a developer can concentrate only on working with the DCC package’s API.
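A minimal sketch of what the Message Bus side of such a bridge can look like. FMessageEndpoint is real UE4 API, but the message type FMyLiveLinkPoseMessage and the handler are hypothetical; a real message must be a reflected USTRUCT so Message Bus can serialize it.

```cpp
#include "MessageEndpoint.h"
#include "MessageEndpointBuilder.h"

// Hypothetical payload; in a real plugin this must be a USTRUCT() built
// with UHT reflection (bone names + transforms for one streamed frame).
struct FMyLiveLinkPoseMessage { /* ... */ };

class FLiveLinkBridge
{
public:
    void Start()
    {
        // Build an endpoint that handles pose messages from the DCC side.
        Endpoint = FMessageEndpoint::Builder(TEXT("LiveLinkBridge"))
            .Handling<FMyLiveLinkPoseMessage>(this, &FLiveLinkBridge::HandlePose);
        if (Endpoint.IsValid())
        {
            // Listen for pose messages published by the DCC-side plugin.
            Endpoint->Subscribe<FMyLiveLinkPoseMessage>();
        }
    }

private:
    void HandlePose(const FMyLiveLinkPoseMessage& Message,
                    const TSharedRef<IMessageContext, ESPMode::ThreadSafe>& Context)
    {
        // Forward the received transforms into the Live Link client here.
    }

    TSharedPtr<FMessageEndpoint, ESPMode::ThreadSafe> Endpoint;
};
```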