Sceneform AR physics experiment
Jul 4, 2020
One of the Sceneform example applications demonstrates the image recognition capability (which I use in my AR Map application to activate the augmented classroom billboards) by projecting a green maze structure onto a sheet of notebook paper and placing an augmented red ball into the maze. The user can then tilt the notebook, and the maze follows it, causing the ball to roll around inside the maze structure. This is the augmented reality equivalent of toys like these.
That code example uses the JBullet physics engine for the physics simulation portion of the augmented reality: gravity, collision detection between the ball and the maze wall pieces, friction, and restitution. The ball has to be affected by gravity so it moves inside the maze, the maze walls have to be constructed along with a bottom plane which prevents the ball from falling out of the scene, and the maze has to follow the orientation of the notebook.
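To make that setup concrete, here is a minimal sketch, assuming the cz.advel.jbullet:jbullet:20101010 artifact: a dynamics world with gravity, a static bottom plane, and a per-frame step. The class and method names (MazeWorld, tick) are my own illustration, not the sample's actual code.

```java
import javax.vecmath.Vector3f;

import com.bulletphysics.collision.broadphase.DbvtBroadphase;
import com.bulletphysics.collision.dispatch.CollisionDispatcher;
import com.bulletphysics.collision.dispatch.DefaultCollisionConfiguration;
import com.bulletphysics.collision.shapes.StaticPlaneShape;
import com.bulletphysics.dynamics.DiscreteDynamicsWorld;
import com.bulletphysics.dynamics.RigidBody;
import com.bulletphysics.dynamics.RigidBodyConstructionInfo;
import com.bulletphysics.dynamics.constraintsolver.SequentialImpulseConstraintSolver;
import com.bulletphysics.linearmath.DefaultMotionState;
import com.bulletphysics.linearmath.Transform;

public class MazeWorld {
  private final DiscreteDynamicsWorld world;

  public MazeWorld() {
    // Standard JBullet plumbing: collision configuration, dispatcher,
    // broadphase, and constraint solver.
    DefaultCollisionConfiguration config = new DefaultCollisionConfiguration();
    world = new DiscreteDynamicsWorld(
        new CollisionDispatcher(config), new DbvtBroadphase(),
        new SequentialImpulseConstraintSolver(), config);

    // Sceneform / ARCore units are meters, so gravity is in m/s^2.
    world.setGravity(new Vector3f(0f, -9.81f, 0f));

    // Static bottom plane (normal +Y, through the origin) that keeps the
    // ball from falling out of the scene; zero mass means a static body.
    Transform start = new Transform();
    start.setIdentity();
    RigidBodyConstructionInfo floorInfo = new RigidBodyConstructionInfo(
        0f, new DefaultMotionState(start),
        new StaticPlaneShape(new Vector3f(0f, 1f, 0f), 0f),
        new Vector3f(0f, 0f, 0f));
    world.addRigidBody(new RigidBody(floorInfo));
  }

  // Called from the Sceneform frame callback; the sub-step cap keeps the
  // simulation stable when frame times fluctuate.
  public void tick(float deltaSeconds) {
    world.stepSimulation(deltaSeconds, 10);
  }
}
```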
I was thinking about good material for an AR physics experiment and I remembered a real-time CUDA GPGPU demo from many years ago where the user threw metal balls against a tower structure built from wooden KEVA planks. It was something like this; incidentally, in that YouTube video the physics engine is also Bullet. Another example of a complex KEVA plank tower ball-throw simulation is this.
Here is the resulting app's Google Play listing: https://play.google.com/store/apps/details?id=dev.csaba.arphysics.
Bullet is a native C++ library; JBullet (cz.advel.jbullet:jbullet:20101010) is a Java reimplementation, and that can certainly have performance consequences. As you can see, the current tower structure is very simple because I'm just probing the limitations. Regardless, I made the height of the tower (the number of plank levels) configurable, along with many physical properties such as density, restitution, friction, and gravity. I knew I'd run into interesting things, and the project didn't disappoint. I already wrote about how I hit a brick wall when I tried to get reacquainted with existing Sceneform projects; that set me back a week. Once I jumped through that hoop and reached the point of the physics simulation, several issues emerged:
- Originally I enabled the simulation right after the tower structure was planted on a surface. However, the structure immediately "exploded". I suspected that I planted the planks too close to each other and they repelled each other as if they were magnets of the same polarity. So first I artificially spaced them out so they'd have to fall a little. At that point I discovered that even in an equilibrium situation they visibly hover over each other, not touching. This is because there's an artificial margin around the box collision shapes: the engine uses it to test collision situations, and it's a very important parameter. The default is 0.03, which in Bullet / JBullet I think is supposed to be in centimeters (?). However, the standard Sceneform and ARCore dimensions are in meters, and the planks were hovering with a gap of 1-2 centimeters over each other. So first I decreased the margin to 0.0025 (2.5 millimeters), and then I added logic to the PhysicsController layer (the layer between JBullet and my AR Scene) to artificially remove that margin from the AR scene boxes, unbeknownst to ARCore (see the first sketch after this list). This way the physics engine can be happy and the AR Scene won't see any gaps (even ones as small as 2.5 mm) either. These techniques decreased the initial puff of the structure but didn't eliminate it completely. The sphere collision shape doesn't require this treatment.
- Besides the initial bounce there is another annoying phenomenon: if I turn on the simulation right after the tower placement, the whole structure is jittery; the planks are restless and the tower slowly slides apart. This is a side effect of rounding errors and imperfections in the simulation, and the engine needs countermeasures to calm the elements of the simulation. The engine can use heuristics to deactivate items to avoid this. There's also the parallel concept of "sleeping" items. You can specify the threshold levels for sleeping: ballRB.setSleepingThresholds(0.8f, 1.0f);. When an item is in a sleeping state it can still wake up when another body kinetically pushes it or otherwise interacts with it (see the second sketch after this list).
- The simulation itself seems to be very limited: the planks don't really rotate. It's either a limitation of the cz.advel.jbullet:jbullet:20101010 Java port, or I need to enable something in the engine (use some different algorithms?).
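Here is a minimal sketch of the margin compensation described in the first bullet, as I understand it, assuming the shape is built inside the PhysicsController; the helper names are hypothetical. The box handed to Bullet is shrunk by the margin on each side so that the collision surface (shape plus margin) lines up with the rendered Sceneform box:

```java
import javax.vecmath.Vector3f;

import com.bulletphysics.collision.shapes.BoxShape;

public final class PlankShapes {
  // Reduced collision margin: 2.5 mm instead of the library default.
  static final float MARGIN = 0.0025f;

  // Build the Bullet box slightly smaller than the rendered Sceneform box:
  // the margin is subtracted from each half extent so the planks rest
  // flush visually instead of hovering. BoxShape takes half extents.
  static BoxShape forRenderedBox(Vector3f fullSizeMeters) {
    BoxShape shape = new BoxShape(new Vector3f(
        fullSizeMeters.x / 2f - MARGIN,
        fullSizeMeters.y / 2f - MARGIN,
        fullSizeMeters.z / 2f - MARGIN));
    shape.setMargin(MARGIN);
    return shape;
  }
}
```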
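And a sketch of the sleeping setup from the second bullet, applied to the planks the same way as to the ball; the 0.8 / 1.0 values are the linear and angular velocity thresholds below which a body may fall asleep (the helper class is mine for illustration):

```java
import java.util.List;

import com.bulletphysics.dynamics.RigidBody;

public final class SleepTuning {
  // Bodies whose linear velocity stays under 0.8 m/s and angular velocity
  // under 1.0 rad/s may be deactivated ("fall asleep"), taking them out of
  // the solver until another body collides with them.
  static void applySleepingThresholds(List<RigidBody> planks, RigidBody ballRB) {
    for (RigidBody plankRB : planks) {
      plankRB.setSleepingThresholds(0.8f, 1.0f);
    }
    ballRB.setSleepingThresholds(0.8f, 1.0f);
  }
}
```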
I have several future plans lined up:
- Use the Sceneform-Bullet physics engine. This is also a Java derivative of Bullet, but it embeds Bullet's native C++ implementation as an NDK library. This requires compiling the native part, and not so long ago the NDK dropped GCC from the SDK; that was a huge headache for my long-time VR project, and I predict it will cause problems with the compilation of Sceneform-Bullet as well.
- Another engine I'd like to integrate as a selectable choice is jMonkeyEngine: it has a Bullet core, but it also has its own engine implementation, which I'm very curious about. It seems that integrating either jMonkeyEngine or Sceneform-Bullet will require major refactoring of how the physics layer interacts with the AR Scene layer. Currently the AR Scene layer hands the AR Nodes over to the physics layer and lets the physics layer move and rotate the AR objects according to the physics simulation. This is a tighter coupling than a full-fledged, well-defined API (see the interface sketch after this list).
- I also want to add other scenarios. For example, I could randomly scatter a bunch of planks bounded by a rectangular 1 meter by 1 meter box, then place an extra giant "hockey puck" which would be a Sceneform TransformableNode. The user would be able to grab that hockey puck and yank it around, stirring the objects within the simulation box. On the Bullet physics end the puck would be a so-called "kinematic object", so the physics engine would not move it itself; instead the controller layer would move it based on the user's gestures (see the kinematic sketch after this list). The goal of the concept would be something like what's seen here (source code here).
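For the refactoring, a well-defined boundary could look something like this hypothetical interface (entirely my speculation, not existing code): the AR Scene layer registers bodies by id and consumes pose updates through a callback, instead of handing its Sceneform Nodes to the physics layer:

```java
import javax.vecmath.Quat4f;
import javax.vecmath.Vector3f;

// Hypothetical physics-layer API: the AR Scene layer never exposes its
// Sceneform Nodes; it only exchanges ids and poses with the physics layer.
public interface PhysicsScene {
  interface PoseListener {
    void onPoseChanged(int bodyId, Vector3f position, Quat4f rotation);
  }

  // Registers a dynamic box (e.g. a plank) and returns its body id.
  int addBox(Vector3f halfExtentsMeters, float massKg);

  // Feeds a user-driven (kinematic) pose into the simulation.
  void moveKinematic(int bodyId, Vector3f position, Quat4f rotation);

  // Advances the simulation and reports every body that moved.
  void step(float deltaSeconds, PoseListener listener);
}
```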
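For the hockey puck itself, the JBullet side could look like this sketch (the helper names are mine): the kinematic flag tells Bullet not to integrate the body while still letting it push dynamic bodies, deactivation is disabled so it keeps stirring the planks, and each gesture update is written into the body's motion state, which Bullet reads on the next step:

```java
import com.bulletphysics.collision.dispatch.CollisionFlags;
import com.bulletphysics.collision.dispatch.CollisionObject;
import com.bulletphysics.dynamics.RigidBody;
import com.bulletphysics.linearmath.Transform;

public final class KinematicPuck {
  // Mark the puck kinematic: Bullet won't move it, but it still pushes
  // dynamic bodies. Deactivation must be disabled, otherwise the puck
  // falls asleep and stops interacting with the planks.
  static void makeKinematic(RigidBody puckRB) {
    puckRB.setCollisionFlags(
        puckRB.getCollisionFlags() | CollisionFlags.KINEMATIC_OBJECT);
    puckRB.setActivationState(CollisionObject.DISABLE_DEACTIVATION);
  }

  // Called from the TransformableNode's gesture handling: Bullet reads a
  // kinematic body's transform from its motion state on each step.
  static void onUserMoved(RigidBody puckRB, Transform gesturePose) {
    puckRB.getMotionState().setWorldTransform(gesturePose);
  }
}
```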