What's new in ARKit 2.0

Two weeks have passed since the end of WWDC. The sessions have been watched, the documentation re-read, and demo projects built, which means it is time to put everything I have collected into an article.



In the first version of ARKit, it was possible to track the movement of the phone in space, estimate the intensity and color temperature of the ambient light, and receive information about horizontal planes. ARKit 1.5, which shipped with iOS 11.3, improved image quality and added detection of vertical planes, recognition of static 2D images, and autofocus. Let's see what was added in version 2.0.


Saving and restoring the AR map


We now have the ability to save the environment map together with the augmented reality objects placed in it. With a map, you can initialize an AR session from it, after which previously placed objects appear in the right places. The saved map can also be uploaded to a server and used on other devices.


It is implemented like this: ARSession has a getCurrentWorldMap(completionHandler:) method, which returns an ARWorldMap . This object stores information about the feature points with which ARKit can restore the zero coordinate of the scene, as well as an array of ARAnchors to which objects can be attached. An ARWorldMap can be saved or sent somewhere. To restore the map, assign it to the initialWorldMap field of ARWorldTrackingConfiguration before starting the session. After the start, the session status goes to the .limited state with the reason .relocalizing . As soon as ARKit collects enough points for recovery, the zero coordinate is set to the correct position and the session status goes to the .normal state.
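
As a rough sketch (not from the original article), saving the map to disk and restoring a session from it might look like this; the file URL and the NSKeyedArchiver-based persistence are my assumptions:

import ARKit

// Save: request the current world map and archive it to disk.
func saveWorldMap(from session: ARSession, to url: URL) {
    session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else { return }
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                        requiringSecureCoding: true) {
            try? data.write(to: url)
        }
    }
}

// Restore: unarchive the map and hand it to the configuration before running the session.
func restoreSession(on session: ARSession, from url: URL) {
    guard let data = try? Data(contentsOf: url),
          let map = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data)
    else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}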


For best performance, Apple advises the following:



You do not need to monitor these conditions yourself: ARFrame now has a worldMappingStatus field. But you do need to take them into account when designing an application.
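
For example, a hypothetical check (saveButton is an assumed UI element, not something from the article) that only allows saving the map once enough data has been gathered might look like this:

// ARSessionDelegate callback: enable saving only when the map is good enough.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    switch frame.worldMappingStatus {
    case .mapped, .extending:
        saveButton.isEnabled = true
    default:
        saveButton.isEnabled = false
    }
}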


Multiplayer Augmented Reality


The mechanism for saving the environment map makes it possible to synchronize the coordinate systems of several devices. Knowing the position of each device relative to the environment map, you can build multi-user scenarios.


The presentation featured the SwiftShot game, in which you use your slingshots to knock down the opponent's slingshots.



The game is written in Swift + SceneKit. User actions are synchronized using the MultipeerConnectivity framework. The source code of the application can be downloaded here.
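
As a rough sketch of the idea (not the actual SwiftShot code), the world map could be shared with nearby devices like this; mcSession is assumed to be an already connected MCSession:

import ARKit
import MultipeerConnectivity

// Archive the current world map and send it to all connected peers.
func shareWorldMap(from arSession: ARSession, over mcSession: MCSession) {
    arSession.getCurrentWorldMap { worldMap, _ in
        guard let map = worldMap,
              let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                           requiringSecureCoding: true)
        else { return }
        try? mcSession.send(data, toPeers: mcSession.connectedPeers, with: .reliable)
    }
}

On the receiving side the map is unarchived and passed to initialWorldMap, just as in the save/restore sketch above.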


Environment reflections


When you add a metallic virtual object to the scene, you want to see real-world objects reflected in it. For this, ARWorldTrackingConfiguration got an environmentTexturing field. If you use SceneKit as the engine and set environmentTexturing to .automatic , you get this result:



From the camera image, ARKit builds a cube map with the texture of the environment. The parts that do not fit into the frame are filled in using machine learning algorithms.
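
Enabling it takes a couple of lines; a minimal sketch, assuming sceneView is an ARSCNView configured elsewhere:

import ARKit

let configuration = ARWorldTrackingConfiguration()
configuration.environmentTexturing = .automatic  // ARKit places environment probes itself
sceneView.session.run(configuration)

With SceneKit, reflective (metallic) materials then pick up the generated environment texture automatically.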


Tracking moving 2D images


In ARKit 1.5, only static images could be tracked. In the second version this restriction is removed, and you can now get the coordinates of moving images. Similar functionality was previously provided by the Vuforia SDK. At the presentation, as an example, they showed replacing a photo in a photo frame with a video:



For better tracking, use high-contrast, well-textured images with distinctive features. Xcode will issue a warning if this requirement is not met.


To track images, use ARImageTrackingConfiguration . Pass the trackingImages array to the configuration and set maximumNumberOfTrackedImages . The image coordinates are returned as ARImageAnchor objects.
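
A minimal sketch of the setup; "AR Resources" is an assumed asset catalog group name, and sceneView is an assumed ARSCNView:

import ARKit

let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                       bundle: nil) ?? []
let configuration = ARImageTrackingConfiguration()
configuration.trackingImages = referenceImages
configuration.maximumNumberOfTrackedImages = 2  // how many images to track simultaneously
sceneView.session.run(configuration)

// ARImageAnchor instances then arrive in the delegate callbacks,
// e.g. renderer(_:didAdd:for:) when using ARSCNView.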


Tracking static 3D objects


Support for recognizing static 3D objects has also been added. Before recognition, the object must be scanned; this can be done with a sample application from Apple. The scanned object should be rigid, matte, and well textured.


To track objects, create an ARReferenceObject from a file or a resource group and add it to ARWorldTrackingConfiguration.detectionObjects . You will receive information about detected objects in ARFrame as ARObjectAnchor instances.
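
A minimal sketch, assuming the scanned objects live in a resource group named "AR Objects" and sceneView is an ARSCNView:

import ARKit

let referenceObjects = ARReferenceObject.referenceObjects(inGroupNamed: "AR Objects",
                                                          bundle: nil) ?? []
let configuration = ARWorldTrackingConfiguration()
configuration.detectionObjects = referenceObjects
sceneView.session.run(configuration)

// Detected objects appear as ARObjectAnchor in frame.anchors
// or in the session(_:didAdd:) delegate callback.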


As an example, the presentation showed displaying information about a museum statuette in augmented reality.


Face tracking


In previous versions, it was possible to obtain the position and rotation of the face, a polygon mesh of the face, and an array of blend shapes (51 facial expression coefficients ranging from zero to one). The second version brings three new features:


directional light estimation.


ARKit 2 uses the image of the face as a source of light information. From it, you can determine the intensity, color temperature, and direction of light. This makes masks look more realistic;


tongue tracking.


A tongueOut blend shape was added, which reports in the range [0, 1] how far the tongue is stuck out. From my own experience I can add that almost all of my friends who I let play with Animoji tried sticking out their tongue;


eye tracking.


ARFaceAnchor has three new fields: leftEyeTransform , rightEyeTransform , and lookAtPoint . There are already demos online showing how they can be used:
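
As a rough sketch (not from the original article), reading the tongue coefficient and the new eye fields in an ARSessionDelegate callback might look like this:

import ARKit

func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    for case let faceAnchor as ARFaceAnchor in anchors {
        // Tongue: 0 means hidden, 1 means fully stuck out.
        let tongue = faceAnchor.blendShapes[.tongueOut]?.floatValue ?? 0

        // Eyes: transforms of both eyeballs and the estimated gaze point,
        // all in the face anchor's coordinate space.
        let leftEye = faceAnchor.leftEyeTransform
        let rightEye = faceAnchor.rightEyeTransform
        let gaze = faceAnchor.lookAtPoint

        print(tongue, leftEye, rightEye, gaze)
    }
}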



General improvements in the new version:



All of these improvements, except for the switch to the 4:3 aspect ratio, are applied to your applications automatically. For the latter, you need to rebuild the application with the new SDK.




If this information was useful to you, support the article with an upvote. I am ready to answer questions in the comments.

Source: https://habr.com/ru/post/415277/

