ARCore allows developers to build augmented reality experiences for Android devices. It uses SLAM (simultaneous localization and mapping) to track a device's position and understand its environment. ARCore can detect surfaces like tables and walls to place virtual objects, estimate lighting for realistic rendering, and track objects as users move. Developers can use the ARCore Java SDK for simple AR apps or the Unity SDK for more complex experiences. Features like Cloud Anchors and Augmented Images allow sharing AR content across devices.
8. Motion Tracking: As your device moves through the world, ARCore combines visual data from the device's camera and IMU to compute the position and orientation of the phone.
9. Environmental Understanding: ARCore understands the physical structure of your environment, detecting horizontal and vertical surfaces, like tables, desks, and walls, and makes these surfaces available to your app as planes.
10. Light Estimation: ARCore can detect information about the lighting of its environment so you can render your virtual objects under the same conditions as the environment around them.
13. ARCore Java SDK vs. ARCore Unity SDK
ARCore Java SDK: simple to start; convenient Java API; familiar Android setup, tests, CI, etc.; runs in Dalvik VM / ART; little to zero 3D editing capabilities; no scene editing capabilities; no 3D object interaction; no simple 3D object animations, transformations, or shaders.
ARCore Unity SDK: getting started requires Unity knowledge; API exposed in C#; run or export to an Android project; rich 3D editing and scripting; physics engines; artist and designer tools; top engine performance; graphics rendering; cross-platform*.
19. Sceneform SDK: a high-level scene graph API; a realistic, physically based renderer; and an Android Studio plugin for importing, viewing, and building 3D assets, with Google Poly support.
Augmented reality (AR) is an interactive experience of a real-world environment where the objects that reside in the real world are "augmented" by computer-generated perceptual information.
https://medium.com/coding-blocks/arcore-diving-into-the-world-of-augmented-reality-31ba228d8530
Augmented reality is a direct or indirect live view of a physical, real-world environment whose elements are "augmented" by computer-generated perceptual information. Virtual reality is the use of computer technology to create a simulated environment, placing the user inside an experience.
Both technologies enable us to experience computing more like we experience the real world; they make computing work more like we do in regular life, in a 3D space. In terms of how the two technologies are used, think of it like this: VR transports you to a new experience. You don't just get to see a place, you feel what it's like to be there. AR brings computing into your world, letting you interact with digital objects and information in your environment.
Generally speaking, this difference makes AR a better medium for day-to-day applications, because users don’t have to shut out the world to engage with them.
On the mobile AR side again: this is how most of the world will experience augmented reality for the first time. In fact, the rapid development of smartphones has actually contributed to the growth of the VR and AR industries. That's because the same components that make smartphones work (gyroscopes, accelerometers, and miniaturized high-resolution displays) are also required for AR and VR headsets. The high demand for smartphones has driven the mass production of these components throughout the past 10 years, resulting in greater hardware innovations and decreases in costs. In the most basic sense, AR is created using the front and rear facing cameras on your phone. You hold it up, and your screen is able to display digital objects and information integrated within your real world. Your phone can now act like a portal to new worlds, experiences, and information.
HARDWARE USED FOR AR
Whether it's happening on a smartphone or inside a standalone headset, every AR app is intended to show convincing virtual objects. One of the most important things that systems like ARCore do is motion tracking. AR platforms need to know when you move. The general technology behind this is called Simultaneous Localization and Mapping, or SLAM. This is the process by which technologies like robots and smartphones analyze, understand, and orient themselves to the physical world. SLAM processes require data-collecting hardware like cameras, depth sensors, light sensors, gyroscopes, and accelerometers. ARCore uses all of these to create an understanding of your environment and uses that information to correctly render augmented experiences by detecting planes and feature points to set appropriate anchors. In particular, ARCore uses a process called Concurrent Odometry and Mapping, or COM. That might sound complex, but basically, COM tells a smartphone where it's located in space in relation to the world around it. It does this by capturing visually distinct features in your environment. These are called feature points. These feature points can be the edge of a chair, a light switch on a wall, the corner of a rug, or anything else that is likely to stay visible and consistently placed in your environment. Any high-contrast visual can serve as a feature point. This means that vases, plates, cups, wood textures, wallpaper designs, statues, and other common elements could all work as potential feature points. ARCore combines its awareness of feature points with inertial data, all the information about your movement, from your smartphone. Many smartphones in existence today have gyroscopes for measuring the phone's angle and accelerometers for measuring the phone's speed. Feature points and inertial data work together to help ARCore determine your phone's pose. Pose means any object's position and orientation relative to the world around it. Now that ARCore knows the pose of your phone, it knows where it needs to place digital assets so they seem logical in your environment. Remember, virtual objects need to have a place and be at the right scale as you walk around them. For example, a lion needs to have its feet on the ground to create the illusion that it is standing there, rather than floating in space.
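To make this concrete, here is a minimal sketch of reading the phone's pose with the ARCore Java API, assuming you already have a running Session; the class and method names are illustrative, not part of any sample app.

    import com.google.ar.core.Camera;
    import com.google.ar.core.Frame;
    import com.google.ar.core.Pose;
    import com.google.ar.core.Session;
    import com.google.ar.core.TrackingState;
    import com.google.ar.core.exceptions.CameraNotAvailableException;

    class PoseReader {
        // Call once per rendered frame, e.g. from your GLSurfaceView renderer.
        void readPose(Session session) throws CameraNotAvailableException {
            Frame frame = session.update();      // latest camera image plus tracking results
            Camera camera = frame.getCamera();
            if (camera.getTrackingState() == TrackingState.TRACKING) {
                Pose pose = camera.getPose();    // device position and orientation in world space
                float[] translation = pose.getTranslation();      // x, y, z in meters
                float[] rotation = pose.getRotationQuaternion();   // qx, qy, qz, qw
                // Use these to position your virtual camera so rendered objects stay world-locked.
            }
        }
    }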
Once ARCore has analyzed your surroundings and placed planes and reference points where they belong, you'll be able to set anchors for your AR objects. Anchors, also referred to as anchor points, are the points in your environment that ARCore knows should always hold their respective digital object. This applies specifically to static digital objects. For example, say you want to place a digital lamp on a table. You would set the anchor to be on top of the table, which ARCore has already discovered and recognized as a horizontal plane. Now, once that lamp is placed, it will stay where you've put it and respond the way it should to your movements and orientation. If you turn around, the lamp stays on the table. If you turn back, it will still be there waiting for you. For objects that are meant to move around in space, such as an airplane or a helicopter, anchoring like we described for the lamp wouldn't apply. Anchor points are hard to pull off for AR platforms because setting them requires all of the plane finding, motion tracking, and computer vision systems that we have already discussed. These points separate top-quality AR systems from those that simply project digital objects onto the feed from your phone's camera. The reason they're needed is that motion tracking is not perfect. As you walk around, error, referred to as drift, accumulates, and the device's pose may not reflect where you actually are. Anchors allow the underlying system to correct that error by indicating which points are important.
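As a rough illustration of how an app sets such an anchor with the ARCore Java API, here is a sketch that anchors content to a tapped plane; the helper class name is made up for this example.

    import android.view.MotionEvent;
    import com.google.ar.core.Anchor;
    import com.google.ar.core.Frame;
    import com.google.ar.core.HitResult;
    import com.google.ar.core.Plane;
    import com.google.ar.core.Trackable;

    class AnchorHelper {
        // Returns an anchor on the plane under the user's tap, or null if no plane was hit yet.
        Anchor anchorOnTappedPlane(Frame frame, MotionEvent tap) {
            for (HitResult hit : frame.hitTest(tap)) {
                Trackable trackable = hit.getTrackable();
                // Only anchor to detected planes, and only inside their detected boundary.
                if (trackable instanceof Plane
                        && ((Plane) trackable).isPoseInPolygon(hit.getHitPose())) {
                    return hit.createAnchor();  // ARCore keeps refining this pose as tracking improves
                }
            }
            return null;
        }
    }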
ARCore integrates virtual content with the real world as seen through your phone's camera and shown on your phone's display with technologies like motion tracking, environmental understanding, and light estimation.
Motion tracking uses your phone's camera, internal gyroscope, and accelerometer to estimate its pose in 3D space in real time.
Environmental understanding is the process by which ARCore “recognizes” objects in your environment and uses that information to properly place and orient digital objects. This allows the phone to detect the size and location of flat horizontal surfaces like the ground or a coffee table.
Light estimation in ARCore is a process that uses the phone’s cameras to determine how to realistically match the lighting of digital objects to the real world’s lighting, making them more believable within the augmented scene.
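A minimal sketch of reading that estimate with the ARCore Java API, assuming you already have the current Frame; the class and method names here are illustrative.

    import com.google.ar.core.Frame;
    import com.google.ar.core.LightEstimate;

    class LightingHelper {
        void applyLighting(Frame frame) {
            LightEstimate estimate = frame.getLightEstimate();
            if (estimate.getState() == LightEstimate.State.VALID) {
                float pixelIntensity = estimate.getPixelIntensity();  // average scene brightness
                float[] colorCorrection = new float[4];
                estimate.getColorCorrection(colorCorrection, 0);       // RGB scale factors + intensity
                // Feed these values to your shader so virtual objects match the room's lighting.
            }
        }
    }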
Feature points are visually distinct features in your environment, like the edge of a chair, a light switch on a wall, the corner of a rug, or anything else that is likely to stay visible and consistently placed in your environment.
Concurrent odometry and mapping (COM) is a motion tracking process for ARCore, and tracks the smartphone’s location in relation to its surrounding world.
Plane finding is the smartphone-specific process by which ARCore determines where surfaces are in your environment and uses those surfaces to place and orient digital objects. ARCore looks for clusters of feature points that appear to lie on common horizontal or vertical surfaces, like tables or walls, and makes these surfaces available to your app as planes. ARCore can also determine each plane's boundary and make that information available to your app. You can use this information to place virtual objects resting on flat surfaces.
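For example, here is a sketch of enumerating the planes ARCore has found so far with the Java API, assuming a running Session; the class name is illustrative.

    import com.google.ar.core.Plane;
    import com.google.ar.core.Session;
    import com.google.ar.core.TrackingState;

    class PlaneLister {
        void listPlanes(Session session) {
            // All planes ARCore has detected so far, horizontal and vertical.
            for (Plane plane : session.getAllTrackables(Plane.class)) {
                if (plane.getTrackingState() == TrackingState.TRACKING
                        && plane.getSubsumedBy() == null) {   // skip planes merged into larger ones
                    Plane.Type type = plane.getType();        // e.g. HORIZONTAL_UPWARD_FACING, VERTICAL
                    float extentX = plane.getExtentX();       // approximate plane size in meters
                    float extentZ = plane.getExtentZ();
                    // plane.getPolygon() returns the boundary, useful for rendering or placement checks.
                }
            }
        }
    }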
Anchors “hold” the objects in their specified location after a user has placed them.
Motion tracking is not perfect. As you walk around, error, referred to as drift, may accumulate, and the device's pose may not reflect where you actually are. Anchors allow the underlying system to correct that error by indicating which points are important.
WHAT TO USE - RECOMMENDATION
Pros and cons of using Java/Unity/Unreal (2 min)
Unity is a popular game engine for creating 3D objects for video games, films, AR content, and a variety of other projects. Unity has a ton of tools, from simple to professional, to allow for the streamlined creation of 3D objects and environments. Unity can be used hand-in-hand with ARCore to create an experience of your own. With Unity, you can import objects directly from Poly and then use a collection of tools and plug-ins to easily incorporate these objects into the app, experience, or game you're trying to build. Using Unity can become complex; some people devote their entire careers to improving their skills with the platform. For our purposes, we're going to stick to teaching you the Unity skills that will help you most when it comes to building AR content.
Unity is a cross-platform game engine and development environment for both 3D and 2D interactive applications. It has a variety of tools, from the simple to the professionally complex, to allow for the streamlined creation of 3D objects and environments.
Poly toolkit for Unity is a plugin that allows you to import assets from Poly into Unity at edit time and at runtime.
Edit-time means manually downloading assets from Poly and importing them into your app's project while you are creating your app or experience.
Runtime means downloading assets from Poly when your app is running. This allows your app to leverage Poly's ever-expanding library of assets.
Demo: the same changes in Java vs. Unity (3 min)
Use the hello_ar project for both Java and Unity.
Try making some simple changes in the Java project.
Show the same and much bigger changes made in one click in Unity.
Sceneform SDK is a high-level 3D framework that makes it easy for users to build AR apps in Java. It offers a new library for Android that enables the rapid creation and integration of AR experiences, and combines ARCore with a powerful physically-based 3D renderer. It includes a runtime API for working with graphics and rendering, and a plugin to help you import, preview, and tweak the look and feel of your assets directly in Android Studio.
Sceneform is highly optimized for mobile. Java developers can now build immersive, 3D apps without having to learn complicated APIs like OpenGL. They can use it to build AR apps from scratch as well as add AR features to existing ones.
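As a sketch of what that looks like in practice, here is how an app might attach an already-loaded model to the scene graph, assuming an ArFragment in the layout and an anchor created from a plane tap; the class and variable names are illustrative.

    import com.google.ar.core.Anchor;
    import com.google.ar.sceneform.AnchorNode;
    import com.google.ar.sceneform.rendering.ModelRenderable;
    import com.google.ar.sceneform.ux.ArFragment;
    import com.google.ar.sceneform.ux.TransformableNode;

    class ScenePlacer {
        // Attach a loaded renderable to an anchor (for example, one created from a plane tap).
        void place(ArFragment arFragment, Anchor anchor, ModelRenderable renderable) {
            AnchorNode anchorNode = new AnchorNode(anchor);                // world-locked root for the model
            anchorNode.setParent(arFragment.getArSceneView().getScene());  // add it to the scene graph
            TransformableNode model = new TransformableNode(arFragment.getTransformationSystem());
            model.setRenderable(renderable);
            model.setParent(anchorNode);                                   // users can move, scale, and rotate it
        }
    }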
What follows is a walkthrough of how to use Sceneform. It's more technically advanced than most of the other content in this course (it helps to have a little background in Java to fully appreciate how you might use it yourself), but we've included it so that aspiring creators can start to learn how to use Sceneform to make their own AR content.
Using Sceneform
To use Sceneform, first get the plugin. Like most other Android Studio Plugins, you can download this from the Android Studio Plugins page.
Preferences > Plugins > Browse Repositories > Google Sceneform Tools (Beta)
Once you have the plugin, it's easy to get assets imported. The first thing you'll need to do is place source files in your project. We recommend the sampledata folder because sampledata does not get bundled into your final project. This is important because you actually aren't going to want to include these raw source assets in your app. Supported files include:
obj
fbx
glTF
This is because Sceneform will convert these source files into a runtime-optimized format so that it performs well and looks great on a phone.
This new runtime-optimized format is the Sceneform Binary, .sfb (or SFB), and it's what you'll eventually want to include in your APKs.
To import a Sceneform asset, right click on the model, which will trigger an import wizard flow.
All the import wizard does is set up the connections between your model file and the generated files. The file path is automatically filled in when you import from the context menu. The "SFA" and "SFB" are created when you import, so in the example below we're just telling the importer where to put them once they're generated.
Now just click the finish button and your asset will be imported. From here, you'll want to see your new files.
The plugin will add your asset to the gradle build, hook it up to your build dependencies so it's always up-to-date with source assets, and generate the new SFB runtime-optimized file.
You'll notice that as soon as we finish, we pop up the SFB file viewer.
This viewer lets you see what your asset will look like without deploying it. The plugin uses the same renderer as we do on the phone: WYSIWYG.
Let's say you wanted to tweak the object, like making it look shinier. You can customize your assets with the SFA. The SFA file defines how Sceneform renders an asset. You can change the parameters of the look and feel of the asset. And when you change it, the SFB will be built using the new settings we selected.
Once the file is saved with new parameters, the SFB will be regenerated and reloaded in the viewer. When you have the SFA open in the editor you can edit the asset definition, and as soon as the SFB is rebuilt with the new parameters you'll see exactly how the asset will look in your app.
This is just one possible parameter tweak of many. To learn more about Material Parameters, including options for each of our supported file types, visit: https://developers.google.com/ar/develop/java/sceneform/sfa
Now it's time to put the asset in AR by loading it at runtime. This is code from the HelloSceneform app:
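Roughly, the loading code looks like the sketch below, assuming the importer generated an asset named andy.sfb; the activity, field, and tag names are illustrative rather than a verbatim copy of the sample.

    import android.app.Activity;
    import android.net.Uri;
    import android.util.Log;
    import com.google.ar.sceneform.rendering.ModelRenderable;

    public class HelloSceneformActivity extends Activity {
        private static final String TAG = "HelloSceneform";
        private ModelRenderable andyRenderable;

        private void loadRenderable() {
            // Build the renderable from the runtime-optimized SFB asset by Uri.
            ModelRenderable.builder()
                .setSource(this, Uri.parse("andy.sfb"))
                .build()
                .thenAccept(renderable -> andyRenderable = renderable)
                .exceptionally(throwable -> {
                    Log.e(TAG, "Unable to load renderable.", throwable);
                    return null;
                });
        }
    }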
As you can see, we've changed the setSource line to just load with a Uri.
From here, you've created the first iteration of your AR app using Sceneform!
To learn more about the Sceneform SDK, and how to work with these ModelRenderables now that they are in the runtime, check out the "Rendering for Android AR apps" session from Google I/O 2018 here: https://www.youtube.com/watch?v=jzaMMV6w_OE
Try running the demo results first, before coding.