Build AI-powered immersive apps for visionOS
A beginner-friendly, 3D-first course designed for developers who want to prepare for the next ‘iPhone moment’ - where AI meets spatial computing.
With Apple Vision Pro, the boundaries between the digital and physical worlds are blurring. But this shift isn't just about placing 3D objects in space - it's about creating immersive experiences with embedded intelligence.
How do you build such AI-powered immersive experiences? For the immersive 3D part, you need RealityKit - Apple's core 3D framework. For the AI part, you need the Foundation Models framework - Apple's core on-device AI framework.
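Here's a rough taste of what these two frameworks look like side by side - a minimal sketch, not course material (the exact APIs, error handling, and scene setup are covered step by step in the lessons):

```swift
import RealityKit
import FoundationModels

// RealityKit side: create a simple 3D sphere entity to place in space.
func makePlanet() -> ModelEntity {
    ModelEntity(
        mesh: .generateSphere(radius: 0.1),
        materials: [SimpleMaterial(color: .blue, isMetallic: false)]
    )
}

// Foundation Models side: ask Apple's on-device LLM a question.
func describePlanet() async throws -> String {
    let session = LanguageModelSession()
    let response = try await session.respond(to: "Describe Saturn in one sentence.")
    return response.content
}
```

The course shows how to combine the two, so the intelligence lives inside the 3D experience rather than alongside it.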
For most developers, both 3D-based immersive experiences and AI-driven UX are unfamiliar territory. Even if you've built 3D apps before, visionOS introduces entirely new paradigms - especially when combined with Apple's powerful on-device AI capabilities.
That's exactly what this course is for. Built from the ground up for visionOS, this course takes you through the A-Z of RealityKit - and shows you how to layer AI on top of it to build next-generation AI-powered spatial apps.
Hardware and Software Requirements
Setting Up Your Development Environment
Hello World visionOS
Anatomy Of A 3D Model
3D Model Formats
Where To Find 3D Models
Conversion From Other Formats
Reality Composer Pro Basics
Creating A Solar System Experience (Part 1)
Creating A Solar System Experience (Part 2)
Debugging 3D Scenes
The Building Blocks Of visionOS
Creating 2D Windows
Volumes And Model3D
Immersive Spaces (Part 1)
Immersive Spaces (Part 2)
RealityView For 3D Experiences
RealityKit Architecture For Dummies
Entities, Components And Systems (Part 1)
Entities, Components And Systems (Part 2)
Gestures And User Input (Part 1)
Gestures And User Input (Part 2)
Animations For Dummies
Custom Animations (Part 1)
Custom Animations (Part 2)
Lighting For Dummies
Lighting A 3D Scene (Part 1)
Lighting A 3D Scene (Part 2)
Immersive Audio (Part 1)
Immersive Audio (Part 2)
Attaching SwiftUI Views To 3D Models
What We’re Building
Introduction To Apple's On-Device LLM
Hello World AI
Using Instruction Files To Personalise The AI
Extracting Structured Data From The AI
Building the AI Tutor UI (Part 1)
Building the AI Tutor UI (Part 2)
Building the AI Tutor UI (Part 3)
Implementing A “Look-to-Scroll” Interaction
Connecting the AI to the User Interface
Adding an AI-Powered Lyrics Remix Feature
Building an AI Debugger
Using Tool Calling to Improve LLM Accuracy
Dr. Nikhil Jacob
This course comes with a 7-day money-back guarantee. Just let us know within this period if you don't like it and we will return your money, no questions asked. In other words, no risk for you!
A Mac with Apple silicon is required for this course. An Apple Vision Pro is ideal but not necessary - you can use the Vision Pro simulator in Xcode.
This course assumes prior SwiftUI knowledge. If you're not familiar, you can start with our "Mastering visionOS: Foundations With SwiftUI" course. You should also have a basic understanding of programming concepts like classes, functions, variables, conditional statements, and loops. While we’ll briefly touch on these, the primary focus will be on visionOS, RealityKit, and Apple’s spatial computing frameworks.