Gary Brown
VP of Marketing, Movidius
Movidius, an Intel Company
Bio
As VP of Marketing at Movidius, Gary managed the launch of the Myriad 2 Vision Processing Unit (VPU), which enables ultra-low-power vision in drones, VR/AR headsets, surveillance cameras, and other connected devices. At Intel, Gary now manages product marketing and strategic partnerships. Gary’s background in embedded DSP and his passion for vision technology help him navigate Movidius into this new era of machine intelligence.
Sessions
  • Deep Learning in AR: the 3 Year Horizon
    04:15 PM - 04:30 PM Jun 1
    Deep learning techniques are gaining popularity in many facets of embedded vision, and this holds true for AR and VR. Will they soon dominate every facet of vision processing? This talk explores that question by examining the theory and practice of applying deep learning to real-world problems in Augmented Reality, with concrete examples showing how this shift is happening quickly in some areas and more slowly in others. Today it is widely accepted that deep learning techniques based on Convolutional Neural Networks (CNNs) dominate image recognition tasks. Other applications use hybrid approaches, while still others hold out with classical embedded vision techniques. These themes are explored through specific real-world examples: gesture tracking (moving to CNNs), stereo depth (a hybrid approach), SLAM (moving toward a hybrid approach), and ISP (imaging pipelines staying with traditional algorithms). The talk contrasts the mix of algorithms being deployed today with a prediction of the mix we expect to find in AR/VR headsets three years from now.
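The contrast the abstract draws between classical embedded vision and learned approaches comes down to who designs the convolution kernel. A minimal sketch (illustrative only, not code from the talk): the same 2D convolution operation underlies both, but a classical pipeline uses a hand-designed filter such as a Sobel edge detector, whereas a CNN learns the kernel weights from data.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2D convolution (cross-correlation, as CNN layers use it)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Classical approach: a fixed, hand-designed Sobel kernel for vertical edges.
# In a CNN, a kernel of the same shape would be a learned parameter instead.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Toy image: dark on the left, bright on the right -> one vertical edge.
image = np.zeros((5, 5))
image[:, 3:] = 1.0

edges = conv2d(image, sobel_x)
print(edges)  # responds only in the columns where the edge falls
```

The hybrid pipelines the abstract mentions (stereo depth, SLAM) typically mix both: hand-designed stages where the geometry is well understood, and learned stages where it is not.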
Powered by Bizzabo