San Francisco, California, United States
423 followers · 399 connections

About

20+ years of experience building robotics, simulation, and ML systems across Meta (FAIR…

Experience & Education

  • Stealth Venture

Publications

  • Block-Sparse Recurrent Neural Networks

    Recurrent Neural Networks (RNNs) are used in state-of-the-art models in domains such as speech recognition, machine translation, and language modelling. Sparsity is a technique to reduce compute and memory requirements of deep learning models. Sparse RNNs are easier to deploy on devices and high-end server processors. Even though sparse operations need less compute and memory relative to their dense counterparts, the speed-up observed by using sparse operations is less than expected on different hardware platforms. In order to address this issue, we investigate two different approaches to induce block sparsity in RNNs: pruning blocks of weights in a layer and using group lasso regularization to create blocks of weights with zeros. Using these techniques, we demonstrate that we can create block-sparse RNNs with sparsity ranging from 80% to 90% with small loss in accuracy. This allows us to reduce the model size by roughly 10x. Additionally, we can prune a larger dense network to recover this loss in accuracy while maintaining high block sparsity and reducing the overall parameter count. Our technique works with a variety of block sizes up to 32x32. Block-sparse RNNs eliminate overheads related to data storage and irregular memory accesses while increasing hardware efficiency compared to unstructured sparsity.
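
    (A minimal code sketch of the block-pruning idea appears after this publications list.)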

    Other authors
    See publication
  • The Sticky Collision Beam

    Game Developer Magazine

    Creating intelligent third-person cameras is rife with difficulty. There are enemies to keep in view, obstacles to avoid, and important objects to highlight. Here, engine programmer Eric Undersander proposes a camera with a sticky collision beam to bypass many of the major issues faced by AI-controlled cameras.

    See publication
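
To make the block-pruning approach from the Block-Sparse Recurrent Neural Networks abstract concrete, here is a minimal NumPy sketch of magnitude-based block pruning. This is an illustrative reconstruction, not the paper's exact procedure: the block_prune name, the 16x16 block size, the mean-absolute-value block score, and the one-shot quantile threshold are all assumptions; the paper's second approach, group lasso regularization, is not shown.

    import numpy as np

    def block_prune(weights, block=16, sparsity=0.9):
        """Zero out entire block x block tiles of `weights`, keeping the
        tiles with the largest mean absolute value so that roughly the
        requested fraction of tiles ends up zero. (Illustrative sketch,
        not the paper's exact method.)"""
        rows, cols = weights.shape
        assert rows % block == 0 and cols % block == 0, "shape must tile evenly"

        # One score per tile: mean absolute weight of each block x block tile.
        tiles = weights.reshape(rows // block, block, cols // block, block)
        scores = np.abs(tiles).mean(axis=(1, 3))

        # Threshold chosen so ~`sparsity` of the tiles fall at or below it.
        cutoff = np.quantile(scores, sparsity)
        keep = (scores > cutoff).astype(weights.dtype)

        # Expand the tile-level mask back to element granularity and apply it.
        mask = np.kron(keep, np.ones((block, block), dtype=weights.dtype))
        return weights * mask

    # Example: prune a 256x256 recurrent weight matrix to ~90% block sparsity.
    W = np.random.randn(256, 256).astype(np.float32)
    W_sparse = block_prune(W, block=16, sparsity=0.9)
    print("sparsity: %.1f%%" % (100.0 * (1.0 - np.count_nonzero(W_sparse) / W_sparse.size)))

Per the abstract, it is this kind of dense block structure (with block sizes up to 32x32) that avoids the data-storage overheads and irregular memory accesses that limit the speed-ups achievable with unstructured sparsity.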
