Machine Learning for Artists and Designers: Using the Gesture Recognition Toolkit with Nick Gillian

Saturday, April 9, 2016
10:00 AM – 6:00 PM
The Frank-Ratchye STUDIO for Creative Inquiry
College of Fine Arts, Room 111
Organizer: Ali Momeni

*This workshop is invitation only and not open to the public.*

Abstract

This full-day workshop, presented by Nicholas Gillian, author of the Gesture Recognition Toolkit (also on GitHub) and machine learning expert at Google ATAP, is a hands-on guide to the real-time application of machine learning and gesture recognition for interaction design, musical instrument design, and computer vision. Participants do not need any prior knowledge of machine learning or gesture recognition, as the fundamentals of these topics will be reviewed in the opening section of the workshop. The workshop is intended to be as hands-on as possible, giving participants the opportunity to experience the power of a number of machine learning algorithms for themselves using free, open-source software. Participants are encouraged to bring along their own sensor devices, laptops, and audio software, and will get the chance to train, test, and experiment with a number of machine learning algorithms for real-time gestural control of music. The content of the workshop focuses on a high-level description of the core concepts of machine learning, giving participants the practical knowledge and essential skills they need to get the most out of any machine learning algorithm, without requiring deep knowledge of the underlying machine learning theory.

Workshop Scope

The workshop is suitable for anyone interested in applying machine learning to real-time interactive systems: gesture recognition, computer vision, mapping of sensor data for intelligent systems, rapid prototyping of HCI interfaces, building digital musical instruments, interactive robotics, and more.

This will be a hands-on workshop, with several hours of interactive exploration for each participant spread throughout the day. Participants are encouraged to bring along their own sensor devices, laptops, and audio software to experience for themselves the power of interactive machine learning.

The workshop will cover several key areas of interactive machine learning including:

  • The fundamentals of machine learning
  • Classification for discrete gesture recognition (Naive Bayes and Random Forests)
  • Regression for continuous mapping (Neural Networks)
  • Feature Extraction and data pre/post processing
  • Putting everything together to build more complex systems
  • (see the agenda below for more details, and the short code sketch that follows this list)
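To give a flavor of what this looks like in code, here is a minimal sketch of the basic GRT workflow the workshop builds on: collect labeled training data, wrap a classifier in a GestureRecognitionPipeline, train it, and predict the class of new samples in real time. This is a sketch based on the GRT examples, not workshop code; exact class and method names can vary slightly between GRT versions, and the synthetic data is only there to keep it self-contained.

```cpp
#include <GRT/GRT.h>
#include <random>
using namespace GRT;

int main() {
    // Labeled training data with 3 input dimensions (e.g. accelerometer x/y/z)
    ClassificationData trainingData;
    trainingData.setNumDimensions( 3 );

    // In the workshop these samples come from live sensors; here we fake
    // two noisy classes so the sketch is self-contained
    std::default_random_engine rng;
    std::normal_distribution<double> noise( 0.0, 0.1 );
    for( UINT i = 0; i < 50; i++ ) {
        VectorFloat classOne( 3 ), classTwo( 3 );
        classOne[0] = 1.0 + noise( rng ); classOne[1] = noise( rng ); classOne[2] = noise( rng );
        classTwo[0] = noise( rng ); classTwo[1] = 1.0 + noise( rng ); classTwo[2] = noise( rng );
        trainingData.addSample( 1, classOne );
        trainingData.addSample( 2, classTwo );
    }

    // Build a pipeline around a Naive Bayes classifier and train it
    GestureRecognitionPipeline pipeline;
    pipeline.setClassifier( ANBC() );
    if( !pipeline.train( trainingData ) ) return EXIT_FAILURE;

    // Classify a new sample in real time
    VectorFloat newSample( 3 );
    newSample[0] = 0.9; newSample[1] = 0.05; newSample[2] = 0.0;
    pipeline.predict( newSample );
    UINT predictedLabel = pipeline.getPredictedClassLabel();

    return predictedLabel == 1 ? EXIT_SUCCESS : EXIT_FAILURE;
}
```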

Participant Prerequisites

Participants for this workshop do not require any prior knowledge of machine learning or gesture recognition as the fundamentals of these topics will be reviewed in the opening section of the workshop.

Participants should be comfortable compiling and running openFrameworks applications; using make, the terminal, and git; and reading C++ code. However, very little C++ coding will be required, as participants will be supplied with several pre-written interactive demos throughout the workshop.

Tools

The workshop will use the following open-source tools throughout the day:

  • Gesture Recognition Toolkit (C++ open-source library for real-time gesture recognition)
  • openFrameworks (C++ framework for interactive computing)
  • Open Sound Control (OSC, a network protocol for data streaming)
  • Audio software: TBD; the basic audio examples may be done directly in openFrameworks
  • (see the section below for details on what to install before the workshop)

Agenda

10am: kick-off

10:15am – 11am: machine learning fundamentals

  • Supervised learning
  • Classification, Regression, Timeseries Analysis, Clustering
  • Training, Testing, Validation
  • Machine learning pipeline
  • Classification Algorithm Selection
  • Demo: pixel classification
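As a concrete illustration of the training/testing split covered in this session, the sketch below holds out part of a recorded dataset and reports the classification accuracy on it. The filename is a placeholder, and the split()/getTestAccuracy() calls follow the GRT examples; treat this as a sketch rather than exact workshop code.

```cpp
#include <GRT/GRT.h>
#include <iostream>
using namespace GRT;

int main() {
    // Load previously recorded, labeled data ("TrainingData.grt" is a placeholder)
    ClassificationData data;
    if( !data.load( "TrainingData.grt" ) ) return EXIT_FAILURE;

    // Hold out 20% of the samples as a test set
    ClassificationData testData = data.split( 80 );

    // Train a Naive Bayes classifier on the remaining 80%
    GestureRecognitionPipeline pipeline;
    pipeline.setClassifier( ANBC() );
    if( !pipeline.train( data ) ) return EXIT_FAILURE;

    // Evaluate on the held-out samples and report the accuracy
    if( !pipeline.test( testData ) ) return EXIT_FAILURE;
    std::cout << "Test accuracy: " << pipeline.getTestAccuracy() << "%" << std::endl;

    return EXIT_SUCCESS;
}
```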

11am – 1pm: discrete classification

  • Naive Bayes, the basics
  • Demo GyroOSC: Tilt Detection
  • Demo GyroOSC: Shake Detection
  • Demo Kinect Posture Classification 
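A rough sketch of the per-frame classification loop behind these demos is shown below. The getLatestSensorSample() helper and the data filename are placeholders for whatever delivers and stores your sensor data (e.g. GyroOSC messages received over OSC); the null-rejection calls follow the GRT classifier API, but exact names may vary by version.

```cpp
#include <GRT/GRT.h>
using namespace GRT;

// Placeholder for whatever delivers the latest sensor reading (e.g. data
// from GyroOSC received over OSC); returns zeros so the sketch compiles
VectorFloat getLatestSensorSample() { return VectorFloat( 3, 0.0 ); }

int main() {
    // Naive Bayes with null rejection: samples that don't resemble any
    // trained gesture are reported as class 0 ("no gesture")
    ANBC anbc;
    anbc.enableNullRejection( true );
    anbc.setNullRejectionCoeff( 3.0 );

    GestureRecognitionPipeline pipeline;
    pipeline.setClassifier( anbc );

    // Labeled tilt/shake examples recorded earlier ("TiltShakeData.grt" is a placeholder)
    ClassificationData trainingData;
    if( !trainingData.load( "TiltShakeData.grt" ) ) return EXIT_FAILURE;
    if( !pipeline.train( trainingData ) ) return EXIT_FAILURE;

    // Classify each new sensor sample as it arrives
    for( UINT frame = 0; frame < 1000; frame++ ) {
        VectorFloat sample = getLatestSensorSample();
        pipeline.predict( sample );
        UINT label = pipeline.getPredictedClassLabel();        // 0 = no gesture
        Float likelihood = pipeline.getMaximumLikelihood();    // confidence of the prediction
        // ...map the label and likelihood to sound, graphics, etc.
    }
    return EXIT_SUCCESS;
}
```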

1pm – 3pm: regression

  • Linear Regression, the basics
  • Demo (Sensor?): continuous filter mapping
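The regression session replaces the classifier with a regression module, so that sensor input maps to continuous output values instead of discrete labels. Below is a minimal sketch along those lines, mapping a 3-axis input to a single continuous parameter (e.g. a filter cutoff). The synthetic training pairs and the choice of LinearRegression are illustrative only; the workshop demo may use a neural network (MLP) instead.

```cpp
#include <GRT/GRT.h>
using namespace GRT;

int main() {
    // Regression data: 3-dimensional sensor input mapped to 1 continuous output
    RegressionData trainingData;
    trainingData.setInputAndTargetDimensions( 3, 1 );

    // In the demo these pairs are recorded live (hold a pose, set the knob,
    // store the pair); a few synthetic pairs keep the sketch self-contained
    for( UINT i = 0; i < 20; i++ ) {
        Float t = i / 20.0;
        VectorFloat input( 3 );
        input[0] = t; input[1] = 1.0 - t; input[2] = 0.0;
        VectorFloat target( 1 );
        target[0] = t;                       // e.g. filter cutoff in [0, 1]
        trainingData.addSample( input, target );
    }

    // A pipeline built around a regression module instead of a classifier
    GestureRecognitionPipeline pipeline;
    pipeline.setRegressifier( LinearRegression() );
    if( !pipeline.train( trainingData ) ) return EXIT_FAILURE;

    // Map a new sensor sample to a continuous control value
    VectorFloat sample( 3 );
    sample[0] = 0.25; sample[1] = 0.75; sample[2] = 0.0;
    pipeline.predict( sample );
    VectorFloat mapped = pipeline.getRegressionData();

    return EXIT_SUCCESS;
}
```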

3pm – 3:15pm: coffee break

3:15pm – 4pm: feature extraction, pre/post processing

  • Feature extraction
  • Demo: audio classification
  • Feature selection
  • Preprocessing: lowpass/highpass filtering, derivatives
  • Post processing: filtering labels, gesture timeouts
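In GRT, pre-processing, feature extraction, and post-processing are modules that slot into the same pipeline around the classifier. The sketch below shows roughly how that composition looks; the specific modules and constructor arguments are illustrative rather than recommendations, and exact signatures may differ between GRT versions.

```cpp
#include <GRT/GRT.h>
using namespace GRT;

int main() {
    const UINT numInputDimensions = 3;

    GestureRecognitionPipeline pipeline;

    // Pre-processing: smooth the raw sensor stream before it reaches the
    // classifier (a 5-sample moving-average filter on each dimension)
    pipeline.addPreProcessingModule( MovingAverageFilter( 5, numInputDimensions ) );

    // Feature extraction modules (e.g. an FFT for the audio-classification
    // demo) can be inserted in the same way:
    // pipeline.addFeatureExtractionModule( FFT( 512, 1, numInputDimensions ) );

    // The classifier sits in the middle of the pipeline
    pipeline.setClassifier( ANBC() );

    // Post-processing: only report a label once it has been predicted for
    // several consecutive frames, which removes single-frame glitches
    pipeline.addPostProcessingModule( ClassLabelFilter( 5, 10 ) );

    // Training and prediction then work exactly as in the earlier sketches:
    // pipeline.train( trainingData ); pipeline.predict( sample ); ...
    return EXIT_SUCCESS;
}
```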

4pm – 5:30pm: building larger systems

  • State machines
  • Combining discrete gesture recognition and continuous mapping
  • Solving hard problems simply
  • Gesture Spotting
  • What to do when things don’t work
  • Interactive Demo: Orb Zooming
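One common pattern for combining discrete gesture recognition with continuous mapping is to let the classifier drive a small state machine that decides when the continuous mapping is active. The sketch below shows that idea with two separate pipelines; the filenames, gesture labels, and getLatestSensorSample() helper are placeholders, and this illustrates the general pattern rather than the actual Orb Zooming demo.

```cpp
#include <GRT/GRT.h>
using namespace GRT;

// Placeholder sensor input so the sketch compiles standalone
VectorFloat getLatestSensorSample() { return VectorFloat( 3, 0.0 ); }

int main() {
    // Two pipelines: a classifier for discrete gestures and a regression
    // module for continuous mapping
    GestureRecognitionPipeline classifier, mapper;
    classifier.setClassifier( ANBC() );
    mapper.setRegressifier( LinearRegression() );

    // Training data recorded earlier (both filenames are placeholders)
    ClassificationData gestureData;
    RegressionData mappingData;
    if( !gestureData.load( "Gestures.grt" ) ) return EXIT_FAILURE;
    if( !mappingData.load( "Mapping.grt" ) ) return EXIT_FAILURE;
    if( !classifier.train( gestureData ) ) return EXIT_FAILURE;
    if( !mapper.train( mappingData ) ) return EXIT_FAILURE;

    // Minimal state machine: one gesture enters the zooming state, another leaves it
    enum class State { Idle, Zooming };
    State state = State::Idle;

    for( UINT frame = 0; frame < 1000; frame++ ) {
        VectorFloat sample = getLatestSensorSample();

        // Discrete layer: which gesture (if any) is the user performing?
        classifier.predict( sample );
        UINT label = classifier.getPredictedClassLabel();
        if( label == 1 ) state = State::Zooming;
        if( label == 2 ) state = State::Idle;

        // Continuous layer: while zooming, map the same sample to a control value
        if( state == State::Zooming ) {
            mapper.predict( sample );
            VectorFloat zoom = mapper.getRegressionData();
            // ...apply zoom[0] to the zoom level, a synth parameter, etc.
        }
    }
    return EXIT_SUCCESS;
}
```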

5:30 – 6pm: closing discussion, review of what we’ve covered, next steps, wrap up

6pm: end