Machine Learning for Artists and Designers: Designing real-time interactions using the Wekinator with Rebecca Fiebrink

Saturday-Sunday, Sept. 24-25, 2016
10:00 AM – 6:00 PM
The Frank-Ratchye STUDIO for Creative Inquiry
College of Fine Arts, Room 111
Organizer: Ali Momeni

*This workshop is invitation-only and not open to the public.*

Abstract

This two-day workshop, presented by Rebecca Fiebrink, will provide a hands-on introduction to the design of real-time, sensor-based interactions using the Wekinator machine learning software. The Wekinator is a free, open-source, GUI-based program for machine learning that allows users to easily develop new digital musical instruments and gesturally controlled games and animations, employ sound and video analysis in customised applications, and more. It integrates easily with many types of sensors, game controllers, and creative software tools (e.g., Max/MSP, Processing, openFrameworks) using the Open Sound Control (OSC) protocol.
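
To give a flavour of that OSC integration, here is a minimal sketch (not part of the workshop materials) that streams two feature values to the Wekinator from Python using the python-osc library. It assumes the Wekinator's documented defaults of listening on port 6448 for messages addressed to /wek/inputs; confirm these in your own project settings.

```python
# Minimal sketch: send input features to a running Wekinator instance over OSC.
# Assumes the python-osc library (pip install python-osc) and Wekinator's
# documented defaults: listening on port 6448 for /wek/inputs messages.
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 6448)  # host and port Wekinator listens on

# Stream two made-up feature values (e.g., a normalised x/y mouse position).
for step in range(100):
    x = (step % 50) / 50.0
    y = 1.0 - x
    client.send_message("/wek/inputs", [x, y])  # one OSC message per input frame
    time.sleep(0.05)  # roughly 20 frames per second
```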

The two days will include tutorials on machine learning and sensor processing, alternating with hands-on sessions in which participants will build their own sensor-based interactions. No prior machine learning or programming knowledge is required (though the sessions should still be valuable to machine learning experts who are interested in hands-on and creative applications, and to programmers who want to integrate machine learning into their own coding projects). Participants are encouraged to bring their own laptops (any operating system). You are also welcome to bring your own sensors and controllers (see http://www.wekinator.org/examples/ for out-of-the-box code connecting the Wekinator to other software and hardware), but you will be able to participate fully with only a laptop (or by sharing someone else’s).

About the instructor

Dr. Rebecca Fiebrink has been developing machine learning tools for creative practice since she released the first version of the Wekinator in 2008. She is a Lecturer (equivalent to an Assistant Professor in the USA) in Computing at Goldsmiths, University of London, where she teaches students from undergraduate through PhD level in creative computing, music computing, games programming, digital arts computing, and computer science. She is the creator of Machine Learning for Musicians and Artists, an online course offered on the Kadenze platform and the first MOOC about machine learning aimed at creative practitioners.

Workshop Scope

The workshop is suitable for anyone interested in applying machine learning to the creation of real-time interactive systems that use sensors, audio, or video. This includes new interactive art installations, gestural interaction in games, new digital musical instruments, and other systems that interpret or respond to human actions or the environment.

This will be a hands-on workshop, with about 8 hours of interactive exploration for each participant spread across the two days. Participants are encouraged to bring their own laptops.

The main focus of the hands-on activities will be creating customised real-time interactions that use webcams, mobile phone sensors, the mouse, and the Leap Motion to control sound and animation. All software needed to connect to these devices will be provided, so no coding is necessary. Participants also have the option of bringing other sensors and controllers (see http://www.wekinator.org/examples/ for out-of-the-box code connecting the Wekinator to other software and hardware). Any other hardware or software can be connected to the Wekinator using the Open Sound Control protocol.
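
Going the other direction, a sketch along the following lines could receive the trained model's outputs in Python and forward them to whatever sound or animation engine you are driving. It assumes python-osc and the Wekinator's documented default of sending outputs to port 12000 at the OSC address /wek/outputs; again, check your project settings.

```python
# Minimal sketch: listen for Wekinator's model outputs over OSC.
# Assumes python-osc and Wekinator's documented default of sending
# to port 12000 at the address /wek/outputs.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def handle_outputs(address, *values):
    # values holds one float per Wekinator output (e.g., synth parameters)
    print(f"{address}: {values}")

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", handle_outputs)

server = BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher)
print("Listening for Wekinator outputs on port 12000 ...")
server.serve_forever()  # Ctrl-C to stop
```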

Brief tutorials will cover several key areas of interactive machine learning, including:

  • What is machine learning? How is it useful in real-time interactions?
  • Classification (e.g., for sound and gesture labeling)
  • Regression (e.g., for building expressive continuous controllers; a short code sketch contrasting classification and regression appears below)
  • Temporal modeling (e.g., to detect when a gesture or action has happened)
  • How is machine learning for interaction different from more conventional machine learning applications?
  • Basic strategies to work with sensor, audio, and video data

(See the agenda below for more details.)
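
To make the classification/regression distinction concrete, the toy sketch below uses scikit-learn's k-nearest-neighbour models, purely for illustration (the Wekinator ships its own algorithm implementations behind its GUI): a classifier maps features to discrete labels, while a regressor maps them to continuous values.

```python
# Toy sketch contrasting classification (discrete labels) with regression
# (continuous outputs), using scikit-learn's k-nearest-neighbour models.
# Illustrative only; the Wekinator provides its own implementations.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor

# Four fake 2-D sensor readings (e.g., normalised hand positions).
X = np.array([[0.10, 0.20], [0.15, 0.25], [0.80, 0.90], [0.85, 0.80]])

# Classification: each example carries a discrete label (gesture 0 or 1).
clf = KNeighborsClassifier(n_neighbors=1).fit(X, [0, 0, 1, 1])
print(clf.predict([[0.2, 0.2]]))  # -> [0]: "this looks like gesture 0"

# Regression: each example carries a continuous target (a synth parameter).
reg = KNeighborsRegressor(n_neighbors=1).fit(X, [0.0, 0.1, 0.9, 1.0])
print(reg.predict([[0.2, 0.2]]))  # -> [0.1]: a smoothly controllable value
```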

Participant Prerequisites

No prior knowledge of machine learning or programming is required, although participants with machine learning expertise should still find the sessions interesting, and those who can do at least some programming will be able to integrate the Wekinator into their own software projects.

Participants will need to bring a laptop, or pair up with another participant with a laptop.

Tools

The workshop will use the following open-source tools throughout the two days:

  • The Wekinator toolkit for real-time, interactive machine learning
  • Example applications for connecting the Wekinator to sensors, audio, video, and existing software and hardware environments such as Arduino, Web apps, Max/MSP, ChucK, Processing, openFrameworks, …
  • Interactive classification and regression visualisation programs to explore different learning algorithms
  • The Wekinator Input Helper program for feature processing (a minimal smoothing sketch follows this list)
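
As a flavour of the kind of feature processing involved, here is a minimal, hypothetical sketch of one common step: smoothing a jittery sensor stream with a moving average before it reaches the learner. The Input Helper's actual processing options may differ; this is only an illustration of the idea.

```python
# Minimal sketch: smooth a noisy 1-D feature stream with a fixed-size
# moving average before sending it on to the Wekinator.
from collections import deque

class MovingAverage:
    """Average the last `size` samples of a feature stream."""
    def __init__(self, size=5):
        self.window = deque(maxlen=size)

    def __call__(self, sample):
        self.window.append(sample)
        return sum(self.window) / len(self.window)

smooth = MovingAverage(size=5)
for raw in [0.90, 0.10, 0.50, 0.52, 0.48]:  # jittery raw readings
    print(round(smooth(raw), 3))             # much steadier output
```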

Agenda

Day 1

  • Intro to machine learning
  • Intro to classification
  • Hands-on creation of simple real-time classifiers (e.g., facial expression recognition from your webcam, used to control sound)
  • How do different classifiers work, and how should you decide which one to use?
  • Hands-on experimentation with the Classifier Explorer

Break

  • Intro to regression and mappings
  • Hands-on creation of simple real-time mappings (e.g., expressive gestural control of sound)
  • Comparing regression algorithms
  • Hands-on experimentation with the Regression Explorer
  • Free-form hacking time

Day 2

The schedule for Day 2 will be determined by participants’ interests, and we’ll devote plenty of time to hacking. Depending on interest, we may offer short tutorials and guided hacks on the following topics:

  • Temporal modeling using dynamic time warping (a minimal sketch appears after this list)
  • More information about how different classifiers or regression algorithms work
  • Strategies for working with audio, video, and complex sensor data
  • Anything else people want to discuss
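
For those curious about dynamic time warping ahead of time, the sketch below shows the classic algorithm in miniature: it aligns two sequences that unfold at different speeds, so the same gesture performed quickly or slowly still scores as similar. This is a textbook implementation for illustration, not the Wekinator's own code.

```python
# Minimal sketch: classic dynamic time warping (DTW) on 1-D sequences,
# computing an alignment cost that tolerates differences in timing.
import math

def dtw_distance(a, b):
    """O(len(a) * len(b)) DTW distance between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local mismatch
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # step both
    return cost[n][m]

# The same up-down gesture performed at two speeds still scores as similar.
template = [0.0, 0.2, 0.6, 1.0, 0.6, 0.2, 0.0]
slow     = [0.0, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0, 0.8, 0.6, 0.3, 0.1, 0.0]
print(dtw_distance(template, slow))  # small cost despite different lengths
```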