David Abel

NIPS 2017

12/11/2017

I just returned from a wonderful (albeit a bit chaotic) trip to the 31st NIPS down in Long Beach, CA. As many folks have mentioned, the conference has continued to grow to an unprecedented scale, this year drawing 8,000+ attendees.

I took some notes on the talks I attended. These include statistics about the conference and the publication/review process (see pages 8-9), as well as more detailed descriptions of the individual talks.

Highlights

In addition to these notes, my five highlights from the conference are:

  1. John Platt's talk on energy, fusion, and the next 100 years of human civilization. Definitely worth a watch! He does a great job of framing the problem and building up to his research group's focus. I'm still optimistic that renewables can provide more than the proposed 40% of energy for the world.

  2. Kate Crawford's talk on bias in ML. As with Joelle Pineau's talk on reproducibility, Kate's talk comes at an excellent time to get folks in the community to think deeply about these issues as we build the next generation of tools and systems.

  3. Joelle Pineau's talk (no public link yet available) on reproducibility during the Deep RL Symposium.

  4. Ali Rahimi's test of time talk that caused a lot of buzz around the conference (the "alchemy" piece begins at the 11-minute mark). My takeaway is that Ali is calling for more rigor from our experimentation, methods, and evaluation (and not necessarily just more theory). In light of the findings presented in Joelle's talk, I feel compelled to agree with Ali (at least for Deep RL, where experimental methods are still in the process of being defined). In particular, I think with RL we should open up to other kinds of experimental analysis beyond just "which algorithm got the most reward on task X", and consider other diagnostic tools to understand our algorithms: when did it converge? how suboptimal is the converged policy? how well did it explore the space? how often did an algorithm find a really bad policy? why? where does it fail, and why? (A rough sketch of what a few such diagnostics might look like appears after this list.) Ali and Ben just posted a follow-up to their talk that's worth a read.

  5. The Hierarchical RL workshop! This event was a blast, in part because I love this area and find there to be so many open foundational questions, but also because the speaker lineup and poster collection were fantastic. When videos become available, I'll post links to some of my highlights, including the panel (see the end of my linked notes above for a rough transcript of the panel).
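As an aside on point 4: here's a rough, hypothetical sketch of what a few of those diagnostics might look like if you've logged per-episode returns during training. None of this is from any particular library; the function name, thresholds, and windowing choices are all made up for illustration.

```python
import numpy as np

def rl_diagnostics(episode_returns, optimal_return=None, tolerance=0.05, bad_threshold=None):
    """Toy diagnostics computed from logged per-episode returns.

    Goes beyond "which algorithm got the most reward": roughly when did
    learning converge, how far is the final policy from optimal (if known),
    and how often did the learner produce a really bad episode?
    """
    returns = np.asarray(episode_returns, dtype=float)

    # Final performance: average return over the last 10% of episodes.
    tail = max(1, len(returns) // 10)
    final_perf = returns[-tail:].mean()
    diagnostics = {"final_performance": final_perf}

    # Rough convergence point: first episode whose trailing average
    # lands within `tolerance` of the final performance.
    window = max(1, len(returns) // 20)
    trailing = np.convolve(returns, np.ones(window) / window, mode="valid")
    near_final = np.abs(trailing - final_perf) <= tolerance * max(abs(final_perf), 1e-8)
    diagnostics["convergence_episode"] = int(np.argmax(near_final)) if near_final.any() else None

    # Suboptimality of the converged policy, if the optimal return is known.
    if optimal_return is not None:
        diagnostics["suboptimality_gap"] = optimal_return - final_perf

    # How often did the learner end up with a really bad episode?
    if bad_threshold is not None:
        diagnostics["fraction_bad_episodes"] = float((returns < bad_threshold).mean())

    return diagnostics
```

Something like `rl_diagnostics(returns, optimal_return=1.0, bad_threshold=0.0)` then gives you a small dictionary to compare across algorithms and seeds. Exploration coverage would need state-visitation counts, which a return curve alone can't tell you.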

Misc. Thoughts

And a few other miscellaneous thoughts:

Cheers,
-Dave