
TensorFlow Probability : Hyperparameter Search

Presentation

Sam Witteveen and I started the TensorFlow and Deep Learning Singapore group on MeetUp in February 2017. The twenty-seventh MeetUp, aka TensorFlow 4K, was hosted by Google Singapore and celebrated the TensorFlow Group achieving over 4,000 members!

We were also delighted to welcome Yew Ken Chia as a new speaker at the TF&DL MeetUp : he gave a presentation describing the research he did on the TextGraphs-19 Shared Task, successfully submitted to the upcoming EMNLP-2019 TextGraphs workshop in Hong Kong : "Explanations for School Science Questions".

Sam's talk was a preview of the one he'll be giving at the TensorFlow World 2019 conference in California at the end of October : "Training Models at Scale with TPUs: Donuts, Pods and Slices".

My talk was touted as being about Hyperparameter Optimisation (using TensorFlow Probability), but it seemed natural to me to introduce the ideas of TF Probability by way of regression in stages (sketched in code after the list) :

  • Regular L2 linear regression
  • N(mx+c, sigma=1) regression by maximising the log-likelihood
  • N(mx+c, sigma=ax+d) regression by maximising the log-likelihood
  • Gaussian Processes for regression
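
To make the staged idea concrete, here is a minimal sketch (assuming TensorFlow 2.x and TensorFlow Probability, with invented toy data rather than anything from the talk) of the heteroscedastic stage, fitting N(mx+c, sigma=ax+d) by minimising the negative log-likelihood. With the scale fixed at 1, the same loss reduces (up to a constant) to ordinary L2 linear regression :

```python
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Toy data (invented for illustration): noise grows with x
x = np.linspace(0., 10., 200).astype(np.float32)
y = (2.0 * x + 1.0 + np.random.normal(scale=0.3 * x + 0.5)).astype(np.float32)

# Trainable parameters for N(m*x + c, sigma = softplus(a*x + d))
m, c = tf.Variable(0.0), tf.Variable(0.0)
a, d = tf.Variable(0.0), tf.Variable(1.0)

optimizer = tf.keras.optimizers.Adam(learning_rate=0.05)

@tf.function
def train_step():
    with tf.GradientTape() as tape:
        dist = tfd.Normal(loc=m * x + c,
                          scale=tf.nn.softplus(a * x + d))  # softplus keeps sigma > 0
        nll = -tf.reduce_mean(dist.log_prob(y))  # negative log-likelihood
        # With scale fixed at 1.0, this is the usual L2 loss up to a constant
    grads = tape.gradient(nll, [m, c, a, d])
    optimizer.apply_gradients(zip(grads, [m, c, a, d]))
    return nll

for step in range(1000):
    loss = train_step()
print("m, c:", m.numpy(), c.numpy(), " final NLL:", loss.numpy())
```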

That was then followed by using Gaussian Processes to decide where to sample good learning rates for SGD training runs of a CIFAR-10 model (a rough sketch of that step follows). Hopefully, it made sense to the audience...
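
For a flavour of that learning-rate search, here is a minimal sketch (not the notebook code; the observed accuracies and kernel settings are invented) that fits a TFP GaussianProcessRegressionModel to a few (log learning rate, validation accuracy) pairs and picks the next rate to try with a simple upper-confidence-bound rule :

```python
import numpy as np
import tensorflow_probability as tfp

tfd = tfp.distributions
tfk = tfp.math.psd_kernels

# Hypothetical observations so far: log10(learning rate) -> validation accuracy
observed_lr  = np.array([-4.0, -3.0, -2.0, -1.0], dtype=np.float64)[:, None]
observed_acc = np.array([0.55, 0.72, 0.80, 0.40], dtype=np.float64)

# Candidate log-learning-rates to score against the GP posterior
candidates = np.linspace(-5.0, 0.0, 101, dtype=np.float64)[:, None]

kernel = tfk.ExponentiatedQuadratic(amplitude=np.float64(0.2),
                                    length_scale=np.float64(1.0))

gprm = tfd.GaussianProcessRegressionModel(
    kernel=kernel,
    index_points=candidates,
    observation_index_points=observed_lr,
    observations=observed_acc,
    observation_noise_variance=np.float64(1e-3))

mean, stddev = gprm.mean().numpy(), gprm.stddev().numpy()

# Upper-confidence-bound acquisition: prefer high predicted accuracy,
# but give unexplored (high-uncertainty) regions a chance too
ucb = mean + 2.0 * stddev
next_log_lr = candidates[np.argmax(ucb), 0]
print("next learning rate to try:", 10.0 ** next_log_lr)
```

In practice the kernel amplitude and length-scale would usually be fitted by maximising the GP marginal likelihood rather than fixed by hand, and the UCB rule here is just one simple acquisition choice.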

The slides for my talk are here :

Presentation Screenshot

If there are any questions about the presentation, please ask below, or contact me using the details given on the slides themselves.

Presentation Content Example

and the notebooks are here :