Title: Serverless Machine Learning with TensorFlow
Date: 9:00am-4:30pm, 1/25, Friday
Instructor: Chris Rawles, Google
Chris is an ML Solutions Engineer at Google Cloud, where he teaches ML to Google customers and builds ML models using TensorFlow and Google Cloud. In his past work as a researcher, he used machine learning to study earthquakes.
Course Outline:
  • 9-10am: Module 1: Identify use cases for machine learning
  • 10-11am: Module 2: Explore a dataset, create ML datasets, and create a benchmark
  • 11-12pm: Module 3: Getting started with TensorFlow
    • Use of tf.estimator
    • Dealing with input data
    • Performing feature engineering
    • Building and training models
    • Lab
  • 12-1:30pm: Lunch break
  • 1:30-2:30pm: Module 4: Distributed training and monitoring
  • 2:30-3:00pm: Module 5: Productionize trained ML models and scale up ML
  • 3:00-4:00pm: Module 6: Advanced feature engineering and combining features
  • 4:00-4:30pm: Module 7: Hyperparameter tuning
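In the course, the hyperparameter tuning of Module 7 runs on managed Google Cloud infrastructure; the underlying idea can be sketched in plain Python. The objective function below is a made-up stand-in for a model's validation error, not part of the course materials:

```python
import itertools

# Stand-in objective: a fake validation error as a function of two
# hyperparameters; a real run would train one model per combination.
def validation_error(learning_rate, num_layers):
    return (learning_rate - 0.01) ** 2 + (num_layers - 3) ** 2 * 1e-4

grid = {
    "learning_rate": [0.001, 0.01, 0.1],
    "num_layers": [2, 3, 4],
}

# Exhaustive grid search: evaluate every combination, keep the best.
best = min(
    (dict(zip(grid, combo)) for combo in itertools.product(*grid.values())),
    key=lambda params: validation_error(**params),
)
print(best)  # → {'learning_rate': 0.01, 'num_layers': 3}
```

A tuning service replaces this exhaustive loop with a smarter search (e.g. Bayesian optimization) and runs the trials in parallel.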
Who should learn: Developers and data scientists who are working on machine learning or deep learning.
Level: Beginner to Intermediate
Prerequisite: The following are preferred but not required.
  • Experience using Python
  • Basic proficiency with a common query language such as SQL
  • A working knowledge of data modeling and extract, transform, load activities
  • Basic familiarity with machine learning and/or statistics
  • Join Slack (#ainextconsea19) for discussion.
Title: Accelerating AI through Automated ML
Date: 9am-12pm, 1/25, Friday
Instructor: Sujatha Sagiraju, Microsoft
Course Outline: As a data scientist, for a given machine learning problem you run multiple ML models to find the right one. For a classification problem, for example, you might run your data through many different classifiers, such as SVM, logistic regression, or boosted decision trees. In addition, you try many different hyperparameters, such as the learning rate or the depth of the tree. There is no way to find out which combination of model and hyperparameters is best other than trying many combinations manually. That is a lot of training jobs and manual tuning before you find an optimal model with performance characteristics you are satisfied with. AutoML uses intelligent optimization techniques to build a high-quality model, and it does so through a simple interface: a single method call.
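The manual model-and-hyperparameter search described above can be sketched with scikit-learn's grid search. This is an illustrative sketch of the tedium that AutoML automates, not the Azure AutoML API itself; the candidate models, parameter grids, and synthetic data are all assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic data standing in for a real classification dataset.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Manually enumerating model/hyperparameter combinations -- the search
# that an AutoML service hides behind a single method call.
candidates = [
    (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    (SVC(), {"C": [0.1, 1.0], "kernel": ["linear", "rbf"]}),
]

best_score, best_model = -1.0, None
for model, grid in candidates:
    search = GridSearchCV(model, grid, cv=3)
    search.fit(X, y)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(type(best_model).__name__, round(best_score, 3))
```

Even this small example launches a dozen cross-validated training runs; real problems multiply that by many more model families and parameter values.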
  • 9-10am: Module 1: Intro and overview
  • 10-11:30am: Module 2: Code lab
    • Installation and Configuration
    • Using Azure ML
    • Inspect and Experience Models
    • Deploy Models
  • 11:30-12pm: Module 3: Summary and Best practices
Who should learn: Data scientists, developers, BI professionals, analysts
Level: Beginner to Intermediate
Prerequisite:
  • In order to attend the Azure Machine Learning workshop, users should have an Azure subscription with Contributor or above access. If you do not have an Azure subscription, you can get a free trial here: https://azure.microsoft.com/en-us/free/
  • You should also create a free account with Azure Notebooks: https://notebooks.azure.com/
Title: Fast and lean data science with TensorFlow, Keras and TPUs
Date: 9:00am-12:00pm, 1/26, Saturday
Instructor: Martin Gorner, Google
Course Outline:

Training deep learning models used to be a game of patience. Training runs took hours, and you needed hundreds of them to tune a model. Today, as software engineers add machine learning to their skill sets, they need to work faster because they have products to ship. Making use of the cloud is a big component of that, since it provides nearly unlimited compute resources, but it introduces costs that you need to keep an eye on.

This workshop gets you up and running designing, training, and deploying state-of-the-art vision models in minutes instead of hours by using Google's Tensor Processing Units (TPUs). We will also share tips and best practices for working with TPUs using modern Keras and TensorFlow 2.0-ready code.

  • 9:00-10:00am: Module 1: Get your data ready for TPU-speed training
    • Use tf.data.Dataset
    • Explore data in TF's eager mode
    • Package data in TFRecords
    • Sharding best practices for extreme performance
  • 10:00-10:30am: Module 2: First vision model: transfer learning
    • Use transfer learning
    • Build a Keras classifier
  • 10:30-11:30am: Module 3: Design and improve your own model
    • Train on TPUs; use convolutional layers, max pooling, and strides; use the Keras functional style; use modern vision architectures such as SqueezeNet
  • 11:30-12:00pm: Module 4: Scale up: multiple trainings at once on powerful infrastructure
    • Use ML Engine to accelerate iterative research on models; train, deploy, and test a detection model; experiment with different hardware setups (multi-GPU, TPU)
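The tf.data-plus-Keras workflow from Modules 1 and 2 can be sketched roughly as follows, assuming TensorFlow 2.x. The synthetic data and layer sizes are illustrative, and a real TPU run would additionally wrap model construction in a TPU distribution strategy:

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in for an image dataset: 100 "images" of 8x8 pixels, 3 classes.
x = np.random.rand(100, 8, 8, 1).astype("float32")
y = np.random.randint(0, 3, size=100)

# Package the data as a tf.data.Dataset, the input format TPU training reads from.
dataset = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(100).batch(16)

# A small Keras classifier in the same shape as the workshop's vision models.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8, 8, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(dataset, epochs=1, verbose=0)

# One row of class probabilities per input image.
probs = model.predict(x[:4], verbose=0)
print(probs.shape)
```

On real image data the tf.data pipeline would read sharded TFRecord files rather than in-memory arrays, which is what makes TPU-speed input possible.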
Who should learn: Developers who would like to add deep learning skills to their skill set. Prior deep learning experience is welcome, but all base concepts will be re-explained. The code is Keras in Python. This session focuses on practical data science tasks like model development, training, and deployment as part of a regular software development project where efficiency and agility are key.
Level: Beginner to Intermediate
Prerequisite: The workshop requires good proficiency in general software development and some basic Python skills. Prior deep learning experience is welcome, but all base concepts will be re-explained.
  • Join Slack (#ainextconsea19) for discussion.
Title: Build and Manage Machine Learning Pipelines
Date: 1:30pm-4:30pm, 1/26, Saturday
Instructor: Amy Unruh, Google
Course Outline: Kubeflow is an open source Kubernetes-native platform for developing, orchestrating, deploying, and running scalable and portable ML workloads — including support for distributed training, data preprocessing and feature engineering, scalable serving, and more. It supports reproducibility and collaboration in ML workflow lifecycles, allowing you to manage end-to-end orchestration of ML pipelines, to run your workflow in multiple or hybrid environments (such as swapping between on-premises and cloud building blocks, depending upon context), and to reuse building blocks easily across different workflows.

In this workshop, we will take a deep dive into building and managing machine learning workflows that can scale, using Kubeflow.
  • 1:30-2:00pm: Module 1: Intro and overview
  • 2:00-4:00pm: Module 2: Code labs
    • Installation and Configuration
    • Construct an ML workflow
    • Run an ML workflow
    • Both from the UI and from a Jupyter notebook
  • 4:00-4:30pm: Module 3: Summary and Best practices
Who should learn: Data scientists and engineers, developers, ML engineers
Level: Beginner to Intermediate
Prerequisite:
  • Join Slack (#ainextconsea19) for discussion.