MLG 024 Tech Stack


Try a walking desk to stay healthy while you study or work!

Notes and resources at ocdevel.com/mlg/24

Hardware

Desktop if you're stationary, as you'll get the best performance bang-for-buck and improved longevity; laptop if you're mobile.

Desktops. Build your own PC; it's better value than pre-built. See PC Part Picker, and make sure to get an Nvidia graphics card. Generally shoot for the second-best tier of CPUs/GPUs, e.g. an RTX 4070 currently (2024-01), which offers better value for the price than the 4080 and up.

For laptops, see this post (updated).

OS / Software

Use Linux (I prefer Ubuntu), or Windows with WSL2 and Docker. See mla/12 for details.

Programming Tech Stack

Deep-learning frameworks. You'll use both TensorFlow and PyTorch eventually, so don't get hung up on the choice. See mlg/9 for details.

  1. TensorFlow (and/or Keras)
  2. PyTorch (and/or Lightning)

Shallow-learning / utilities: scikit-learn, Pandas, NumPy
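To illustrate how these utilities fit together, here is a minimal sketch (the CSV path and column names are hypothetical) that loads tabular data with Pandas, hands NumPy arrays to scikit-learn, and fits a shallow model:

```python
# Minimal sketch of the shallow-learning / utility stack (hypothetical CSV and columns).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("data.csv")                  # Pandas: load and wrangle tabular data
X = df.drop(columns=["label"]).to_numpy()     # NumPy arrays are what models consume
y = df["label"].to_numpy()

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = RandomForestClassifier().fit(X_train, y_train)   # scikit-learn: shallow learning
print("test accuracy:", model.score(X_test, y_test))
```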

Cloud-hosting: AWS / GCP / Azure. See mla/13 for details.

Episode Summary

The episode discusses setting up a tech stack tailored for machine learning, emphasizing the need to choose a primary programming language and framework; here, Python and TensorFlow. The choice is supported by the ongoing popularity of and community support for these tools, and further driven by the need for GPU acceleration, which TensorFlow provides on Nvidia hardware via CUDA.
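As a quick sanity check of the GPU point, here is a sketch using the TensorFlow 2.x API (the episode predates TF 2, so this differs from the code discussed on air) to confirm that CUDA-backed GPUs are visible and usable:

```python
# Sketch: confirm TensorFlow can see a CUDA-capable Nvidia GPU (TensorFlow 2.x API).
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)

if gpus:
    with tf.device("/GPU:0"):                 # place ops on the first GPU (CUDA/cuDNN)
        x = tf.random.normal((1000, 1000))
        y = tf.matmul(x, x)                   # executed on the GPU
    print("matmul ran on:", y.device)
```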

A notable change in the landscape is the decline of certain deep-learning frameworks such as Theano and the rise of competitors like PyTorch, which is gaining traction due to its ease of use compared to TensorFlow. The author emphasizes the importance of selecting frameworks with robust community support and resources, highlighting TensorFlow's lead in the market in this respect.
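To give a flavor of the ease-of-use argument, here is a minimal PyTorch sketch; the model is a plain Python object and the training loop is ordinary imperative code (the shapes, data, and hyperparameters are arbitrary):

```python
# Sketch of PyTorch's imperative style: a tiny regression model and training loop.
import torch
import torch.nn as nn

model = nn.Linear(10, 1)                                  # a plain Python object
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

X, y = torch.randn(64, 10), torch.randn(64, 1)            # arbitrary dummy data
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)                           # forward pass is a function call
    loss.backward()                                       # autograd computes gradients
    optimizer.step()
print("final loss:", loss.item())
```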

For hardware, the suggestion is a custom-built PC with a powerful Nvidia GPU, such as the GTX 1080 Ti, running Ubuntu Linux for best compatibility. For those who favor cloud services, Amazon Web Services (AWS) and Google Cloud Platform (GCP) are viable options, with a preference for GCP due to cost and performance benefits, particularly with the then-upcoming Tensor Processing Units (TPUs).

On the software side, Pandas for data manipulation, NumPy for mathematical operations, and scikit-learn for shallow-learning tasks together provide a comprehensive toolkit for machine learning development. Additionally, abstraction libraries are recommended: Keras for simplifying TensorFlow syntax and TensorForce for reinforcement learning.
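For a sense of what the Keras abstraction buys, here is a minimal sketch of defining, compiling, and fitting a small network (layer sizes and data are arbitrary); the equivalent low-level TensorFlow graph code is considerably more verbose:

```python
# Sketch: a small feed-forward network via the Keras abstraction (arbitrary sizes, dummy data).
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

X = np.random.rand(128, 20)
y = np.random.randint(0, 2, size=(128, 1))
model.fit(X, y, epochs=3, batch_size=32, verbose=0)       # fit/predict interface, like sklearn
print(model.predict(X[:3], verbose=0))
```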

The episode further explores system architectures, suggesting a separation of concerns between a web app server and a machine learning (job) server. Communication between these components can be efficiently managed using a message queuing system like RabbitMQ, with Celery as a potential abstraction layer.
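A minimal sketch of that separation, assuming a RabbitMQ broker running locally: the worker below lives on the ML (job) server, while the web app only enqueues tasks by name. The inference body is a placeholder, not the episode's actual model code.

```python
# tasks.py, running on the ML (job) server. Sketch assuming a local RabbitMQ broker.
from celery import Celery

app = Celery("ml_jobs", broker="amqp://localhost", backend="rpc://")

@app.task
def predict(features):
    # Placeholder inference so the sketch runs; swap in real model-loading/prediction code.
    return sum(features)

# On the web-app server, enqueue the job and (optionally) wait for the result:
#   result = predict.delay([1.0, 2.0, 3.0])
#   prediction = result.get(timeout=30)
# Start the worker on the ML server with:  celery -A tasks worker
```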

To support developers in implementing their machine learning pipelines, the recommendations extend to leveraging existing datasets, using scikit-learn for convenient access to them, and standardizing data for effective training results. The author points to several books and resources to assist in understanding and applying these technologies, ending with workstation recommendations and, as a potential advanced optimization, building TensorFlow from source for performance gains.
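As a sketch of the dataset and standardization advice, scikit-learn bundles small example datasets and a StandardScaler that rescales features to zero mean and unit variance before training:

```python
# Sketch: load a bundled scikit-learn dataset and standardize features before training.
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)       # convenient built-in dataset
scaler = StandardScaler().fit(X)        # learn per-feature mean and standard deviation
X_std = scaler.transform(X)             # rescale to zero mean, unit variance

print("feature means (~0):", X_std.mean(axis=0).round(3))
print("feature stds  (~1):", X_std.std(axis=0).round(3))
```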
