MLG 033 Transformers
Links:
- Notes and resources at ocdevel.com/mlg/33
- 3Blue1Brown videos: https://3blue1brown.com/
- Try a walking desk to stay healthy & sharp while you learn & code
- Try Descript for audio/video editing with AI power-tools
- RNN Limitations: Sequential processing prevents full parallelization—even with attention tweaks—making them inefficient on modern hardware.
- Breakthrough: “Attention Is All You Need” replaced recurrence with self-attention, unlocking massive parallelism and scalability.
- Layer Stack: Alternating self-attention and feed-forward (MLP) layers, each wrapped in residual connections and layer normalization (a minimal block sketch appears after this list).
- Positional Encodings: Since self-attention is permutation-invariant, sinusoidal or learned positional embeddings are added to inject sequence order (see the positional-encoding sketch after this list).
- Q, K, V Explained:
  - Query (Q): The representation of the token seeking contextual info.
  - Key (K): The representation of tokens being compared against.
  - Value (V): The information to be aggregated based on the attention scores.
- Multi-Head Attention: Splits Q, K, V into multiple “heads” to capture diverse relationships and nuances across different subspaces (see the head split/merge sketch after this list).
- Dot-Product & Scaling: Computes similarity between Q and K, scaled by the square root of the key dimension so the softmax doesn’t saturate, then applies softmax to weigh V accordingly (see the attention sketch after this list).
- Causal Masking: In autoregressive models, prevents a token from “seeing” future tokens, ensuring proper generation.
- Padding Masks: Ignore padded (non-informative) parts of sequences to maintain meaningful attention distributions.
- Transformation & Storage: Post-attention MLPs apply non-linear transformations; many argue they’re where the “facts” or learned knowledge really get stored.
- Depth & Expressivity: The stacked MLP layers deepen the model’s capacity to represent complex patterns.
- Residual Links: Crucial for gradient flow in deep architectures, preventing vanishing/exploding gradients.
- Layer Normalization: Stabilizes training by normalizing across features, enhancing convergence.
- Parallelization Advantage: Entire architecture is designed to exploit modern parallel hardware, a huge win over RNNs.
- Complexity Trade-offs: Self-attention’s quadratic complexity with sequence length remains a challenge and has spurred innovations like sparse or linearized attention.
- Pretraining & Fine-Tuning: Massive self-supervised pretraining on diverse data, followed by task-specific fine-tuning, is the norm.
- Emergent Behavior: With scale comes abilities like in-context learning and few-shot adaptation, aspects that are still being unpacked.
- Distributed Representation: “Facts” aren’t stored in a single layer but are embedded throughout both attention heads and MLP layers.
- Debate on Attention: While some see attention weights as interpretable, a growing view is that real “knowledge” is diffused across the network’s parameters.
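
The sketches below are minimal, illustrative NumPy examples, not code from the episode; function and variable names are placeholders. First, the sinusoidal positional encodings mentioned above: each position gets a fixed pattern of sines and cosines at different frequencies, added to the token embeddings so the otherwise order-blind attention layers can see sequence position.

```python
import numpy as np

def sinusoidal_positions(seq_len: int, d_model: int) -> np.ndarray:
    """Return a (seq_len, d_model) matrix of sinusoidal positional encodings."""
    positions = np.arange(seq_len)[:, None]        # (seq_len, 1)
    dims = np.arange(d_model)[None, :]             # (1, d_model)
    # Each pair of dimensions shares a frequency: 1 / 10000^(2i / d_model)
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates               # (seq_len, d_model)
    encoding = np.zeros((seq_len, d_model))
    encoding[:, 0::2] = np.sin(angles[:, 0::2])    # even dimensions: sine
    encoding[:, 1::2] = np.cos(angles[:, 1::2])    # odd dimensions: cosine
    return encoding

# Usage: added to token embeddings before the first attention layer, e.g.
# embeddings = token_embeddings + sinusoidal_positions(seq_len, d_model)
```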
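Next, a sketch of scaled dot-product attention for a single head. The optional mask argument covers both causal and padding masking: masked positions get a large negative score, so they receive near-zero weight after the softmax. Shapes and the toy data are illustrative assumptions.

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    x = x - x.max(axis=axis, keepdims=True)        # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Q, K, V: (seq_len, d_k) arrays; mask: (seq_len, seq_len) boolean, True = blocked."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # each query's similarity to every key
    if mask is not None:
        scores = np.where(mask, -1e9, scores)      # blocked positions -> ~zero softmax weight
    weights = softmax(scores, axis=-1)             # attention distribution over keys
    return weights @ V                             # weighted sum of values

# Causal mask: position i may only attend to positions <= i
seq_len, d_k = 5, 8
causal_mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((seq_len, d_k)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V, mask=causal_mask)   # (seq_len, d_k)
```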
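Multi-head attention is largely a reshaping trick around that same computation: the model dimension is split into several smaller subspaces, each head runs attention in its own subspace, and the head outputs are concatenated and projected back. A minimal split/merge sketch, with the per-head and output projections omitted:

```python
import numpy as np

def split_heads(x: np.ndarray, num_heads: int) -> np.ndarray:
    """Reshape (seq_len, d_model) -> (num_heads, seq_len, d_head)."""
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    return x.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

def merge_heads(x: np.ndarray) -> np.ndarray:
    """Reshape (num_heads, seq_len, d_head) -> (seq_len, d_model)."""
    num_heads, seq_len, d_head = x.shape
    return x.transpose(1, 0, 2).reshape(seq_len, num_heads * d_head)

# Round trip: splitting then merging recovers the original tensor
x = np.arange(24.0).reshape(6, 4)
assert np.allclose(merge_heads(split_heads(x, num_heads=2)), x)
```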
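Finally, one Transformer block as described above: a self-attention sublayer and a position-wise MLP, each wrapped in a residual connection and layer normalization (post-norm layout, as in the original paper). To stay self-contained, this sketch omits the learned layer-norm scale/shift and takes the attention sublayer as a stand-in function.

```python
import numpy as np

def layer_norm(x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Normalize each token's features to zero mean / unit variance (scale/shift omitted)."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def mlp(x, W1, b1, W2, b2):
    """Position-wise feed-forward: expand, ReLU non-linearity, project back."""
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2

def transformer_block(x, attention_fn, W1, b1, W2, b2):
    """One block: attention and MLP sublayers, each with a residual link plus layer norm."""
    x = layer_norm(x + attention_fn(x))            # residual around self-attention
    x = layer_norm(x + mlp(x, W1, b1, W2, b2))     # residual around the MLP
    return x

# Toy usage with an identity "attention" stand-in, just to show shapes flow through
seq_len, d_model, d_ff = 5, 8, 32
rng = np.random.default_rng(0)
x = rng.standard_normal((seq_len, d_model))
W1, b1 = rng.standard_normal((d_model, d_ff)) * 0.1, np.zeros(d_ff)
W2, b2 = rng.standard_normal((d_ff, d_model)) * 0.1, np.zeros(d_model)
y = transformer_block(x, lambda t: t, W1, b1, W2, b2)   # (seq_len, d_model)
```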