Bronwen Aker - harnessing AI for improving your workflows
Manage episode 478406337 series 58350
Guest Info:
Name: Bronwen Aker
Contact Information: https://br0nw3n.com/
Time Zone(s): Pacific, Central, Eastern
–Copy begins–
Disclaimer: The views, information, or opinions expressed on this program are solely the views of the individuals involved and by no means represent absolute facts. Opinions expressed by the host and guests can change at any time based on new information and experiences, and do not represent views of past, present, or future employers.
Recorded: https://youtube.com/live/guhM8v8Irmo?feature=share
Show Topic Summary: By harnessing AI, we can proactively discover evolving threats, safeguard sensitive data, analyze data at scale, and build smarter defenses. This week, we’ll be joined by Bronwen Aker, who will share invaluable insights on creating a local AI tailored to your unique needs. Get ready to embrace innovation, transform your work life, and contribute to a safer digital world with the power of artificial intelligence! (heh, I wrote this with the help of AI…)
Questions and topics: (please feel free to update or make comments for clarifications)
Things that concern Bronwen about AI (https://br0nw3n.com/2023/12/why-i-am-and-am-not-afraid-of-ai/):
Data Amplification: Generative AI models require vast amounts of data for training, leading to increased data collection and storage. This amplifies the risk of unauthorized access or data breaches, further compromising personal information.
Data Inference: LLMs can deduce sensitive information even when not explicitly provided. They may inadvertently disclose private details by generating contextually relevant content, infringing on individuals’ privacy.
Deepfakes and Misinformation: Generative AI can generate convincing deepfake content, such as videos or audio recordings, which can be used maliciously to manipulate public perception or deceive individuals. (Elections, anyone?)
Bias and Discrimination: LLMs may inherit biases present in their training data, perpetuating discrimination and privacy violations when generating content that reflects societal biases.
Surveillance and Profiling: The utilization of LLMs for surveillance purposes, combined with big data analytics, can lead to extensive profiling of individuals, impacting their privacy and civil liberties.
Setting up a local LLM: CPU models vs. GPU models, pros/cons? Benefits?
What can people do if they lack local resources? Cloud instances? EC2? DigitalOcean? Use a smaller model?
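One way to decide between running locally and renting cloud GPU time is a back-of-the-envelope memory estimate: weights take roughly (parameter count × bits per weight ÷ 8) bytes, plus some headroom for the KV cache and activations. The helper below is a rough rule of thumb (the 20% overhead factor is an assumption, not a vendor figure), but it shows why a 4-bit-quantized 7B model fits on a laptop while a 16-bit 70B model does not.

```python
def model_memory_gb(params_billion: float, bits_per_weight: int,
                    overhead: float = 1.2) -> float:
    """Rough RAM/VRAM needed to load a model's weights.

    params_billion: parameter count in billions (e.g. 7 for a 7B model)
    bits_per_weight: precision after quantization (16 = fp16, 4 = 4-bit)
    overhead: fudge factor (~20%) for KV cache and activations
    """
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return round(weight_bytes * overhead / 1e9, 1)

# A 7B model quantized to 4 bits: ~4.2 GB -> fine on CPU with 16 GB RAM.
print(model_memory_gb(7, 4))
# A 70B model at fp16: ~168 GB -> multi-GPU or cloud territory.
print(model_memory_gb(70, 16))
```

This is also the arithmetic behind the "use a smaller model" advice: dropping from fp16 to 4-bit quantization cuts the footprint roughly 4x before you ever touch hardware.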
https://www.theregister.com/2025/04/12/ai_code_suggestions_sabotage_supply_chain/
AI coding assistants are hallucinating package names
5.2 percent of package suggestions from commercial models didn't exist, compared to 21.7 percent from open source or openly available models
Attackers can then create malicious packages matching the invented names; some are quite convincing, with READMEs, fake GitHub repos, even blog posts
An evolution of typosquatting dubbed “slopsquatting” by Seth Michael Larson of the Python Software Foundation
Threat actor "_Iain" posted instructions and videos showing how to use AI to mass-generate fake packages, from creation through exploitation
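One lightweight mitigation is to check model-suggested package names against the registry before installing anything. The sketch below uses PyPI's public JSON endpoint (`https://pypi.org/pypi/<name>/json`, which returns 404 for unknown projects); the second function takes the checker as a parameter so it can be tested offline. The example names are made up.

```python
import urllib.request
import urllib.error

def exists_on_pypi(name: str) -> bool:
    """True if `name` resolves to a real PyPI project (HTTP 200 on its JSON endpoint)."""
    try:
        with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json", timeout=10):
            return True
    except urllib.error.HTTPError:
        return False  # 404 -> the model likely hallucinated this name

def vet_suggestions(names, exists=exists_on_pypi):
    """Split suggested package names into (known, suspect) lists."""
    known, suspect = [], []
    for n in names:
        (known if exists(n) else suspect).append(n)
    return known, suspect
```

Caveat: existence alone proves nothing once attackers have registered the hallucinated name, which is the whole slopsquatting play. Treat this as a tripwire for obviously invented names, and still verify provenance (maintainer, repo, download history) before installing anything a model suggested.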
Additional information / pertinent links (Would you like to know more?):
https://br0nw3n.com/2024/06/llms-and-prompt-engineering/ - Prompt Engineering talk
https://br0nw3n.com/wp-content/uploads/LLM-Prompt-Engineering-LayerOne-May-2024.pdf (slides)
Daniel Miessler ‘Fabric’ - https://github.com/danielmiessler/fabric
https://www.reddit.com/r/LocalLLaMA/comments/16y95hk/a_starter_guide_for_playing_with_your_own_local_ai/
Ollama tutorials (Matt Williams, co-founder of Ollama): https://www.youtube.com/@technovangelist
https://www.whiterabbitneo.com/ - AI for DevSecOps, Security
https://blogs.nvidia.com/blog/what-is-retrieval-augmented-generation/
https://www.youtube.com/watch?v=OuF3Q7jNAEc - never-ending story using an LLM
Show points of Contact:
Amanda Berlin: https://www.linkedin.com/in/amandaberlin/
Brian Boettcher: https://www.linkedin.com/in/bboettcher96/
Bryan Brake: https://linkedin.com/in/brakeb
Brakesec Website: https://www.brakeingsecurity.com
YouTube channel: https://youtube.com/@brakeseced
Twitch Channel: https://twitch.tv/brakesec
463 episodes