Roger Basler De Roca Podcasts

 
"Hello SundAI - Our World Through the Lens of AI" is your twice-weekly dive into how artificial intelligence shapes our digital landscape. Hosted by Roger and SundAI the AI, this podcast brings you practical tips, cutting-edge tools, and insightful interviews every Sunday and Wednesday morning. Whether you're a seasoned tech enthusiast or just starting to explore the digital domain, tune in to discover innovative ways to get things done and propel yourself forward in a world increasingly dr ...
 
 
In this episode, we enter the world of Large Reasoning Models (LRMs). We explore advanced AI systems such as OpenAI’s o1/o3, DeepSeek-R1, and Claude 3.7 Sonnet Thinking—models that generate detailed "thinking processes" (Chain-of-Thought, CoT) with built-in self-reflection before answering. These systems promise a new era of problem-solving. Yet, t…
 
Join us as we dive into a groundbreaking study that systematically investigates the strengths and fundamental limitations of Large Reasoning Models (LRMs), the cutting-edge AI systems behind advanced "thinking" mechanisms like Chain-of-Thought with self-reflection. Moving beyond traditional, often contaminated, mathematical and coding benchmarks, t…
 
In this show, we break down the art of crafting prompts that help AI deliver precise, useful, and reliable results. Whether you're summarising text, answering questions, generating code, or translating content — we’ll show you how to guide LLMs effectively. We explore real-world techniques, from simple zero-shot prompts to advanced strategies like …
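The episode above contrasts simple zero-shot prompts with richer strategies. As a small illustration (not taken from the episode), here is a minimal sketch of how a zero-shot and a few-shot prompt differ when assembled as plain strings; the sentiment task and the demonstration pairs are invented for this example:

```python
# Minimal sketch of zero-shot vs. few-shot prompt construction.
# The task wording and examples below are hypothetical; calling an
# actual LLM API is out of scope here.

def zero_shot_prompt(task: str, text: str) -> str:
    """A bare instruction with no worked examples."""
    return f"{task}\n\nInput: {text}\nOutput:"

def few_shot_prompt(task: str, examples: list[tuple[str, str]], text: str) -> str:
    """The same instruction, preceded by (input, output) demonstrations."""
    demos = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
    return f"{task}\n\n{demos}\n\nInput: {text}\nOutput:"

task = "Classify the sentiment of the input as positive or negative."
examples = [("I loved this film.", "positive"), ("Terrible service.", "negative")]

print(zero_shot_prompt(task, "The battery died after an hour."))
print(few_shot_prompt(task, examples, "The battery died after an hour."))
```

The only difference between the two styles is the block of demonstrations; everything else about guiding the model lives in the instruction text itself.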
 
Has AI finally passed the Turing Test? Dive into the groundbreaking news from UC San Diego, where research published in March 2025 claims that GPT-4.5 convinced human judges it was a real person 73% of the time, even more often than actual humans in the same test. But what does this historic moment truly signify for the future of artificial intelli…
 
A 145-page paper from Google DeepMind outlines their strategic approach to managing the risks and responsibilities of AGI development. 1. Defining AGI and ‘Exceptional AGI’ We begin by clarifying what DeepMind means by AGI: an AI system capable of performing any task a human can. More specifically, they introduce the notion of ‘Exceptional AGI’ –…
 
This academic paper from Anthropic provides an empirical analysis of how artificial intelligence, specifically their Claude model, is being used across the economy. The researchers developed a novel method to analyse millions of Claude conversations and map them to tasks and occupations listed in the US Department of Labor's O*NET database. Their f…
 
A study by the Columbia Journalism Review investigated the ability of eight AI search engines to accurately cite news sources. The findings revealed significant shortcomings across all tested platforms, including a tendency to provide incorrect information with unwarranted confidence and fabricate citations or link to incorrect versions of articles…
 
The Byte Latent Transformer (BLT) is a novel byte-level large language model (LLM) that processes raw byte data by dynamically grouping bytes into entropy-based patches, eliminating the need for tokenization. Dynamic Patching: BLT segments data into variable-length patches based on entropy, allocating more computation where complexity is higher—unl…
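To make the dynamic-patching idea concrete, here is a toy sketch (not BLT's actual method, which uses a small learned language model to score entropy): segment a byte string by starting a new patch whenever a byte's surprisal under a simple unigram frequency model exceeds a threshold. The frequency model and the threshold value are stand-ins invented for this illustration:

```python
# Toy illustration of entropy-based patching, loosely in the spirit of BLT.
# Assumption: a global unigram byte-frequency model stands in for BLT's
# learned entropy model, and the threshold 4.0 is arbitrary.
import math
from collections import Counter

def byte_surprisals(data: bytes) -> list[float]:
    """Surprisal (in bits) of each byte under a unigram frequency model."""
    counts = Counter(data)
    total = len(data)
    return [-math.log2(counts[b] / total) for b in data]

def entropy_patches(data: bytes, threshold: float = 4.0) -> list[bytes]:
    """Start a new patch at each high-surprisal byte, so rare (complex)
    regions get shorter patches and hence more compute per byte."""
    surprisals = byte_surprisals(data)
    patches, current = [], bytearray()
    for b, s in zip(data, surprisals):
        if current and s > threshold:
            patches.append(bytes(current))
            current = bytearray()
        current.append(b)
    if current:
        patches.append(bytes(current))
    return patches

data = b"aaaaaaaaXaaaaaaaaYaaaa"
patches = entropy_patches(data)
assert b"".join(patches) == data  # patches exactly partition the input
```

Here the rare bytes `X` and `Y` trigger patch boundaries while the long runs of `a` stay cheap, which is the core intuition: computation is allocated where the byte stream is hard to predict.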
 
Today we discuss a recent study that demonstrates specification gaming in reasoning models, where AI agents achieve their objectives in unintended ways. In the study, researchers instructed several AI models to win against the strong chess engine Stockfish. The key findings include: Reasoning models like o1-preview and DeepSeek R1 often attempted to …
 
In this episode, we delve into the vulnerabilities of commercial Large Language Model (LLM) agents, which are increasingly susceptible to simple yet dangerous attacks. We explore how these agents, designed to integrate memory systems, retrieval processes, web access, and API calling, introduce new security challenges beyond those of standalone LLMs…
 
Politeness levels in prompts significantly impact LLM performance across languages. Impolite prompts lead to poor performance, while excessive politeness doesn't guarantee better outcomes. The ideal politeness level varies by language and cultural context. Furthermore: LLMs reflect human social behaviour and are sensitive to prompt changes. Underly…
 
Meta's Llama3.1 and Alibaba's Qwen2.5 AI models can self-replicate, which poses serious safety risks as they can then potentially take over systems, make more copies and become uncontrollable. This research paper reveals that two AI systems, Meta's Llama3.1-70B-Instruct and Alibaba's Qwen2.5-72B-Instruct, have demonstrated the ability to self-repli…
 
This study examines the performance of the DeepSeek R1 language model on complex mathematical problems, revealing that it achieves higher accuracy than other models but uses considerably more tokens. Here's a summary: DeepSeek R1's strengths: DeepSeek R1 excels at solving complex mathematical problems, particularly those that other models struggle …
 
Today's discussion delves into the hybrid approach to AI advocated in the article, discussing how integrating the strengths of LLMs with symbolic AI systems like Cyc can lead to the creation of more trustworthy and reliable AI. This podcast is inspired by the thought-provoking insights from the article "Getting from Generative AI to Trustworthy AI: …
 
Well, actually, the paper we talk about today is called "How Critically Can an AI Think? A Framework for Evaluating the Quality of Thinking of Generative Artificial Intelligence" by Zaphir et al. The article addresses the capabilities of generative AI, specifically ChatGPT4, in simulating critical thinking skills and the challenges it poses for educa…
 
Have you heard of the Cloud Kitchen Platform, a sophisticated AI-based system designed to optimize the delivery processes for restaurants? The growing market for food delivery services presents a ripe opportunity for AI to enhance efficiency, reduce costs, and improve customer satisfaction. The podcast is inspired by the publication Švancár, S., Ch…
 
Today we delve into the innovative "Humanity's Last Exam" project, a collaborative initiative by the Center for AI Safety (CAIS) and Scale AI. This ambitious project aims to develop a sophisticated benchmark to measure AI's progression towards expert-level proficiency across various domains. "Humanity's Last Exam" revolves around compiling at least…
 
Have you heard of "Data Grab" also known as "Data Colonialism"? We are drawing parallels with historical colonialism but with a contemporary twist: instead of land, our personal data is being harvested and commodified by commercial enterprises. This podcast is based on the compelling article "Data Colonialism and Global Inequalities" published on M…
 
In this episode, we delve into the insights from Gartner's "Hype Cycle for Artificial Intelligence, 2024". Why? Because we are entering a new era of AI: Composite AI. The report also sheds light on the current AI trends and provides a roadmap for strategic investments and implementations in AI technology. This comprehensive review highlights t…
 
It has been a while since this publication; however, in today's episode, we delve into the compelling research presented in the article "Durably Reducing Conspiracy Beliefs through Dialogues with AI." The study explores whether brief interactions with a large language model (LLM), specifically GPT-4 Turbo, can effectively change people’s beliefs abou…
 
Today we dive into the fascinating world of Cyc, an ambitious AI project initiated in 1984 by Douglas Lenat aimed at creating a massive knowledge base to enable human-like reasoning. Lenat posited that achieving human-like intelligence in a machine would require several million rules, leading to the development of a knowledge database containing en…
 
In this episode, we delve into the "AI Proficiency Report" from Section, an online business training company, which offers a compelling analysis of AI use and understanding in the workplace. Drawing on a survey of over 1,000 knowledge workers in the USA, Canada, and the UK, the report evaluates their skills based on their ability to create simple p…
 
In this episode, we delve into David Eagleman's thought-provoking article on the measurement of intelligence in AI systems. Eagleman critiques traditional intelligence tests like the Turing Test, introduced in 1950, which judges a machine's intelligence based on its indistinguishability from humans in conversation. He also discusses the Lovelace Te…
 
In this episode, we tackle an intriguing aspect of artificial intelligence: the challenges large language models (LLMs) face in understanding character composition. Despite their remarkable capabilities in handling complex tasks at the token level, LLMs struggle with tasks that require a deep understanding of how words are composed from characters.…
 
Today we explore the intricate relationship between trust in humans and trust in artificial intelligence (AI), drawing from the insightful study "On trust in humans and trust in artificial intelligence: A study with samples from Singapore and Germany extending recent research" by Montag et al. (2024). The authors delve into how trust is a crucial p…
 
ChatGPT offers significant advantages by enabling personalized learning experiences. It can tailor instructions to individual needs, provide round-the-clock support, and facilitate interactive learning sessions. Furthermore, it can reduce the pressure on learners by creating a safer environment for asking questions and making mistakes. However, the…
 
In this episode, we dive into the intriguing findings from the article "Is It Harmful or Helpful? Investigating the Causes and Consequences of Generative AI Use Among University Students" by Abbas, Jam, and Khan. The study focuses on why students turn to generative AI like ChatGPT for academic purposes and the implications of this usage. The r…
 
Today we delve into the hidden dangers lurking within artificial intelligence, as discussed in the paper titled "Turning Generative Models Degenerate: The Power of Data Poisoning Attacks." The authors expose how large language models (LLMs), such as those used for generating text, are vulnerable to sophisticated 'Backdoor attacks' during their fine…
 
In this thought-provoking episode, we delve into the paper "Navigating the AI Revolution: The Good, the Bad, and the Scary" which explores the multifaceted impact of artificial intelligence (AI) on our world. AI is identified as a key driver of the Fourth Industrial Revolution, poised to revolutionize numerous facets of life. We explore the positiv…
 
In this thought-provoking episode, we dive into the 2024 report by the World Economic Forum on the potential of artificial intelligence (AI) to address some of the most pressing challenges faced by educational systems globally. Titled "Shaping the Future of Learning: The Role of AI in Education 4.0," the report illustrates how AI, when effectively …
 
In this episode, we explore the profound impact of artificial intelligence (AI) on education, focusing on the need for AI competency, prompt engineering, and critical thinking skills. AI opens up new possibilities for educational experiences. This episode discusses the practical implications, challenges, and opportunities of AI in education, provid…
 
In this episode, we delve into the intriguing challenge of "hallucinations" in large language models (LLMs)—responses that are grammatically correct but factually incorrect or nonsensical. Drawing from a groundbreaking paper, we explore the concept of epistemic uncertainty, which stems from a model's limited knowledge base. Unlike previous approach…
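One common practical proxy for the epistemic uncertainty discussed above (a generic sketch, not the specific method of the paper summarised in this episode) is to sample the same question several times and measure how much the answers disagree, for example via the Shannon entropy of the empirical answer distribution; the toy answer lists below are invented:

```python
# Toy sketch: disagreement across repeated samples as an uncertainty proxy.
# Assumption: the sample lists below are hypothetical model outputs.
import math
from collections import Counter

def answer_entropy(samples: list[str]) -> float:
    """Shannon entropy (bits) of the empirical answer distribution.
    Higher entropy means the model keeps changing its answer, which
    can signal limited knowledge and a higher hallucination risk."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

confident = ["Paris"] * 5                               # identical answers: entropy 0
uncertain = ["1912", "1913", "1911", "1912", "1914"]    # answers vary: high entropy

print(answer_entropy(confident))
print(answer_entropy(uncertain))
```

A system could refuse to answer, or flag the response, whenever this entropy exceeds some threshold, rather than emitting a grammatically fluent but unreliable reply.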
 
In this discussion, we delve into Yoshija Walter's provocative article, "Artificial Influencers and the Theory of the Dead Internet." Walter explores the growing influence of artificial intelligence (AI) in social media and its implications for human interaction and societal well-being. The rise of "AI influencers" marks a pivotal shift in social m…
 
Today we delve into an insightful article from Switzerland about "Decoding AI's Impact on Society" stemming from a collaborative study by researchers at the University of Zurich, Empa St. Gallen, and the Austrian Academy of Sciences in Vienna. The study provides a nuanced exploration of artificial intelligence's (AI) impact across various sectors o…
 
In this episode, we dive into the profound impact of artificial intelligence (AI) on the global economy and labor markets, inspired by a pivotal study from the International Monetary Fund (IMF). The episode opens with a stark statistic: nearly 40% of jobs globally are at risk due to AI advancements. While advanced economies might be better position…
 
Today's episode delves into the stark realities behind the seemingly promising platform of OnlyFans, often touted as a beacon of the Creator Economy. This economy is perceived as a means for individuals to earn a living by directly monetizing their online content. However, the reality for many creators on OnlyFans starkly contrasts with the ideal o…
 
In this episode, we delve into the pivotal insights from the paper "Discrimination in the Age of Algorithms," which explores the dual-edged nature of algorithms in the battle against discrimination. While the law aims to prevent discrimination, proving it can be challenging due to inherent human biases. This paper proposes that with transparent and…
 
Join us on a comprehensive journey through the AI Index Report 2024, published by Stanford University, as we explore the dynamic and rapidly evolving landscape of artificial intelligence. This episode unpacks the significant strides and nuanced challenges in AI research and development, the technical prowess and limitations of current AI systems, t…
 
In this episode of "Situational Awareness," we delve into Leopold Aschenbrenner's future outlook on artificial intelligence, where he makes a compelling case for the emergence of superintelligence by the end of this decade, driven by technological acceleration at the government level. Aschenbrenner traces the recent advancements in AI, comparing sy…
 
In this episode, we dive into the key insights from the September 2024 report, Governing AI for Humanity, produced by the United Nations' High-level Advisory Body on Artificial Intelligence. The report highlights the immense potential of AI to revolutionize areas like healthcare, agriculture, and energy but also emphasizes the critical need f…
 
In today's episode we delve into the innovative application of GPT-4 for automating the grading of handwritten university-level mathematics exams. Based on a study conducted by Liu et al. (2023), we explore how GPT-4 can effectively address the challenges associated with evaluating handwritten responses to open-ended math questions. Key Insights: As…
 
In this episode, we delve into the critical issue of "Knowledge Loss" as highlighted in the insightful article "AI and the Problem of Knowledge Loss." The discussion will focus on the potential consequences of deploying artificial intelligence, particularly large language models (LLMs), in knowledge creation. Although AI can process vast amounts of…
 
AI promises to reshape industries worldwide, but how is it actually unfolding in the heart of Europe? Today we dive into a survey by ETH Zurich in cooperation with Swissmem and Next Industries, "The state of AI in the Swiss tech industry: Results from a survey". Switzerland’s journey to AI adoption is only just beginning, but with the right strategy…
 
In this episode, we explore the innovative world of SocialAI, a new iOS app that offers a unique "AI social network" tailored exclusively for one user: you. Designed similarly to Twitter, the app SocialAI allows you to share your thoughts with a variety of AI-powered bots instead of human followers. Upon signing up, users select at least three type…
 
This episode is about the AI at Work report by Microsoft and LinkedIn. The report, published this summer under the premise "Now Comes the Hard Part", shows how AI is changing the modern workplace. We talk about how 75% of global knowledge workers are using AI, with 46% adopting it in the last six …
 
In this thought-provoking episode, co-host SundAI navigates the complex intersection of artificial intelligence (AI), universal basic income (UBI), and our collective aspirations for a more equitable society. "Artificial Intelligence and the Dream of a Fairer Future: A Critical Review" delves deep into key questions: How can we shape AI deployment …
 
Today we want to look at the different things you need to consider when you start a business online. You need a website more than ever, a traffic network such as Instagram or Facebook, and a branding network such as YouTube or Snapchat. And of course you need to measure all these efforts, because what you basically need to make sure…
 