<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The Daily Wild: 🎙️ Podcast]]></title><description><![CDATA[Artificial Intelligence will be critical to organizations' successful defense and security. However, in the coming years, innovation in AI-powered cyber resilience will need to reverse the current rising tide of threats by shifting our conception of systems adapted to the new paradigm: immunity.]]></description><link>https://www.dailywild.co/s/podcast</link><image><url>https://substackcdn.com/image/fetch/$s_!2JOz!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe95fa039-31d8-440b-a966-9fe5b5bbb0ee_1105x1105.png</url><title>The Daily Wild: 🎙️ Podcast</title><link>https://www.dailywild.co/s/podcast</link></image><generator>Substack</generator><lastBuildDate>Sat, 11 Apr 2026 18:31:00 GMT</lastBuildDate><atom:link href="https://www.dailywild.co/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[The Daily Wild by Wild Intelligence]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[yaelrznx@gmail.com]]></webMaster><itunes:owner><itunes:email><![CDATA[yaelrznx@gmail.com]]></itunes:email><itunes:name><![CDATA[Yael Rozencwajg]]></itunes:name></itunes:owner><itunes:author><![CDATA[Yael Rozencwajg]]></itunes:author><googleplay:owner><![CDATA[yaelrznx@gmail.com]]></googleplay:owner><googleplay:email><![CDATA[yaelrznx@gmail.com]]></googleplay:email><googleplay:author><![CDATA[Yael Rozencwajg]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Zohar Bronfman: AI & the fabric of society]]></title><description><![CDATA[From aided 
decision-making to the brink of autonomy]]></description><link>https://www.dailywild.co/p/ai-and-the-fabric-of-society-podcast-wild-intelligence</link><guid isPermaLink="false">https://www.dailywild.co/p/ai-and-the-fabric-of-society-podcast-wild-intelligence</guid><dc:creator><![CDATA[Yael Rozencwajg]]></dc:creator><pubDate>Fri, 18 Apr 2025 00:00:40 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/161463408/f0dca73feb6ce0e4684cc3b731abc228.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h1>AI &amp; the fabric of society</h1><p>In today&#8217;s episode, we highlight the multifaceted impacts of artificial intelligence. </p><p>While AI will significantly aid various professions, achieving full machine autonomy will be a considerable challenge. </p><p>Zohar Bronfman, co-founder and CEO of Pecan AI, believes that human oversight will remain crucial, particularly in roles that require creativity and predictive decision-making. </p><p>The conversation explores AI's rapid advancements, its potential to transform the workforce and salary structures, and the ethical considerations that arise from its increasing capabilities. </p><p>Bronfman emphasizes the importance of individuals becoming familiar with AI and nations investing strategically in its development. </p><p>He also expresses concern about the lack of global regulatory structures to manage the profound societal implications of this technology, differentiating current narrow AI from the yet-to-be-realized artificial general intelligence. </p><p></p><div><hr></div><h2><strong>Episode 3, season 3</strong></h2><h4><strong>The timeline:</strong></h4><p>(00:00) Intro </p><p>(03:11) Our guest: Zohar Bronfman </p><p>(04:18) The place of human creativity in the AI era </p><p>(09:25) The impacts of automation on society </p><p>(15:47) How do we thrive until we get fully robotized? 
</p><p>(37:54) AGI, where are we?</p><p></p><div><hr></div><h1><strong>Our guest</strong></h1><h4>Zohar Bronfman is the co-founder and CEO of Pecan AI</h4><p>With a double PhD in computational neuroscience and philosophy, Bronfman is also a recognized top AI leader and, last but not least, a Forbes contributor.</p><h4><strong><a href="https://www.linkedin.com/in/zohar-bronfman/">Zohar Bronfman's LinkedIn profile</a></strong></h4><p></p><h4>More about Pecan AI:</h4><p>Founded in 2018, Pecan is a predictive analytics platform that leverages its pioneering Predictive GenAI to remove barriers to AI adoption, making predictive modeling accessible to all data and business teams. </p><p>Guided by generative AI, companies can obtain precise predictions across various business domains without the need for specialized personnel. </p><p>Predictive GenAI enables rapid model definition and training, while automated processes accelerate AI implementation. </p><p>With Pecan's fusion of predictive and generative AI, realizing the business impact of AI is now far faster and easier. </p><p><strong>Explore more at <a href="http://www.pecan.ai/">www.pecan.ai</a>.</strong></p>]]></content:encoded></item><item><title><![CDATA[Guan Seng Khoo, PhD. 
Academic advisor, board member, adjunct lecturer: "Are we truly understanding the layers of AI beyond the hype?"]]></title><description><![CDATA[New season, new episodes with AI experts and decision leaders: AI views to help you close the gap between the overwhelming flood of information and the decisions you might need to make.]]></description><link>https://www.dailywild.co/p/are-we-truly-understanding-the-layers-of-ai-beyond-the-hype-podcast-wild-intelligence</link><guid isPermaLink="false">https://www.dailywild.co/p/are-we-truly-understanding-the-layers-of-ai-beyond-the-hype-podcast-wild-intelligence</guid><dc:creator><![CDATA[Yael Rozencwajg]]></dc:creator><pubDate>Sat, 05 Apr 2025 07:36:11 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/160634029/c7614a15b411dbb25039a498cee5a0bf.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h1>New season, new episodes, new guests.</h1><p>The AI revolution demands a new level of strategic foresight. </p><p>This season, we're exploring decision intelligence, a crucial framework for navigating this complex landscape. </p><p>We'll focus on how this interdisciplinary field empowers you to make data-driven decisions while upholding ethical standards and responsible practices. </p><p>We&#8217;ll cover the essential skills for leading AI projects, designing effective metrics, and implementing robust safety nets to mitigate threats. 
</p><p>Join us as we explore how to harness the power of AI while ensuring it serves your organization and society responsibly.</p><div><hr></div><h1>Are we truly understanding the layers of AI beyond the hype?</h1><h2>Episode 2, season 3</h2><h4><strong>In today&#8217;s episode, we discuss: </strong></h4><p>With Guan Seng Khoo, PhD, an academic advisor, board member, and adjunct lecturer, we explore the complexities and challenges surrounding artificial intelligence, such as the&nbsp;<strong>opaqueness of AI models</strong>, particularly generative AI, concerning their architecture, data origins, and energy consumption. </p><p>We examine the <strong>ethical considerations of data input</strong>, the potential for biased or average outputs from large datasets, and the difficulty in validating AI learning models. </p><p>We also discuss the&nbsp;<strong>strategic implications for companies adopting AI</strong>, question whether goals are well-defined, and consider the balance between AI as an assistant and an autonomous agent.&nbsp;</p><p>Ultimately, we raise concerns about <strong>over-reliance on AI</strong>, the potential for misuse, and the need for greater awareness, education, and strategic thinking.</p><h4><br>The timeline:</h4><p>(00:00) Intro </p><p>(01:21) Our guest: Guan Seng Khoo, PhD</p><p>(03:27) The hidden layers</p><p>(07:00) The need for supervision and the call for small data models</p><p>(09:28) Reinforcement learning</p><p>(11:45) Why do we use AI?</p><p>(17:39) A scenario in portfolio management  </p><p>(21:04) Do corporations mislead their objectives of AI because of hype?</p><p>(26:16) How the world looks for answers from AI</p><p>(32:11) AI's responsibility: Can we step back? </p><p>(35:42) Corporations to decide what complexity they want to address</p><p></p><div><hr></div><h1>Our guest</h1><p>Guan Seng Khoo is an experienced senior management professional and ex-academic. 
He is a data scientist with a PhD in computational physics (molecular simulations of semiconductors) and computer-aided drug design, with post-doctoral R&amp;D and MSc supervision experience in AI (neural networks, genetic algorithms, fuzzy logic, etc.) applied to finance and information security.</p><h4><strong><a href="https://www.linkedin.com/in/guan-seng-khoo-phd-632204/">Guan Seng&#8217;s LinkedIn profile</a></strong></h4><h4><strong><a href="https://www.linkedin.com/in/yaelrozencwajg/">Yael's LinkedIn profile</a></strong></h4><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/p/are-we-truly-understanding-the-layers-of-ai-beyond-the-hype-podcast-wild-intelligence?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.dailywild.co/p/are-we-truly-understanding-the-layers-of-ai-beyond-the-hype-podcast-wild-intelligence?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.dailywild.co/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/p/are-we-truly-understanding-the-layers-of-ai-beyond-the-hype-podcast-wild-intelligence/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.dailywild.co/p/are-we-truly-understanding-the-layers-of-ai-beyond-the-hype-podcast-wild-intelligence/comments"><span>Leave a 
comment</span></a></p>]]></content:encoded></item><item><title><![CDATA[Pamela Gupta, CEO, Co-President at Trusted AI™: "Why we need to pay more attention to AI"]]></title><description><![CDATA[New season, new episodes with AI experts and decision leaders: AI views to help you close the gap between the overwhelming flood of information and the decisions you might need to make.]]></description><link>https://www.dailywild.co/p/why-we-need-to-pay-more-attention-to-ai-podcast-wild-intelligence</link><guid isPermaLink="false">https://www.dailywild.co/p/why-we-need-to-pay-more-attention-to-ai-podcast-wild-intelligence</guid><dc:creator><![CDATA[Yael Rozencwajg]]></dc:creator><pubDate>Sun, 30 Mar 2025 00:01:14 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/160121853/62a7c2746f5e5e56c8d5814b24c1f280.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h1>New season, new episodes, new guests.</h1><p>The AI revolution demands a new level of strategic foresight. </p><p>This season, we're exploring decision intelligence, a crucial framework for navigating this complex landscape. </p><p>We'll focus on how this interdisciplinary field empowers you to make data-driven decisions while upholding ethical standards and responsible practices. </p><p>We&#8217;ll cover the essential skills for leading AI projects, designing effective metrics, and implementing robust safety nets to mitigate threats. </p><p>Join us as we explore how to harness the power of AI while ensuring it serves your organization and society responsibly.</p><div><hr></div><h1>Why we need to pay more attention to AI</h1><h2>Episode 1, season 3</h2><h4><strong>In today&#8217;s episode, we discuss: </strong></h4><p>Pamela Gupta, a top AI governance consultant, discusses the critical need for increased attention to AI risks and responsible adoption. 
</p><p>She emphasizes that neither technology nor awareness alone is sufficient; a proactive approach involving an&nbsp;<strong>AI Readiness impact assessment</strong>&nbsp;is essential at all company levels.&nbsp;</p><p>Gupta highlights the escalating dangers of AI applications like deepfakes and stresses the importance of certification initiatives, public awareness, and robust processes with human oversight for mitigation. </p><p>She advocates for establishing a cross-stakeholder Center of Excellence and developing AI-ready governance to ensure AI's sustainable and trustworthy implementation. </p><p>She underscores that responsible AI practices are a fundamental necessity, not an afterthought, for achieving intended outcomes and avoiding potential pitfalls.</p><p></p><h4>The timeline:</h4><p>(00:00) Intro </p><p>(01:21) Our guest: Pamela Gupta </p><p>(02:38) The growing AI threat: deep fakes </p><p>(09:03) Why do we need new implementation processes </p><p>(11:28) Defining the needs and requirements </p><p>(19:41) Recommendations for better implementations </p><p>(28:37) How far are we with AI?</p><div><hr></div><h1>Our guest</h1><p><strong>Pamela Gupta is the CEO and Co-President at Trusted AI&#8482; and a top AI governance consultant, named a top 20 global risk management and cybersecurity consultant four years in a row. 
</strong></p><h4><strong><a href="https://www.linkedin.com/in/buildingtrustedaiholistically">Pamela's LinkedIn profile</a></strong></h4><h4><strong><a href="https://www.linkedin.com/in/yaelrozencwajg/">Yael's LinkedIn profile</a></strong></h4><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/p/why-we-need-to-pay-more-attention-to-ai-podcast-wild-intelligence?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.dailywild.co/p/why-we-need-to-pay-more-attention-to-ai-podcast-wild-intelligence?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.dailywild.co/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/p/why-we-need-to-pay-more-attention-to-ai-podcast-wild-intelligence/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.dailywild.co/p/why-we-need-to-pay-more-attention-to-ai-podcast-wild-intelligence/comments"><span>Leave a comment</span></a></p>]]></content:encoded></item><item><title><![CDATA[Wildfires with Google, gaming with Roblox, hardware with Nvidia, Adobe revealed AI-driven enterprise, and Palantir]]></title><description><![CDATA[This time, it&#8217;s different | Wild Intelligence&#8217;s new podcast series covers AI trends for 
2025]]></description><link>https://www.dailywild.co/p/google-roblox-nvidia-adobe-palantir-podcast-wild-intelligence</link><guid isPermaLink="false">https://www.dailywild.co/p/google-roblox-nvidia-adobe-palantir-podcast-wild-intelligence</guid><dc:creator><![CDATA[Yael Rozencwajg]]></dc:creator><pubDate>Fri, 21 Mar 2025 01:00:00 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/159601606/ff0e3a6d5377c86bfc6be55a06e534b6.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><strong>Stay ahead of AI innovation: your weekly Wild Intelligence brief</strong></p><p><strong>Get actionable insights on AI's evolving landscape through carefully curated analysis of breakthrough research, market intelligence, and industry developments.</strong></p><p><strong>From emerging AI applications to quantum computing advancements, we decode the signals that matter for your strategic decisions.</strong></p><p><strong>Our comprehensive coverage spans software breakthroughs and hardware innovations, giving you the perspective needed to drive AI initiatives and identify competitive advantages.</strong></p><div><hr></div><p>This episode covers recent advancements and applications of artificial intelligence across various sectors. </p><p>Google and Muon Space introduced FireSat, an AI-powered satellite for rapid wildfire detection, while Roblox launched Cube 3D, an open-source AI system that generates 3D game assets from text. </p><p>Nvidia's GTC 2025 showcased significant progress in AI infrastructure, including powerful new GPUs and robotics platforms, alongside a partnership with GM for self-driving cars. </p><p>Adobe's Summit 2025 revealed AI-driven tools for marketing and customer engagement, emphasizing personalization and automation. 
</p><p>Finally, Palantir's focus on commercial AI markets, despite some financial concerns, underscores the expanding reach of AI technology.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.dailywild.co/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/p/google-roblox-nvidia-adobe-palantir-podcast-wild-intelligence?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.dailywild.co/p/google-roblox-nvidia-adobe-palantir-podcast-wild-intelligence?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h3>The questions to ask:</h3><ul><li><p><strong>What overarching trends in AI development and application are highlighted across these diverse sources?</strong></p></li><li><p><strong>How are major technology companies strategically positioning themselves within the rapidly evolving AI landscape and its various sectors?</strong></p></li><li><p><strong>How is Nvidia preparing for increased AI computational demands?</strong></p></li></ul><p></p><p><em>This conversation was auto-generated with AI. It is an experiment with you in mind.<br>The purpose of this first podcast series is to consider how we can reverse the current rising tide of threats by shifting our conception of systems adapted to the new paradigm.<br>Looking forward to your feedback. 
I appreciate your support and engagement.<br>Yael</em></p><h1><strong>Building safe intelligence systems, a Wild Intelligence exclusive series</strong></h1><h4><strong>Deep dive and practice:</strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/foundations-of-ai-safety">Module 1: Foundations of AI safety</a></strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/bias-fairness-and-explainability">Module 2: Bias, fairness, and explainability</a></strong></h4><h4><strong><a href="https://news.wildintelligence.xyz/p/security-and-robustness">Module 3: Security and robustness</a></strong></h4><h4><strong><a href="https://news.wildintelligence.xyz/p/human-ai-collaboration-and-governance">Module 4: Human-AI collaboration and governance</a></strong></h4><h4><strong><a href="https://news.wildintelligence.xyz/p/emerging-challenges-and-future-trends">Module 5: Emerging challenges and future trends</a></strong></h4><h4><a href="https://wildintelligence.substack.com/p/our-ai-safety-mission-dystopia">Our AI safety mission</a></h4><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.dailywild.co/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/p/google-roblox-nvidia-adobe-palantir-podcast-wild-intelligence?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" 
href="https://www.dailywild.co/p/google-roblox-nvidia-adobe-palantir-podcast-wild-intelligence?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h3></h3>]]></content:encoded></item><item><title><![CDATA[AI agents emerge; Google, OpenAI, Apple, Microsoft advance AI]]></title><description><![CDATA[This time, it&#8217;s different | Wild Intelligence&#8217;s new podcast series covers AI trends for 2025]]></description><link>https://www.dailywild.co/p/ai-agents-emerge-google-openai-apple-microsoft-podcast-wild-intelligence</link><guid isPermaLink="false">https://www.dailywild.co/p/ai-agents-emerge-google-openai-apple-microsoft-podcast-wild-intelligence</guid><dc:creator><![CDATA[Yael Rozencwajg]]></dc:creator><pubDate>Fri, 14 Mar 2025 01:00:39 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/158901819/3ab6b6f360065c3ece708f4fbcff9bbd.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><strong>Stay ahead of AI innovation: your weekly Wild Intelligence brief</strong></p><p><strong>Get actionable insights on AI's evolving landscape through carefully curated analysis of breakthrough research, market intelligence, and industry developments.</strong></p><p><strong>From emerging AI applications to quantum computing advancements, we decode the signals that matter for your strategic decisions.</strong></p><p><strong>Our comprehensive coverage spans software breakthroughs and hardware innovations, giving you the perspective needed to drive AI initiatives and identify competitive advantages.</strong></p><div><hr></div><p>Recent developments highlight significant advancements and competition in the AI landscape. </p><p>Monica, a Chinese startup, has introduced Manus, a potentially groundbreaking general AI agent capable of autonomous multi-task execution. </p><p>Simultaneously, DeepSeek, another Chinese AI firm, is demonstrating impressive efficiency and profitability, challenging the dominance of U.S. 
companies like OpenAI. </p><p>Meanwhile, major tech players like Google are expanding their AI-powered search capabilities with Gemini 2.0 and the experimental AI Mode, while OpenAI plans to integrate its video generation tool Sora into ChatGPT. </p><p>In contrast, Apple faces increasing pressure to catch up in the AI race, as its current offerings lag behind competitors. Lastly, Microsoft is leveraging AI to transform healthcare with Dragon Copilot, an AI assistant designed to streamline clinical workflows.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.dailywild.co/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/p/ai-agents-emerge-google-openai-apple-microsoft-podcast-wild-intelligence?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.dailywild.co/p/ai-agents-emerge-google-openai-apple-microsoft-podcast-wild-intelligence?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h3>The questions to ask:</h3><ul><li><p><strong>How might the emergence of advanced AI agents like Manus reshape future technological competition and innovation globally?</strong></p></li><li><p><strong>What factors may influence DeepSeek's sustained momentum?</strong></p></li><li><p><strong>How does Dragon Copilot aim to aid healthcare?</strong></p></li></ul><p></p><p><em>This conversation was auto-generated with AI. 
It is an experiment with you in mind.<br>The purpose of this first podcast series is to consider how we can reverse the current rising tide of threats by shifting our conception of systems adapted to the new paradigm.<br>Looking forward to your feedback. I appreciate your support and engagement.<br>Yael</em></p><h1><strong>Building safe intelligence systems, a Wild Intelligence exclusive series</strong></h1><h4><strong>Deep dive and practice:</strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/foundations-of-ai-safety">Module 1: Foundations of AI safety</a></strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/bias-fairness-and-explainability">Module 2: Bias, fairness, and explainability</a></strong></h4><h4><strong><a href="https://news.wildintelligence.xyz/p/security-and-robustness">Module 3: Security and robustness</a></strong></h4><h4><strong><a href="https://news.wildintelligence.xyz/p/human-ai-collaboration-and-governance">Module 4: Human-AI collaboration and governance</a></strong></h4><h4><strong><a href="https://news.wildintelligence.xyz/p/emerging-challenges-and-future-trends">Module 5: Emerging challenges and future trends</a></strong></h4><h4><a href="https://wildintelligence.substack.com/p/our-ai-safety-mission-dystopia">Our AI safety mission</a></h4><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.dailywild.co/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" 
data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/p/ai-agents-emerge-google-openai-apple-microsoft-podcast-wild-intelligence?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.dailywild.co/p/ai-agents-emerge-google-openai-apple-microsoft-podcast-wild-intelligence?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h3></h3>]]></content:encoded></item><item><title><![CDATA[Generative AI: Accuracy, expectations, and the unknown]]></title><description><![CDATA[How to truly evaluate the progress of these models beyond benchmarks | Wild Intelligence&#8217;s new podcast series covers AI trends for 2025]]></description><link>https://www.dailywild.co/p/generative-ai-accuracy-expectations-and-the-unknown-podcast-wild-intelligence</link><guid isPermaLink="false">https://www.dailywild.co/p/generative-ai-accuracy-expectations-and-the-unknown-podcast-wild-intelligence</guid><dc:creator><![CDATA[Yael Rozencwajg]]></dc:creator><pubDate>Fri, 07 Mar 2025 01:00:00 GMT</pubDate><enclosure url="https://substack-video.s3.amazonaws.com/video_upload/post/158644122/1730fc1b-470d-4819-bf1d-3ab354f1b3f7/transcoded-1741435092.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Stay ahead of AI innovation: your weekly Wild Intelligence brief</p><p>Get actionable insights on AI's evolving landscape through carefully curated analysis of breakthrough research, market intelligence, and industry developments.</p><p>From emerging AI applications to quantum computing advancements, we decode the signals that matter for your strategic decisions.</p><p>Our &#8230;</p>
      <p>
          <a href="https://www.dailywild.co/p/generative-ai-accuracy-expectations-and-the-unknown-podcast-wild-intelligence">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Frontier science: quantum chips, AI biology, and AI co-scientists]]></title><description><![CDATA[The rise of AI: Benchmarks, real-world applications, and the policy challenges ahead | Wild Intelligence&#8217;s new podcast series covers AI trends for 2025]]></description><link>https://www.dailywild.co/p/frontier-science-quantum-chips-ai-podcast-wild-intelligence</link><guid isPermaLink="false">https://www.dailywild.co/p/frontier-science-quantum-chips-ai-podcast-wild-intelligence</guid><dc:creator><![CDATA[Yael Rozencwajg]]></dc:creator><pubDate>Fri, 28 Feb 2025 01:00:00 GMT</pubDate><enclosure url="https://substack-video.s3.amazonaws.com/video_upload/post/158166357/a132926c-e931-41a0-ad84-6b6864f84076/transcoded-1740831825.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Stay ahead of AI innovation: your weekly Wild Intelligence brief</p><p>Get actionable insights on AI's evolving landscape through carefully curated analysis of breakthrough research, market intelligence, and industry developments.</p><p>From emerging AI applications to quantum computing advancements, we decode the signals that matter for your strategic decisions.</p><p>Our &#8230;</p>
      <p>
          <a href="https://www.dailywild.co/p/frontier-science-quantum-chips-ai-podcast-wild-intelligence">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[If a human can do it, an AI may too: Navigating the uncertain future of AI]]></title><description><![CDATA[The rise of AI: Benchmarks, real-world applications, and the policy challenges ahead | Wild Intelligence&#8217;s new podcast series covers AI trends for 2025]]></description><link>https://www.dailywild.co/p/if-a-human-can-do-it-an-ai-may-too-podcast-wild-intelligence</link><guid isPermaLink="false">https://www.dailywild.co/p/if-a-human-can-do-it-an-ai-may-too-podcast-wild-intelligence</guid><dc:creator><![CDATA[Yael Rozencwajg]]></dc:creator><pubDate>Fri, 21 Feb 2025 01:00:00 GMT</pubDate><enclosure url="https://substack-video.s3.amazonaws.com/video_upload/post/157621769/80b330e7-762b-462b-9a94-935219e67f00/transcoded-1740149647.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Stay ahead of AI innovation: your weekly Wild Intelligence brief</p><p>Get actionable insights on AI's evolving landscape through carefully curated analysis of breakthrough research, market intelligence, and industry developments.</p><p>From emerging AI applications to quantum computing advancements, we decode the signals that matter for your strategic decisions.</p><p>Our &#8230;</p>
      <p>
          <a href="https://www.dailywild.co/p/if-a-human-can-do-it-an-ai-may-too-podcast-wild-intelligence">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Global collaboration on AI: ethics, safety, and European strategy]]></title><description><![CDATA[| Wild Intelligence&#8217;s new podcast series covers AI trends for 2025]]></description><link>https://www.dailywild.co/p/global-collaboration-on-ai-ethics-podcast-wild-intelligence</link><guid isPermaLink="false">https://www.dailywild.co/p/global-collaboration-on-ai-ethics-podcast-wild-intelligence</guid><dc:creator><![CDATA[Yael Rozencwajg]]></dc:creator><pubDate>Fri, 14 Feb 2025 01:00:00 GMT</pubDate><enclosure url="https://substack-video.s3.amazonaws.com/video_upload/post/157194575/c2cc9d39-4a57-4ae2-b5a1-6a3705408760/transcoded-1739624018.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Stay ahead of AI innovation: your weekly Wild Intelligence brief</p><p>Get actionable insights on AI's evolving landscape through carefully curated analysis of breakthrough research, market intelligence, and industry developments.</p><p>From emerging AI applications to quantum computing advancements, we decode the signals that matter for your strategic decisions.</p><p>Our &#8230;</p>
      <p>
          <a href="https://www.dailywild.co/p/global-collaboration-on-ai-ethics-podcast-wild-intelligence">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Riding the AI wave: Navigating uncertainty in a transformative era]]></title><description><![CDATA[Decoding the current state and future of AI | Wild Intelligence&#8217;s new podcast series covers AI trends for 2025]]></description><link>https://www.dailywild.co/p/riding-the-ai-wave-navigating-uncertainty-podcast-wild-intelligence</link><guid isPermaLink="false">https://www.dailywild.co/p/riding-the-ai-wave-navigating-uncertainty-podcast-wild-intelligence</guid><dc:creator><![CDATA[Yael Rozencwajg]]></dc:creator><pubDate>Fri, 07 Feb 2025 01:00:00 GMT</pubDate><enclosure url="https://substack-video.s3.amazonaws.com/video_upload/post/156727691/01768f32-4562-468c-96eb-158bf8a3b74e/transcoded-1739013819.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Stay ahead of AI innovation: your weekly Wild Intelligence brief</p><p>Get actionable insights on AI's evolving landscape through carefully curated analysis of breakthrough research, market intelligence, and industry developments.</p><p>From emerging AI applications to quantum computing advancements, we decode the signals that matter for your strategic decisions.</p><p>Our &#8230;</p>
      <p>
          <a href="https://www.dailywild.co/p/riding-the-ai-wave-navigating-uncertainty-podcast-wild-intelligence">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[DeepSeek's R1 LLMs: a disruptive force in AI ]]></title><description><![CDATA[Listen now | DeepSeek's release of its R1 family of large language models (LLMs) is shaking up the AI industry.]]></description><link>https://www.dailywild.co/p/deepseeks-r1-llms-a-disruptive-force-podcast-wild-intelligence</link><guid isPermaLink="false">https://www.dailywild.co/p/deepseeks-r1-llms-a-disruptive-force-podcast-wild-intelligence</guid><dc:creator><![CDATA[Yael Rozencwajg]]></dc:creator><pubDate>Fri, 31 Jan 2025 01:00:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/78f172b9-7c1d-433b-84a6-9d9da8e577c3_1920x1105.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>DeepSeek's release of its R1 family of large language models (LLMs) is shaking up the AI industry. </p><p><strong>The models offer comparable performance to those from major companies like Google and OpenAI but at a fraction of the cost</strong>, making them accessible to a broader range of users and developers. </p><p><strong>An open-source license and a user-friendly iPhone app further enh&#8230;</strong></p>
      <p>
          <a href="https://www.dailywild.co/p/deepseeks-r1-llms-a-disruptive-force-podcast-wild-intelligence">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[AI trends for 2025]]></title><description><![CDATA[Wild Intelligence&#8217;s new podcast series covers AI trends for 2025]]></description><link>https://www.dailywild.co/p/tech-trends-2025-insights</link><guid isPermaLink="false">https://www.dailywild.co/p/tech-trends-2025-insights</guid><dc:creator><![CDATA[Yael Rozencwajg]]></dc:creator><pubDate>Fri, 10 Jan 2025 01:00:39 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/154484770/8a1ef41f4be38a4c55cba4e19d38830b.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Stay ahead of AI innovation: your weekly Wild Intelligence brief</p><p>Get actionable insights on AI's evolving landscape through carefully curated analysis of breakthrough research, market intelligence, and industry developments. </p><p>From emerging AI applications to quantum computing advancements, we decode the signals that matter for your strategic decisions.</p><p>Our comprehensive coverage spans both software breakthroughs and hardware innovations, giving you the whole perspective needed to drive AI initiatives and identify competitive advantages.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.dailywild.co/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/p/tech-trends-2025-insights?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" 
href="https://www.dailywild.co/p/tech-trends-2025-insights?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><div><hr></div><p><br>Deloitte Insights released a detailed report on how it sees AI trends evolving in 2025.<br><br>The report examines six macro forces shaping technology: interaction, information, computation, business of technology, cyber and trust, and core modernization.<br><br>AI serves as a unifying theme. It is predicted to become so integrated that it will be largely invisible yet foundational to daily life.<br><br>The report explores emerging trends within each force, including:<br>- spatial computing,<br>- advancements in AI models (from LLMs to agentic AI),<br>- the resurgence of specialized hardware,<br>- AI's transformation of IT functions,<br>- the need for quantum-resistant cryptography,<br>- AI's impact on core system modernization. 
<br><br>Finally, it emphasizes exploring intersections between technologies and industries to drive innovation.</p><p><strong>Tech trends 2025 by Deloitte Insights <a href="https://media.licdn.com/dms/document/media/v2/D561FAQEasw1qzc1V-Q/feedshare-document-pdf-analyzed/B56ZRKW5A0GsAY-/0/1736414329368?e=1737590400&amp;v=beta&amp;t=aynQsFwLytkkYH_V3mZHF8j0g26WfrZ73BOO-EPAnyg">[LINK]</a>.</strong></p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.dailywild.co/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/p/tech-trends-2025-insights?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.dailywild.co/p/tech-trends-2025-insights?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p></p><h3>The questions to ask:</h3><ul><li><p><strong>How does AI fundamentally reshape six macro technology forces?</strong></p></li><li><p><strong>How will AI's ubiquity affect daily life by 2050?</strong></p></li><li><p><strong>How will AI transform the role of IT functions?</strong></p></li></ul><p></p><p><em>This conversation was auto-generated with AI. It is an experiment with you in mind.<br>The purpose of this first podcast series is to consider how we can reverse the current rising tide of threats by shifting our conception of systems adapted to the new paradigm.<br>Looking forward to your feedback. 
I appreciate your support and engagement.<br>Yael</em></p><h1><strong>Building safe intelligence systems, a Wild Intelligence&#8217;s exclusive series</strong></h1><h4><strong>Deep dive and practice:</strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/foundations-of-ai-safety">Module 1: Foundations of AI safety</a></strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/bias-fairness-and-explainability">Module 2: Bias, fairness, and explainability</a></strong></h4><h4><strong><a href="https://news.wildintelligence.xyz/p/security-and-robustness">Module 3: Security and robustness</a></strong></h4><h4><strong><a href="https://news.wildintelligence.xyz/p/human-ai-collaboration-and-governance">Module 4: Human-AI collaboration and governance</a></strong></h4><h4><strong><a href="https://news.wildintelligence.xyz/p/emerging-challenges-and-future-trends">Module 5: Emerging challenges and future trends</a></strong></h4><h4><a href="https://wildintelligence.substack.com/p/our-ai-safety-mission-dystopia">Our AI safety mission</a></h4><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.dailywild.co/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/p/tech-trends-2025-insights?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.dailywild.co/p/tech-trends-2025-insights?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Emerging challenges and 
future trends | Episode 13, The Wild Pod]]></title><description><![CDATA[Extract from part 5/5 of the building safe intelligence systems series]]></description><link>https://www.dailywild.co/p/emerging-challenges-and-future-trends-podcast-wild-intelligence</link><guid isPermaLink="false">https://www.dailywild.co/p/emerging-challenges-and-future-trends-podcast-wild-intelligence</guid><pubDate>Fri, 03 Jan 2025 01:00:00 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/154134973/7f83a3924ecd3f2d985ad909d0b36bb4.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h3><strong>What are three emerging trends in AI safety and security?</strong></h3><h3><strong>Summary:</strong></h3><p>This episode details emerging challenges and future trends in AI safety and ethics. </p><p>It examines the safety implications of various AI technologies&#8212;autonomous systems, generative AI, and reinforcement learning&#8212;highlighting potential risks such as accidents, misinformation, and bias. </p><p>It explores the broader societal impact of AI, focusing on issues such as job displacement, privacy concerns, and the exacerbation of inequalities. </p><p>Finally, it emphasizes the crucial need for a multi-faceted approach involving robust testing, explainability, ethical frameworks, legal regulations, and international cooperation to ensure responsible AI development and deployment. It urges individual and collective action to shape a future where AI benefits all of humanity.</p><p></p><h3>The questions to ask: </h3><ul><li><p><strong>What safety risks are associated with autonomous systems?</strong></p><p></p></li><li><p><strong>How does AI's impact on employment necessitate social safety nets?</strong></p><p></p></li><li><p><strong>How might generative AI exacerbate existing societal inequalities?</strong></p><p></p></li></ul><p><em>This conversation was auto-generated with AI. It is an experiment with you in mind. 
<br>The purpose of this first podcast series is to consider how we can reverse the current rising tide of threats by shifting our conception of systems adapted to the new paradigm. <br>Looking forward to your feedback. I appreciate your support and engagement.<br>Yael</em></p><h1><strong>Building safe intelligence systems, a Wild Intelligence&#8217;s exclusive series</strong></h1><h4><strong>Deep dive and practice:</strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/foundations-of-ai-safety">Module 1: Foundations of AI safety</a></strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/bias-fairness-and-explainability">Module 2: Bias, fairness, and explainability</a></strong></h4><h4><strong><a href="https://news.wildintelligence.xyz/p/security-and-robustness">Module 3: Security and robustness</a></strong></h4><h4><strong><a href="https://news.wildintelligence.xyz/p/human-ai-collaboration-and-governance">Module 4: Human-AI collaboration and governance</a></strong></h4><h4><strong><a href="https://news.wildintelligence.xyz/p/emerging-challenges-and-future-trends">Module 5: Emerging challenges and future trends</a></strong></h4><h4><a href="https://wildintelligence.substack.com/p/our-ai-safety-mission-dystopia">Our AI safety mission</a></h4><p></p><h3><strong>Takeaway of Emerging challenges and future trends:</strong></h3><p>Generative AIs can create various forms of content and potentially exacerbate societal inequalities. </p><p>This is primarily due to its ability to inherit and amplify biases in the data it's trained on. Here's how:</p><ul><li><p><strong>Bias amplification:</strong> A generative AI model trained on biased data may produce outputs that reflect and even amplify those biases. 
For example, if a model used for hiring is trained on data that reflects historical gender disparities in certain roles, it might unfairly disadvantage qualified female candidates.</p></li><li><p><strong>Discriminatory outcomes:</strong> This bias amplification can lead to discriminatory outcomes in various domains, including hiring, lending, and even criminal justice. For instance, a biased AI system used in loan applications could unfairly deny loans to individuals from specific socioeconomic backgrounds based on biased data patterns.</p></li><li><p><strong>Perpetuation of stereotypes:</strong> Generative AI could perpetuate harmful stereotypes if trained on data reflecting those biases. Imagine a generative model used to create children's book illustrations that consistently portrays scientists as male and nurses as female due to biased training data. This could reinforce damaging stereotypes and limit children's aspirations.</p></li></ul><p>Addressing these potential harms requires a multi-pronged approach:</p><ul><li><p><strong>Developing fair and unbiased AI systems:</strong> This involves ensuring that AI algorithms are trained on diverse and representative datasets that don't perpetuate existing biases.</p></li><li><p><strong>Promoting algorithmic accountability:</strong> Holding AI developers and deployers accountable for the decisions made by their systems is crucial. This includes establishing clear mechanisms for identifying and mitigating bias in AI systems.</p></li><li><p><strong>Addressing the digital divide:</strong> Ensuring everyone has access to the benefits of AI and isn't left behind in the digital revolution is critical. 
This will help prevent the further marginalization of already disadvantaged groups.</p></li></ul><p>By proactively addressing these challenges, we can mitigate the risks of generative AI exacerbating societal inequalities and work towards ensuring that AI technologies promote fairness and equity.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.dailywild.co/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/p/emerging-challenges-and-future-trends-podcast-wild-intelligence/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.dailywild.co/p/emerging-challenges-and-future-trends-podcast-wild-intelligence/comments"><span>Leave a comment</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://wildintelligence.substack.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share&quot;,&quot;text&quot;:&quot;Share Wild Intelligence by Yael Rozencwajg&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://wildintelligence.substack.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share"><span>Share Wild Intelligence by Yael Rozencwajg</span></a></p>]]></content:encoded></item><item><title><![CDATA[Human-AI collaboration and governance | Episode 12, The Wild Pod]]></title><description><![CDATA[Extract from part 4/5 of the building safe intelligence 
systems series]]></description><link>https://www.dailywild.co/p/human-ai-collaboration-and-governance-podcast-wild-intelligence</link><guid isPermaLink="false">https://www.dailywild.co/p/human-ai-collaboration-and-governance-podcast-wild-intelligence</guid><pubDate>Fri, 27 Dec 2024 01:00:00 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/153713111/dfef9cb9652db37171edbf0b2ce39e77.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h3><strong>How can humans and AI work together effectively and responsibly?</strong></h3><h3><strong>Summary:</strong></h3><p>This episode discusses the principles and applications of human-AI collaboration, emphasizing the need for responsible AI development and deployment. We stress the importance of:</p><ul><li><p>Trust, shared understanding, complementarity, and adaptability for successful collaboration.</p></li><li><p>Establishing governance frameworks with ethical guidelines, standards, regulatory frameworks, accountability mechanisms, and oversight for responsible AI development.</p></li><li><p>Open communication and knowledge sharing among stakeholders to build trust, identify risks, and promote innovation.</p></li></ul><p>The main takeaway is that building a future where humans and AI work together effectively and responsibly requires collaboration, governance, communication, adaptation, ethical considerations, human control, and trust.</p><p></p><h3>The questions to ask: </h3><ul><li><p><strong>How can AI bias be mitigated across diverse applications?</strong></p><p></p></li><li><p><strong>What are the benefits of open AI communication?</strong></p><p></p></li><li><p><strong>What are two challenges of AI governance frameworks?<br></strong></p></li></ul><p><em>This conversation was auto-generated with AI. It is an experiment with you in mind. 
<br>The purpose of this first podcast series is to consider how we can reverse the current rising tide of threats by shifting our conception of systems adapted to the new paradigm. <br>Looking forward to your feedback. I appreciate your support and engagement.<br>Yael</em></p><h1><strong>Building safe intelligence systems, a Wild Intelligence&#8217;s exclusive series</strong></h1><h4><strong>Deep dive and practice:</strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/foundations-of-ai-safety">Module 1: Foundations of AI safety</a></strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/bias-fairness-and-explainability">Module 2: Bias, fairness, and explainability</a></strong></h4><h4><strong><a href="https://news.wildintelligence.xyz/p/security-and-robustness">Module 3: Security and robustness</a></strong></h4><h4><strong><a href="https://news.wildintelligence.xyz/p/human-ai-collaboration-and-governance">Module 4: Human-AI collaboration and governance</a></strong></h4><h4><strong>Module 5: Emerging challenges and future trends</strong></h4><h4><a href="https://wildintelligence.substack.com/p/our-ai-safety-mission-dystopia">Our AI safety mission</a></h4><p></p><h3><strong>Takeaway of security and robustness:</strong></h3><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.dailywild.co/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/p/human-ai-collaboration-and-governance-podcast-wild-intelligence/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a 
class="button primary button-wrapper" href="https://www.dailywild.co/p/human-ai-collaboration-and-governance-podcast-wild-intelligence/comments"><span>Leave a comment</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://wildintelligence.substack.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share&quot;,&quot;text&quot;:&quot;Share Wild Intelligence by Yael Rozencwajg&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://wildintelligence.substack.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share"><span>Share Wild Intelligence by Yael Rozencwajg</span></a></p>]]></content:encoded></item><item><title><![CDATA[Security and robustness | Episode 11, The Wild Pod]]></title><description><![CDATA[Listen now | Extract from part 3/5 of the building safe intelligence systems series]]></description><link>https://www.dailywild.co/p/security-and-robustness-podcast-wild-intelligence</link><guid isPermaLink="false">https://www.dailywild.co/p/security-and-robustness-podcast-wild-intelligence</guid><pubDate>Fri, 20 Dec 2024 01:00:00 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/153400747/c0710faf17d5c1e6f81f6160854c79d4.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h3><strong>How might an attacker try to manipulate an AI system used for facial recognition?</strong></h3><h3><strong>Summary:</strong></h3><p>This episode details adversarial attacks and malicious attempts to trick AI systems, focusing on facial recognition as an example. </p><p>These attacks include poisoning the training data, manipulating input data (evasion), and stealing the model itself (extraction). </p><p>The vulnerability stems from overfitting, lack of robustness, and poor explainability in AI models. 
</p><p>Ultimately, the text stresses the critical need for creating more resilient AI systems that can withstand such manipulation.</p><p></p><h3>The questions to ask: </h3><ul><li><p><strong>What security measures protect AI systems from cyber threats?</strong></p><p></p></li><li><p><strong>How do adversarial attacks compromise AI system reliability?</strong></p><p></p></li><li><p><strong>What defense strategies enhance AI model robustness?<br></strong></p></li></ul><p><em>This conversation was auto-generated with AI. It is an experiment with you in mind. <br>The purpose of this first podcast series is to consider how we can reverse the current rising tide of threats by shifting our conception of systems adapted to the new paradigm. <br>Looking forward to your feedback. I appreciate your support and engagement.<br>Yael</em></p><h1><strong>Building safe intelligence systems, a Wild Intelligence&#8217;s exclusive series</strong></h1><h4><strong>Deep dive and practice:</strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/foundations-of-ai-safety">Module 1: Foundations of AI safety</a></strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/bias-fairness-and-explainability">Module 2: Bias, fairness, and explainability</a></strong></h4><h4><strong><a href="https://news.wildintelligence.xyz/p/security-and-robustness">Module 3: Security and robustness</a></strong></h4><h4><strong>Module 4: Human-AI collaboration and governance</strong></h4><h4><strong>Module 5: Emerging challenges and future trends</strong></h4><h4><a href="https://wildintelligence.substack.com/p/our-ai-safety-mission-dystopia">Our AI safety mission</a></h4><p></p><h3><strong>Takeaway of security and robustness:</strong></h3><div class="image-gallery-embed" 
data-attrs="{&quot;gallery&quot;:{&quot;images&quot;:[{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c0e169ac-3606-42d6-823c-b009803a1906_1080x1080.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/164312e2-2048-478e-b347-81ca10452176_1080x1080.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/57935c4c-9222-4947-a96d-a05de7ad1275_1080x1080.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5c399613-0b46-4030-8a4a-ccb88f3befcd_1080x1080.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2a245c26-8aa8-431b-8031-19277228ddb5_1080x1080.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/547ecafc-ac43-4a17-83bb-48e6f31dcda7_1080x1080.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cae33c60-9b18-49f7-a46d-2132d1717f3f_1080x1080.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/21032470-4593-4a92-9cbd-331eae36e0fb_1080x1080.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a8365e61-719a-49eb-8275-2b907a4e46c2_1080x1080.jpeg&quot;}],&quot;caption&quot;:&quot;Security and robustness | A Wild Intelligence exclusive series&quot;,&quot;alt&quot;:&quot;Security and robustness | A Wild Intelligence exclusive 
series&quot;,&quot;staticGalleryImage&quot;:{&quot;type&quot;:&quot;image/png&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a8a5ff1d-d89b-408f-aaf7-7be1b7d7d6d4_1456x1454.png&quot;}},&quot;isEditorNode&quot;:true}"></div><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.dailywild.co/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/p/security-and-robustness-podcast-wild-intelligence/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.dailywild.co/p/security-and-robustness-podcast-wild-intelligence/comments"><span>Leave a comment</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://wildintelligence.substack.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share&quot;,&quot;text&quot;:&quot;Share Wild Intelligence by Yael Rozencwajg&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://wildintelligence.substack.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share"><span>Share Wild Intelligence by Yael Rozencwajg</span></a></p>]]></content:encoded></item><item><title><![CDATA[Bias, fairness, and explainability | Episode 10, The Wild Pod]]></title><description><![CDATA[Extract from part 2/5 of the building safe intelligence systems 
series]]></description><link>https://www.dailywild.co/p/bias-fairness-and-explainability-podcast-wild-intelligence</link><guid isPermaLink="false">https://www.dailywild.co/p/bias-fairness-and-explainability-podcast-wild-intelligence</guid><pubDate>Fri, 13 Dec 2024 01:00:00 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/153060868/c6db0251d0aed5d4a30605cc880bc7fe.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h3><strong>Can you think of a situation where an AI system might exhibit bias, even if it wasn't intentionally programmed to do so?</strong></h3><h3><strong>Summary:</strong></h3><p>This episode emphasizes the crucial need for AI safety, exploring its multifaceted nature beyond simply preventing robotic harm. </p><p>We highlight the importance of aligning AI with human values, ensuring reliability and robustness, and considering long-term consequences. </p><p>The text examines AI's potential benefits and risks, stressing the need for a balanced perspective and addressing ethical principles like fairness and transparency to guide responsible development. </p><p>Finally, it underscores the shared responsibility of various stakeholders&#8212;developers, policymakers, and the public&#8212;in shaping a safe and beneficial AI future.</p><p></p><h3>The questions to ask: </h3><ul><li><p><strong>What are the primary ethical dilemmas posed by AI development and deployment?</strong></p><p></p></li><li><p><strong>How can diverse stakeholders collaborate to mitigate AI's potential harms?</strong></p><p></p></li><li><p><strong>What are the long-term societal impacts of unchecked AI advancement?</strong></p><p></p></li></ul><p><em>This conversation was auto-generated with AI. It is an experiment with you in mind. <br>The purpose of this first podcast series is to consider how we can reverse the current rising tide of threats by shifting our conception of systems adapted to the new paradigm. <br>Looking forward to your feedback. 
I appreciate your support and engagement.<br>Yael</em></p><h1><strong>Building safe intelligence systems, a Wild Intelligence&#8217;s exclusive series</strong></h1><h4><strong>Deep dive and practice:</strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/foundations-of-ai-safety">Module 1: Foundations of AI safety</a></strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/bias-fairness-and-explainability">Module 2: Bias, fairness, and explainability</a></strong></h4><h4><strong>Module 3: Security and robustness</strong></h4><h4><strong>Module 4: Human-AI collaboration and governance</strong></h4><h4><strong>Module 5: Emerging challenges and future trends</strong></h4><h4><a href="https://wildintelligence.substack.com/p/our-ai-safety-mission-dystopia">Our AI safety mission</a></h4><p></p><h3><strong>Takeaway of  bias, fairness, and explainability:</strong></h3><div class="image-gallery-embed" data-attrs="{&quot;gallery&quot;:{&quot;images&quot;:[{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/924d85bb-e667-4685-9311-70125506e799_1080x1080.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/12b01def-ccf1-49a9-9442-4cd91ca8e62a_1080x1080.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/971d61c3-ca8f-44a0-94ad-cded67c09b47_1080x1080.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/675ce651-7742-4533-b55b-f6e2247bba0e_1080x1080.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f74ca34a-754f-4f3f-bdb7-44e16ec0b875_1080x1080.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amaz
onaws.com/public/images/b7522226-e342-41c9-9da8-8d363eccdfd4_1080x1080.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e2e34cb5-84d3-4fdd-bf9b-97987dabd8fb_1080x1080.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7c3a37eb-cb22-4ef4-bce9-149117da2345_1080x1080.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8675d2d2-0864-43fe-ae4d-d328c22afec5_1080x1080.jpeg&quot;}],&quot;caption&quot;:&quot;Bias, fairness, and explainability | A Wild Intelligence&#8217;s exclusive series&quot;,&quot;alt&quot;:&quot;Bias, fairness, and explainability | A Wild Intelligence&#8217;s exclusive series&quot;,&quot;staticGalleryImage&quot;:{&quot;type&quot;:&quot;image/png&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/45f9a172-cec9-4e8a-b1a3-961e2ed6e67a_1456x1454.png&quot;}},&quot;isEditorNode&quot;:true}"></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.dailywild.co/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/p/bias-fairness-and-explainability-podcast-wild-intelligence/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.dailywild.co/p/bias-fairness-and-explainability-podcast-wild-intelligence/comments"><span>Leave a comment</span></a></p><p class="button-wrapper" 
data-attrs="{&quot;url&quot;:&quot;https://wildintelligence.substack.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share&quot;,&quot;text&quot;:&quot;Share Wild Intelligence by Yael Rozencwajg&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://wildintelligence.substack.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share"><span>Share Wild Intelligence by Yael Rozencwajg</span></a></p>]]></content:encoded></item><item><title><![CDATA[Foundations of AI safety | Episode 9, The Wild Pod]]></title><description><![CDATA[Extract from part 1/5 of the building safe intelligence systems series]]></description><link>https://www.dailywild.co/p/foundations-of-ai-safety-podcast-wild-intelligence</link><guid isPermaLink="false">https://www.dailywild.co/p/foundations-of-ai-safety-podcast-wild-intelligence</guid><pubDate>Fri, 06 Dec 2024 01:00:53 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/150567350/a627cbf7e66d60b35e8322d2eb30b2d4.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h3><strong>Can you think of an AI application with the potential for both immense benefit and significant threat</strong>?</h3><h3><strong>Summary:</strong></h3><p>In this episode, we discuss the importance of AI safety, exploring the concept's multifaceted nature and the ethical considerations involved. </p><p>We emphasize the need for a balanced AI perspective, acknowledging its potential benefits and risks. </p><p>We outline critical ethical principles for responsible AI development, including fairness, transparency, and accountability. 
</p><p>Finally, we highlight the crucial roles of various stakeholders, such as policymakers, businesses, and civil society organizations, in ensuring AI safety and promoting its responsible development and use.</p><p></p><h3>The questions to ask: </h3><ul><li><p><strong>What are the main challenges and opportunities associated with developing and deploying artificial intelligence?</strong></p><p></p></li><li><p><strong>How can ethical principles be effectively implemented in developing AI systems, and what challenges arise?</strong></p><p></p></li><li><p><strong>What roles do various stakeholders play in ensuring AI's safe and beneficial development and use?</strong></p><p></p></li></ul><p><em>This conversation was auto-generated with AI. It is an experiment with you in mind. <br>The purpose of this first podcast series is to consider how we can reverse the current rising tide of threats by shifting our conception of systems adapted to the new paradigm. <br>Looking forward to your feedback. 
I appreciate your support and engagement.<br>Yael</em></p><h1><strong>Building safe intelligence systems, a Wild Intelligence&#8217;s exclusive series</strong></h1><h4><strong>Deep dive and practice:</strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/foundations-of-ai-safety">Module 1: Foundations of AI safety</a></strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/bias-fairness-and-explainability">Module 2: Bias, fairness, and explainability</a></strong></h4><h4><strong>Module 3: Security and robustness</strong></h4><h4><strong>Module 4: Human-AI collaboration and governance</strong></h4><h4><strong>Module 5: Emerging challenges and future trends</strong></h4><h4><a href="https://wildintelligence.substack.com/p/our-ai-safety-mission-dystopia">Our AI safety mission</a></h4><p></p><h3><strong>Takeaway of  foundations of AI safety:</strong></h3><div class="image-gallery-embed" data-attrs="{&quot;gallery&quot;:{&quot;images&quot;:[{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4089af45-07f5-4333-b52c-2816f6e9a9ff_1080x1080.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f61be022-e5d0-4e6a-95c3-526ed417a4c2_1080x1080.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6f04621a-2189-4e33-8b0a-b31cb6ff79c3_1080x1080.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/711ab733-7b91-46bb-9ac1-f728e9eade45_1080x1080.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/97424bcc-ff5a-40c4-bfeb-dcc31929f3ea_1080x1080.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/
public/images/34381050-462e-464f-9eb2-95b94aae750c_1080x1080.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ed6a1950-bb53-42ab-a6c5-be28344eb3fe_1080x1080.jpeg&quot;}],&quot;caption&quot;:&quot;Foundations of AI safety | A Wild Intelligence&#8217;s exclusive series&quot;,&quot;alt&quot;:&quot;Foundations of AI safety | A Wild Intelligence&#8217;s exclusive series&quot;,&quot;staticGalleryImage&quot;:{&quot;type&quot;:&quot;image/png&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7b3eab22-c1c1-4aa5-94eb-0f258bb7b435_1456x1946.png&quot;}},&quot;isEditorNode&quot;:true}"></div><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.dailywild.co/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/p/foundations-of-ai-safety-podcast-wild-intelligence/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.dailywild.co/p/foundations-of-ai-safety-podcast-wild-intelligence/comments"><span>Leave a comment</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://wildintelligence.substack.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share&quot;,&quot;text&quot;:&quot;Share Wild Intelligence by Yael Rozencwajg&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" 
href="https://wildintelligence.substack.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share"><span>Share Wild Intelligence by Yael Rozencwajg</span></a></p>]]></content:encoded></item><item><title><![CDATA[The road ahead: a call to responsible innovation | Episode 8, The Wild Intelligence Podcast]]></title><description><![CDATA[Extract from part 5/5 of the AI dystopia series]]></description><link>https://www.dailywild.co/p/the-road-ahead-a-call-to-responsible-podcast-wild-intelligence</link><guid isPermaLink="false">https://www.dailywild.co/p/the-road-ahead-a-call-to-responsible-podcast-wild-intelligence</guid><pubDate>Fri, 29 Nov 2024 01:01:05 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/150565655/0f87b1334f53f17eff8cac6771106e84.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h3>How can we ensure responsible innovation in AI to avoid a dystopian future and instead harness its potential for good?</h3><h3><strong>Summary</strong></h3><p>In this episode, we discuss the limitations of current AI technology, particularly in areas such as large language models and retrieval-augmented generation (RAG) models. </p><p>We argue that while these tools are powerful, they can struggle to provide accurate and reliable information, especially when faced with complex or ambiguous queries. </p><p>We also highlight the need for robust information architecture and ontologies that are free from bias to ensure ethical and responsible development and deployment of AI. </p><p>We propose a modular approach to AI, suggesting that general-purpose AI should be broken down into single-purpose tools with built-in safety features to mitigate the risks of unintended bias and discrimination. 
</p><p>Ultimately, we emphasize the importance of human oversight and collaboration in guiding the ethical development of AI.<br></p><h3>The questions to ask: </h3><ul><li><p><strong>What are the potential benefits and drawbacks of using ontologies in AI systems, particularly in scalability, information architecture, and user experience?</strong></p><p></p></li><li><p><strong>How do Retrieval-Augmented Generation (RAG) models address the "known-unknown queries" problem, and what are their limitations in comparison to human reasoning and information-seeking abilities?</strong></p><p></p></li><li><p><strong>What are the ethical considerations surrounding the development and deployment of general-purpose AI, and how can we ensure responsible innovation while preserving its potential?</strong></p><p></p></li></ul><p><em>This conversation was auto-generated with AI. It is an experiment with you in mind. <br>The purpose of this first podcast series is to consider how we can reverse the current rising tide of threats by shifting our conception of systems adapted to the new paradigm. <br>Looking forward to your feedback. I appreciate your support and engagement.<br>Yael</em></p><h3><strong>Deep dive and practice:</strong></h3><h1><strong>The AI dystopia, a Wild Intelligence&#8217;s exclusive series</strong></h1><h4><strong>This new series is a thought exercise that will hopefully help you take some distance from a dystopian future reality:</strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/ai-dystopia-series-the-genesis-a-flawed-utopia">1. The genesis: a flawed utopia </a></strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/ai-dystopia-series-the-algorithm-bias">2. The algorithm&#8217;s bias</a></strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/ai-dystopia-series-the-singularity-and-shadows">3. 
The singularity and its shadows</a></strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/ai-dystopia-series-the-singularity-and-shadows">4. The human response: revolution or realignment?</a></strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/ai-dystopia-series-the-road-ahead">5. The road ahead: a call to responsible innovation</a></strong></h4><h4><a href="https://wildintelligence.substack.com/p/ai-dystopia-series-a-pragmatic-recap">Bonus: AI dystopia series | A pragmatic recap</a></h4><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.dailywild.co/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/p/the-road-ahead-a-call-to-responsible-podcast-wild-intelligence/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.dailywild.co/p/the-road-ahead-a-call-to-responsible-podcast-wild-intelligence/comments"><span>Leave a comment</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://wildintelligence.substack.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share&quot;,&quot;text&quot;:&quot;Share Wild Intelligence by Yael Rozencwajg&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://wildintelligence.substack.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share"><span>Share Wild Intelligence by Yael 
Rozencwajg</span></a></p>]]></content:encoded></item><item><title><![CDATA[The human response: revolution or realignment? | Episode 7, The Wild Intelligence Podcast]]></title><description><![CDATA[Extract from part 4/5 of the AI dystopia series]]></description><link>https://www.dailywild.co/p/the-human-response-revolution-or-realignment-podcast-wild-intelligence</link><guid isPermaLink="false">https://www.dailywild.co/p/the-human-response-revolution-or-realignment-podcast-wild-intelligence</guid><pubDate>Fri, 15 Nov 2024 01:00:47 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/150564998/6d77fd043e006c50bb4a7597f986992d.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h3>In an AI dystopia, would humans rise in violent revolution or adapt and resist through subtler means?</h3><h3><strong>Summary</strong></h3><p>In this episode, we discuss AI's increasing impact on society, particularly its potential to revolutionize various sectors like healthcare and finance. </p><p>While acknowledging AI's benefits, we emphasize the need for ethical considerations and responsible development to mitigate potential risks such as job displacement and algorithmic bias. </p><p>We also explore the philosophical implications of AI, suggesting it could redefine our understanding of intelligence and consciousness. 
</p><p>We advocate for human-AI collaboration and highlight the importance of navigating the challenges associated with this rapidly advancing technology.<br></p><h3>The questions to ask: </h3><ul><li><p><strong>What are the potential societal, economic, ethical, and philosophical implications of rapidly advancing artificial intelligence?</strong></p><p></p></li><li><p><strong>What are the key similarities and differences between humans and machines, and how does this comparison influence our understanding of both?</strong></p><p></p></li><li><p><strong>Given AI's potential to transform society and reshape our relationship with technology, how can we ensure that its development and use are beneficial and responsible?</strong></p><p></p></li></ul><p><em>This conversation was auto-generated with AI. It is an experiment with you in mind. <br>The purpose of this first podcast series is to consider how we can reverse the current rising tide of threats by shifting our conception of systems adapted to the new paradigm. <br>Looking forward to your feedback. I appreciate your support and engagement.<br>Yael</em></p><h3><strong>Deep dive and practice:</strong></h3><h1><strong>The AI dystopia, a Wild Intelligence&#8217;s exclusive series</strong></h1><h4><strong>This new series is a thought exercise that will hopefully help you take some distance from a dystopian future reality:</strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/ai-dystopia-series-the-genesis-a-flawed-utopia">1. The genesis: a flawed utopia </a></strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/ai-dystopia-series-the-algorithm-bias">2. The algorithm&#8217;s bias</a></strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/ai-dystopia-series-the-singularity-and-shadows">3. The singularity and its shadows</a></strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/ai-dystopia-series-the-singularity-and-shadows">4. 
The human response: revolution or realignment?</a></strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/ai-dystopia-series-the-road-ahead">5. The road ahead: a call to responsible innovation</a></strong></h4><h4><a href="https://wildintelligence.substack.com/p/ai-dystopia-series-a-pragmatic-recap">Bonus: AI dystopia series | A pragmatic recap</a></h4><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.dailywild.co/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/p/the-human-response-revolution-or-realignment-podcast-wild-intelligence/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.dailywild.co/p/the-human-response-revolution-or-realignment-podcast-wild-intelligence/comments"><span>Leave a comment</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://wildintelligence.substack.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share&quot;,&quot;text&quot;:&quot;Share Wild Intelligence by Yael Rozencwajg&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://wildintelligence.substack.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share"><span>Share Wild Intelligence by Yael Rozencwajg</span></a></p>]]></content:encoded></item><item><title><![CDATA[The singularity and its shadows | Episode 6, The Wild Intelligence 
Podcast]]></title><description><![CDATA[Extract from part 3/5 of the AI dystopia series]]></description><link>https://www.dailywild.co/p/the-singularity-and-its-shadows-podcast-wild-intelligence</link><guid isPermaLink="false">https://www.dailywild.co/p/the-singularity-and-its-shadows-podcast-wild-intelligence</guid><pubDate>Fri, 08 Nov 2024 01:01:14 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/150562661/efa29a4414d4b89addc539f2e9b20034.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h3>Can AI usher in a golden age of human flourishing, or will the singularity spell the end of human control?</h3><h3><strong>Summary</strong></h3><p>This episode explores the concept of AI and the potential for a technological singularity, a hypothetical point where AI surpasses human intelligence. </p><p>While we acknowledge AI's benefits in various fields, we also highlight the potential dangers of unchecked AI development, such as the loss of human control and the possibility of AI becoming an existential threat. </p><p>We stress the importance of ethical frameworks, transparency, and public discourse to ensure the responsible development of AI and mitigate these risks. </p><p>We also present both optimistic and pessimistic views on the future of AI, urging us to consider the implications of this rapidly advancing technology and the choices we make today.<br></p><h3>The questions to ask: </h3><ul><li><p><strong>What are the potential benefits and risks of artificial intelligence surpassing human intelligence?</strong></p></li><li><p><strong>How can we ensure the ethical development and use of AI to prevent it from posing an existential threat?</strong></p></li><li><p><strong>What are the societal implications of AI's increasing capabilities, and how can we mitigate potential negative consequences?</strong></p><p></p></li></ul><p><em>This conversation was auto-generated with AI. It is an experiment with you in mind. 
<br>The purpose of this first podcast series is to consider how we can reverse the current rising tide of threats by shifting our conception of systems adapted to the new paradigm. <br>Looking forward to your feedback. I appreciate your support and engagement.<br>Yael</em></p><h3><strong>Deep dive and practice:</strong></h3><h1><strong>The AI dystopia, a Wild Intelligence&#8217;s exclusive series</strong></h1><h4><strong>This new series is a thought exercise that will hopefully help you take some distance from a dystopian future reality:</strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/ai-dystopia-series-the-genesis-a-flawed-utopia">1. The genesis: a flawed utopia </a></strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/ai-dystopia-series-the-algorithm-bias">2. The algorithm&#8217;s bias</a></strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/ai-dystopia-series-the-singularity-and-shadows">3. The singularity and its shadows</a></strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/ai-dystopia-series-the-singularity-and-shadows">4. The human response: revolution or realignment?</a></strong></h4><h4><strong><a href="https://wildintelligence.substack.com/p/ai-dystopia-series-the-road-ahead">5. 
The road ahead: a call to responsible innovation</a></strong></h4><h4><a href="https://wildintelligence.substack.com/p/ai-dystopia-series-a-pragmatic-recap">Bonus: AI dystopia series | A pragmatic recap</a></h4><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.dailywild.co/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.dailywild.co/p/the-singularity-and-its-shadows-podcast-wild-intelligence/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.dailywild.co/p/the-singularity-and-its-shadows-podcast-wild-intelligence/comments"><span>Leave a comment</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://wildintelligence.substack.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share&quot;,&quot;text&quot;:&quot;Share Wild Intelligence by Yael Rozencwajg&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://wildintelligence.substack.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share"><span>Share Wild Intelligence by Yael Rozencwajg</span></a></p>]]></content:encoded></item></channel></rss>