More than 30 years of OOP experience meets modern innovation: dive deep with us into the most important topics in contemporary software architecture – at the "Virtual Deep Dives | powered by OOP".
This conference is conceived as an online complement to OOP Munich and offers the opportunity to engage intensively and interactively with the latest trends and best practices in software architecture. Our experts and industry leaders will provide deep insights into how they work and share valuable knowledge that you can integrate directly into your projects.
» To the Virtual Deep Dives
The times given in the OOP 2024 conference program are in Central European Time (CET).
During the talk, we'll dive into the historical context of Generative AI and examine the challenges these models pose. From legal compliance to fairness, transparency, security, and accountability, we'll discuss strategies for implementing Responsible AI principles.
It's important to note that the landscape for AI-driven products is still evolving, and there are no established best practices. The legislative framework surrounding these models remains uncertain, making it even more vital to engage in discussions that shape responsible AI practices.
Target Audience: Decision Makers, Developers, Managers, Everyone - AI-driven products require cross-functional teams
Prerequisites: None
Level: Basic
Extended Abstract:
Foundation models like GPT-4, BERT, or DALL-E 2 are remarkable in their versatility, trained on vast datasets using self-supervised learning. However, the adaptability of these models brings forth ethical, socio-technical, and legal questions that demand responsible development and deployment.
During the talk, we'll delve into the history of AI to better understand the evolution of generative models. We'll explore strategies for implementing Responsible AI principles, tackling issues such as legal compliance, fairness, transparency, security, and accountability, as well as their broader impact on society.
It's important to note that there are currently no established best practices for AI-driven products, and the legislative landscape surrounding them remains unclear. This underscores the significance of our discussion as we collectively navigate this emerging field.
Isabel Bär is a skilled professional with a Master's degree in Data Engineering from the Hasso-Plattner-Institute. She has made contributions in the field of AI software, focusing on areas like MLOps and Responsible AI. Beyond being a regular speaker at various conferences, she has also taken on the role of organizing conferences on Data and AI, showcasing her commitment to knowledge sharing and community building. Currently, she is working as a consultant in a German IT consulting company.
Are Large Language Models (LLMs) sophisticated pattern matchers ('parrots') without understanding, or potential prodigies that will eventually surpass human intelligence? Drawing insights from both camps, we attempt to reconcile these perspectives, examine the current state of LLMs and their potential trajectories, and consider the profound impact these developments will have on how we engineer software in the years to come.
Target Audience: Developers and Architects
Prerequisites: A basic understanding of Large Language Models is helpful but not required
Level: Basic
Extended Abstract:
Large Language Models (LLMs) are complex 'black box' systems. Their capabilities remain largely mysterious, only beginning to be understood through interaction and experimentation. While these models occasionally yield surprisingly accurate responses, they also exhibit shocking, elementary mistakes and limitations, creating more confusion than clarity.
When seeking expert insights, we find two diverging perspectives. On one side, we have thinkers like Noam Chomsky and AI experts such as Yann LeCun, who view LLMs as stochastic 'parrots' — sophisticated pattern matchers without true comprehension.
In contrast, AI pioneers like Geoffrey Hinton and Ilya Sutskever see LLMs as potential 'prodigies' — AI systems capable of eventually surpassing human intelligence — while visionaries like Yuval Noah Harari view LLMs as substantial societal threats.
Regardless of whether we see LLMs as 'parrots' or 'prodigies', they are undeniably catalyzing a paradigm shift in software engineering, broadening horizons and pushing the boundaries of the field.
What are the theories underpinning these experts' views? Can their perspectives be reconciled, and what can we learn for the future of software engineering?
To answer these questions, we examine the current capabilities and developments of LLMs and explore their potential trajectories.
Steve Haupt, an agile software developer at andrena objects, views software development as a quality-driven craft. Fascinated by AI, he explores its implications for software craftsmanship, working on AI projects and developing best practices. Steve focuses on applying Clean Code and XP principles to AI development. He regularly speaks on AI and co-created an AI training course, aiming to bridge traditional software development with modern AI technologies for sustainable solutions.
More content from this speaker? Have a look at sigs.de: https://www.sigs.de/experten/steve-haupt/