
Insights from TED’s first AI conference

Handshake Europe’s Engineering Lead, Markus, shares the talks he found most insightful at TED AI.

Looking back at last year, 2023 was the year we all got to play around with AI. Unlike previous AI-powered applications and offerings, ChatGPT uniquely succeeded at grabbing the attention of millions of people worldwide and putting the potential of generative AI at everyone’s fingertips for the first time. Thanks to its free-tier offering and ease of use, it quickly became “the fastest-growing app in the history of the web.”


This playful moment was characterized by the variety of applications people found for AI in their daily lives: crafting poems, summarising emails, doing their homework, telling jokes, recommending movies, conversing with ChatGPT like a friend, and learning about new topics.

At the same time, it awakened many people to the potential of the AI revolution already underway, sparking both excitement and fear. It is therefore no surprise that 2023 brought unprecedented advances in AI and also marked peak AI hype, with inflated expectations (see Gartner), accompanied by fierce debates about AI safety risks, necessary regulation, AI’s impact on the labor market, rising inequality, and the dangers of tech monopolies.

In this kind of environment, it becomes difficult to make out the different viewpoints clearly, separate opinion from fact, and avoid getting lost in a one-sided AI Doomer or Techno-Optimist perspective. It helps to remember that the starting point of these debates is a shared acknowledgment: AI has the potential to affect the lives of billions of people and to bring manifold changes to our economy and society. The core disagreement lies in differing expectations about whether these changes will prove detrimental or beneficial for humankind, and about our ability to steer this technological development onto a safe and responsible path.

With some of these opinions sitting at the extreme ends of the spectrum between unbridled optimism (“AI will solve all our problems”) and existential pessimism (“AI will kill us all”), it’s difficult to imagine a productive debate between the two sides. But with the stakes this high, it’s important to strive to create an open space for debate and to bring people together instead of creating and further reinforcing insurmountable divisions.

This is exactly what the TED team attempted with their first AI-focused conference, “TED AI,” which took place on October 17, 2023, at the Herbst Theatre in San Francisco. The single-day event, which I had the privilege of attending in person, had a fully packed schedule with 30+ talks delivered by a diverse group of speakers. The talks were grouped into four main blocks: Intelligence & Scale, Synthetics & Realities, Autonomy & Dependence, and Art & Storytelling.

The debate around AI safety featured prominently in many of these talks, and without giving too much away, I can tell you up front that no universal consensus was reached on this topic. Instead, the interesting part for the audience was hearing the different perspectives presented by each speaker and comparing their arguments for either an optimistic or a more worried and critical outlook on AI developments.

With the Herbst Theatre as the backdrop for this AI conference, the TED team chose a place of historical significance. This was the site where, on June 26, 1945, on the last day of a two-month-long conference, representatives of 50 countries signed the United Nations Charter, the foundational treaty of the United Nations. The hope expressed in that historic moment, to formulate “a solid structure upon which we can build a better world,” also rings true for me in the ongoing AI safety debate.


Below, I list the talks I found most insightful, along with my personal notes. Most talks from the conference have since been released as recordings and can be found on YouTube or the TED website. You can find the full schedule of talks here.

Andrew Ng

(recording not yet released)

A recognised leader in the field of AI, a professor at Stanford University, and the co-founder of the online education platform Coursera, Andrew Ng has made it his mission to teach millions of people about AI and how to use these new types of tools effectively.

In his talk, he compares AI to electricity and expresses his hope that it can be used to augment all parts of our lives in a similar way. He acknowledges that AI has existing problems that give people cause for concern, but argues these should ultimately be seen as engineering problems to which we can develop effective solutions. As an example, he discusses how AI currently reflects and amplifies social biases present in our society, an issue that can be addressed through reinforcement learning from human feedback (RLHF), a technique for training and fine-tuning AI models based on direct human feedback.
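
For readers curious what RLHF looks like in practice, here is a minimal sketch of its central reward-modeling step in PyTorch. The tiny network, random stand-in embeddings, and hyperparameters are illustrative assumptions of mine, not any lab’s actual implementation: a reward model is trained on human preference pairs so that it scores the preferred output higher, and that model then supplies the reward signal for RL fine-tuning (typically with an algorithm such as PPO).

```python
# Minimal sketch of RLHF's reward-modeling step (illustrative only).
# Humans rank pairs of model outputs; a reward model learns to score
# the preferred output higher via a Bradley-Terry-style loss.
import torch
import torch.nn as nn

EMBED_DIM = 16  # stand-in for embedded model outputs; real systems embed text

reward_model = nn.Sequential(
    nn.Linear(EMBED_DIM, 32), nn.ReLU(), nn.Linear(32, 1)
)
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Each training pair: (embedding of human-preferred answer, embedding of rejected answer).
preferred = torch.randn(64, EMBED_DIM)
rejected = torch.randn(64, EMBED_DIM)

for _ in range(100):
    r_pref = reward_model(preferred)  # scores for preferred outputs
    r_rej = reward_model(rejected)    # scores for rejected outputs
    # Push the preferred score above the rejected one: -log sigmoid(r_pref - r_rej)
    loss = -torch.nn.functional.logsigmoid(r_pref - r_rej).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The trained reward model is then used as the reward signal when
# fine-tuning the language model with reinforcement learning.
```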

He does not agree with the researchers calling for an AI moratorium, a temporary halt to all research and development of more advanced AI models. Instead, he provocatively states that “We don’t need less AI, we need more”. Given all the complex problems we as humans currently face in this world, he believes any additional (artificial) intelligence will be welcome and beneficial in helping to solve them.

He sees concerns about AI safety as a temporary phenomenon that accompanies the transition toward any major technological advancement, similar to the fears people once had about electricity. He points to how aviation was made safe through strict safety engineering, a practice he also suggests for the development of AI technology.


Percy Liang

(recording not yet released)

As a professor and researcher at Stanford University, Percy Liang focuses on research into foundation models. He is specifically concerned with transparency around these models and co-founded TogetherAI to drive the development of open-source generative AI models.

In his talk, he reflects fondly on his time as a PhD student working on Large Language Models (LLMs) in a very open, collaborative, and transparent research culture. He contrasts this with today’s closed and opaque culture, in which most impressive AI progress is driven by private companies that publish only very limited information about their work.

This leaves the following important questions unanswered:

  • How can independent researchers audit these models and their underlying training data?
  • What’s the environmental impact of these models and under which labor conditions were they developed?
  • What risk assessments were made by the company?
  • Whose values are these models trained towards and how are they determined?

He believes in the potential of open-source software to help return to a transparent, participative process for building AI. In his opinion, the leading AI companies need to be challenged by open-source initiatives in a similar way to how Encyclopedia Britannica’s status was challenged by Wikipedia.

Stephen Wolfram

How to Think Computationally About AI, the Universe and Everything

With his deep scientific background as a computer scientist and physicist, Stephen Wolfram gave a very technical talk, expanding on his long-term research into the question of whether computation underlies everything in our universe, i.e., whether everything is effectively computable. He also talked about how AI can integrate with the computational language developed by Wolfram Research.

He talked about the challenge of “computational irreducibility”: for some systems, there is no way to predict what they will do short of actually running them step by step. Generative AI models are an example of such systems, and he explained that humans might want to constrain them to make them easier to understand and reason about. But such constraints would also significantly limit what an AI can do. He believes it is inevitable that we will have powerful AI systems that make decisions which may seem pointless to us. As humans, we will have to get comfortable with that reality and should focus on defining and clearly stating what we want from AI, an ability he calls “computational thinking”.
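
As a toy illustration of computational irreducibility, here is Rule 30, the elementary cellular automaton Wolfram often uses as his canonical example; this sketch is my own addition, not code from the talk. Despite the trivially simple update rule, no general shortcut is known for predicting a given cell of a later row other than running the computation itself.

```python
# Rule 30: a one-dimensional cellular automaton whose behavior is
# believed to be computationally irreducible -- the only general way
# to know row n is to compute all n rows.
RULE_30 = {(1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
           (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    """Apply Rule 30 to one row (edges padded with zeros)."""
    padded = [0] + cells + [0]
    return [RULE_30[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

# Start from a single live cell and evolve; the pattern quickly becomes
# complex, with no known closed-form prediction for individual cells.
row = [0] * 15 + [1] + [0] * 15
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```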

Max Tegmark

How to Keep AI Under Control

As the author of the thought-provoking book “Life 3.0”, Max Tegmark has spent much of his time thinking deeply about AI and its potential impact on the future of life on Earth. He was a key organizer behind the open letter calling for a six-month AI moratorium. While he is optimistic overall about AI’s potential, he is deeply concerned by the current lack of AI safety standards and has therefore been vocal in calling for (government) regulation and educating the public through articles in the media (Time).

In his talk, he expressed his deep concerns about AI companies rushing to create a superintelligence, what would be called an Artificial General Intelligence (AGI). He is worried because the entire AI industry lacks a convincing plan for AI safety. The biggest insight he wants to share with the audience is that “almost none of the AI benefits people are excited about require superintelligence”. He cautions that “hubris kills” and that we shouldn’t try to fly too close to the sun.

He sees the most promising path forward in first mastering our current tools and finding more effective ways of verifying that AI systems do exactly what we expect them to do. He suggests expanding on existing approaches to formal verification of programs and even employing AI to create easily reviewable, provable descriptions of these systems.
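
To make this idea concrete, here is a minimal sketch of the kind of formal program verification Tegmark alludes to, using the Z3 SMT solver’s Python bindings (pip install z3-solver). The toy absolute-value function and its postcondition are my own illustrative assumptions, not an example from the talk: we model the program symbolically and ask the solver to prove a property for every possible input, rather than testing a handful of cases.

```python
# Minimal sketch of formal verification with the Z3 SMT solver.
from z3 import Int, If, prove

x = Int("x")                     # a symbolic integer standing for ANY input
abs_x = If(x >= 0, x, -x)        # symbolic model of an absolute-value function

# Ask Z3 to prove the postcondition holds for every integer input.
# Unlike testing, this covers all inputs at once; prints "proved".
prove(abs_x >= 0)
```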

Sign up to our monthly Early Talent and AI newsletter to keep up with the latest news around AI and the future of work from trusted sources, as well as generative AI tools and resources that help you do more with less.
